4 Sums of Random Variables
Many of the variables dealt with in physics can be expressed as a sum of other variables; often the components of the sum are statistically independent. This section deals with determining the behavior of the sum from the properties of the individual components. First, simple averages are used to find the mean and variance of the sum of statistically independent elements. Next, functions of a random variable are used to examine the probability density of the sum of dependent as well as independent elements. Finally, the Central Limit Theorem is introduced and discussed.

Consider a sum $S_n$ of $n$ statistically independent random variables $x_i$. The probability densities for the $n$ individual variables need not be identical.

$$S_n \equiv \sum_{i=1}^{n} x_i$$

$$p_{x_i}(\zeta) \text{ does not necessarily equal } p_{x_j}(\zeta) \text{ for } i \neq j$$

$$\langle x_i \rangle \equiv E_i \qquad \langle (x_i - E_i)^2 \rangle \equiv \sigma_i^2$$

The mean value of the sum is the sum of the individual means:

$$\langle S_n \rangle = \int (x_1 + x_2 + \cdots + x_n)\, \underbrace{p(x_1, x_2, \ldots, x_n)}_{p_1(x_1)\, p_2(x_2) \cdots p_n(x_n)}\, dx_1\, dx_2 \cdots dx_n$$

$$= \sum_{i=1}^{n} \underbrace{\left[\int x_i\, p_i(x_i)\, dx_i\right]}_{E_i} \prod_{j \neq i} \underbrace{\left[\int p_j(x_j)\, dx_j\right]}_{1} = \sum_{i=1}^{n} E_i$$

The variance of the sum is the sum of the individual variances:

$$\mathrm{Var}(S_n) = \langle (S_n - \langle S_n \rangle)^2 \rangle = \langle (x_1 - E_1 + x_2 - E_2 + \cdots + x_n - E_n)^2 \rangle$$
The right hand side of the above expression is a sum of $n^2$ terms, each quadratic in $x$, having the following form:

$$\langle (x_i - E_i)(x_j - E_j) \rangle = \langle x_i x_j \rangle - \langle x_i \rangle E_j - \langle x_j \rangle E_i + E_i E_j = \langle x_i x_j \rangle - E_i E_j = \begin{cases} \langle x_i^2 \rangle - E_i^2 = \sigma_i^2 & \text{if } i = j \\ 0 & \text{if } i \neq j \end{cases}$$

Therefore,

$$\mathrm{Var}(S_n) = \sum_{i=1}^{n} \sigma_i^2.$$

In the special case where all the individual $x_i$'s have the same probability density the above results reduce to

$$\langle S_n \rangle = nE, \qquad \mathrm{Var}(S_n) = n\sigma^2.$$

Physically, the width of $p(S_n)$ grows as $\sqrt{n}$ while the mean of $S_n$ grows as $n$. The probability density becomes more concentrated about the mean as $n$ increases. If $n$ is very large, the distribution develops a sharp narrow peak at the location of the mean.
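Both of these addition rules are easy to check by simulation. The following minimal sketch (an addition to these notes; the three densities and all parameter values are illustrative) sums a million independent draws of three different variables:

```python
import numpy as np

# Monte Carlo check that means and variances of S.I. variables add.
# The three summands deliberately have different densities; all
# parameter choices are illustrative.
rng = np.random.default_rng(0)
N = 1_000_000

x1 = rng.uniform(-1.0, 1.0, N)    # E1 = 0, sigma1^2 = 1/3
x2 = rng.exponential(2.0, N)      # E2 = 2, sigma2^2 = 4
x3 = rng.normal(5.0, 3.0, N)      # E3 = 5, sigma3^2 = 9

S = x1 + x2 + x3
print(S.mean())   # ~ 0 + 2 + 5   = 7
print(S.var())    # ~ 1/3 + 4 + 9 = 13.33
```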
Now turn to the problem of finding the entire probability density, $p_S(\alpha)$, for the sum of two arbitrary random variables $x$ and $y$ represented by the joint density $p_{x,y}(\zeta,\eta)$. This is a straightforward application of functions of a random variable.

$$P_S(\alpha) = \int_{-\infty}^{\infty} d\zeta \int_{-\infty}^{\alpha-\zeta} d\eta\; p_{x,y}(\zeta,\eta)$$

$$p_S(\alpha) = \frac{d}{d\alpha} P_S(\alpha) = \int_{-\infty}^{\infty} d\zeta\; p_{x,y}(\zeta, \alpha - \zeta)$$

In this expression $p_S(\alpha)$ is given as a single integral over the joint density $p(x,y)$. The result is valid even when $x$ and $y$ are statistically dependent. The result simplifies somewhat when $x$ and $y$ are statistically independent, allowing the joint density to be factored into the product of two individual densities.

$$p_S(\alpha) = \int_{-\infty}^{\infty} d\zeta\; p_x(\zeta)\, p_y(\alpha - \zeta) \qquad \text{if } x \text{ and } y \text{ are S.I. (statistically independent)}$$

The integral operation involved in the last expression is known as convolution. The probability density for the sum of two S.I. random variables is the convolution of the densities of the two individual variables. Convolution appears in other disciplines as well. The transient output of a linear system (such as an electronic circuit) is the convolution of the impulse response of the system and the input pulse shape. The recorded output of a linear spectrometer (such as a grating spectrograph) is the convolution of the instrumental profile and the intrinsic spectrum being measured. The convolution of two functions $p(x)$ and $q(x)$ is designated by $\otimes$ and is defined as

$$p \otimes q \equiv \int_{-\infty}^{\infty} p(z)\, q(x-z)\, dz.$$

It is easy to show from this expression that convolution is commutative, that is, the result does not depend on which function is taken first:

$$a \otimes b = b \otimes a.$$

Convolution is also distributive,

$$a \otimes (b + c) = a \otimes b + a \otimes c,$$
and associative,

$$a \otimes (b \otimes c) = (a \otimes b) \otimes c.$$

Perhaps the best way to visualize the convolution integral is in terms of a series of successive steps.

1. FLIP $q(z)$ ABOUT THE ORIGIN OF ITS ARGUMENT TO FORM THE MIRROR IMAGE $q(-z)$.
2. SHIFT $q(-z)$ TO THE RIGHT BY AN AMOUNT $x$ TO FORM $q(-z+x)$.
3. MULTIPLY $q(x-z)$ BY $p(z)$.
4. INTEGRATE THE PRODUCT AND PLOT THE RESULT.
5. REPEAT 1 THROUGH 4 FOR DIFFERENT VALUES OF $x$.

If $p(z)$ and $q(z)$ are each represented by a single analytic function for all $z$, one can work out the convolution integral mathematically with little thought given to the above graphic procedure. On the other hand, when the input functions are zero in some regions or are only piecewise continuous, then the graphic procedure is useful in setting the proper limits of integration.

Example: Sum of Two Uniformly Distributed Variables

Given: $x$ and $y$ are two statistically independent random variables, uniformly distributed in the regions $|x| \leq a$ and $|y| \leq b$.

Problem: Find the probability density $p(S)$ for the sum $S = x + y$.

Solution: The form of the integral will depend on the value of $S$. Three separate regions need to be considered. Assume as indicated above that $b < a$.

$$0 < S < a - b$$
$$p(S) = \int_{S-b}^{S+b} \left(\frac{1}{2a}\right)\left(\frac{1}{2b}\right) dx = \frac{1}{2a}$$

$$a - b < S < a + b$$

$$p(S) = \int_{S-b}^{a} \left(\frac{1}{2a}\right)\left(\frac{1}{2b}\right) dx = \frac{a + b - S}{(2a)(2b)}$$

$$a + b < S$$

$$p(S) = 0 \qquad \text{since there is no overlap}$$

The symmetry of the problem demands that the result be even in $S$. The final result is plotted below.
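As a numerical cross-check (an addition to these notes), the same trapezoidal result can be obtained by discretizing the two rectangular densities and convolving them; `np.convolve` carries out the flip-shift-multiply-integrate steps listed above on a grid. The values of $a$, $b$, and the grid spacing are illustrative.

```python
import numpy as np

# Discretized convolution of two uniform densities, p_x on [-a, a] and
# p_y on [-b, b], with b < a as in the example.
a, b, dz = 2.0, 1.0, 0.001
z = np.arange(-(a + b), a + b, dz)     # grid covering the support of the sum

px = np.where(np.abs(z) <= a, 1 / (2 * a), 0.0)
py = np.where(np.abs(z) <= b, 1 / (2 * b), 0.0)

# np.convolve forms the discrete sum; the factor dz turns it into the integral.
pS = np.convolve(px, py, mode="same") * dz

print(pS[np.argmin(np.abs(z))])        # S = 0: flat top, ~ 1/(2a) = 0.25
print(pS[np.argmin(np.abs(z - a))])    # S = a: on the ramp, ~ 1/(4a) = 0.125
```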
When the widths of the two probability densities are the same, the density for the sum becomes triangular.

...

Example: Classical Intensity of Unpolarized Light

Given: An unpolarized beam of thermal light can be viewed as two statistically independent beams of equal average intensity, polarized at right angles to each other. The total intensity $I_T$ is the sum of the intensities of each of the two polarized components, $I_1$ and $I_2$.

Problem: Find the probability density for $I_T$ using the result from a previous example that

$$p(I_1) = \frac{1}{\sqrt{\pi \alpha I_1}} \exp[-I_1/\alpha] \qquad I_1 > 0$$
$$= 0 \qquad I_1 < 0$$

where $\alpha \equiv 2\langle I_1 \rangle$.

Solution: Since the light is thermal and unpolarized, the intensity in each of the two polarization directions has the same density: $p_1(\zeta) = p_2(\zeta)$.
$$p(I_T) = \int_{-\infty}^{\infty} p_1(I_1)\, p_2(I_T - I_1)\, dI_1$$

$$= \int_0^{I_T} \frac{\exp[-I_1/\alpha]}{\sqrt{\pi \alpha I_1}}\, \frac{\exp[-(I_T - I_1)/\alpha]}{\sqrt{\pi \alpha (I_T - I_1)}}\, dI_1$$

$$= \frac{1}{\pi \alpha} \exp[-I_T/\alpha] \int_0^{I_T} \left[I_1 (I_T - I_1)\right]^{-1/2}\, dI_1$$

$$= \frac{1}{\pi \alpha} \exp[-I_T/\alpha] \underbrace{\int_{-I_T/2}^{I_T/2} \left(\frac{I_T^2}{4} - x^2\right)^{-1/2} dx}_{\pi} \qquad \text{using } x \equiv I_1 - \frac{I_T}{2}$$

$$= \frac{1}{\alpha} \exp[-I_T/\alpha] \qquad I_T \geq 0$$
$$= 0 \qquad I_T < 0$$
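A quick simulation (an addition to the notes) confirms the exponential result: each polarized intensity can be generated as the square of a zero-mean Gaussian field amplitude of variance $\alpha/2$, which has exactly the single-beam density quoted above. The value of $\alpha$ is illustrative.

```python
import numpy as np

# Monte Carlo sketch of the unpolarized-light example. Squaring a
# zero-mean Gaussian amplitude with variance alpha/2 reproduces the
# density p(I) = exp(-I/alpha)/sqrt(pi*alpha*I).
rng = np.random.default_rng(1)
alpha, N = 2.0, 1_000_000

I1 = rng.normal(0.0, np.sqrt(alpha / 2), N) ** 2
I2 = rng.normal(0.0, np.sqrt(alpha / 2), N) ** 2
IT = I1 + I2                            # total intensity

print(I1.mean())                        # ~ alpha/2, since alpha = 2<I1>
print(IT.mean())                        # ~ alpha: the mean of the exponential

# Compare the histogram of I_T with (1/alpha) exp(-I_T/alpha):
hist, edges = np.histogram(IT, bins=50, range=(0, 5 * alpha), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
print(np.max(np.abs(hist - np.exp(-centers / alpha) / alpha)))  # small
```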
In the two examples just considered the variables being summed had probability densities of the same functional form, rectangles for instance. Certainly this will not always be the case; for example one might be interested in the sum of two variables, one having a uniform density and the other having a Gaussian density. Yet even when the input variables do have probability densities of identical form, the density of the sum will in general have a different functional form than that of the input variables. The two previous examples illustrate this point.

There are three special cases, however, where the functional form of the density is preserved during the addition of statistically independent, similarly distributed variables.

The sum of two Gaussian variables is Gaussian. This is shown in an example below. Simply knowing that the result is Gaussian, though, is enough to allow one to predict the parameters of the density. Recall that a Gaussian is completely specified by its mean and variance. The fact that the means and variances add when summing S.I. random variables means that the mean of the resultant Gaussian will be the sum of the input means and the variance of the sum will be the sum of the input variances.

The sum of two S.I. Poisson random variables is also Poisson. Here again, knowing that the result is Poisson allows one to determine the parameters in the sum density. Recall that a Poisson density is completely specified by one number, the mean, and the mean of the sum is the sum of the means.
A Lorentzian (or Cauchy) density depends on two parameters and is given by

$$p(x) = \frac{\Gamma/\pi}{(x - m)^2 + \Gamma^2}$$

$$m = \langle x \rangle \qquad \Gamma = \text{half width at half height}$$

The sum of two S.I. Lorentzian random variables is Lorentzian with $m_S = m_1 + m_2$ and $\Gamma_S = \Gamma_1 + \Gamma_2$. Note that the widths add! Doesn't this contradict the rule about the variances adding, which implies that the widths should grow as the square root of the sum of the squared widths? This apparent contradiction can be resolved by trying to calculate the variance for the Lorentzian density.

Example: Sum of Two S.I. Gaussian Random Variables

Given:

$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma_x} \exp[-(x - E_x)^2/2\sigma_x^2]$$

$$p(y) = \frac{1}{\sqrt{2\pi}\,\sigma_y} \exp[-(y - E_y)^2/2\sigma_y^2]$$

Problem: Find $p(S)$ where $S \equiv x + y$.

Solution:

$$p(S) = \int_{-\infty}^{\infty} p_x(x)\, p_y(S - x)\, dx = (2\pi \sigma_x \sigma_y)^{-1} \int_{-\infty}^{\infty} \exp[-(x - E_x)^2/2\sigma_x^2 - (S - x - E_y)^2/2\sigma_y^2]\, dx$$

The argument of the exponent in the integral is quadratic in the integration variable $x$. It can be simplified by changing variables, $\zeta \equiv x - E_x$, and defining a temporary quantity $a \equiv S - (E_x + E_y)$, a constant as far as the
integration is concerned. The argument of the exponent can then be written and processed as follows.

$$-\frac{1}{2}\left[\frac{\zeta^2}{\sigma_x^2} + \frac{(a - \zeta)^2}{\sigma_y^2}\right] =$$

$$-\frac{1}{2}\left[\left(\frac{1}{\sigma_x^2} + \frac{1}{\sigma_y^2}\right)\zeta^2 - \frac{2a}{\sigma_y^2}\,\zeta + \frac{a^2}{\sigma_y^2}\right] =$$

$$-\frac{1}{2}\left[\left\{\frac{\sigma_x^2 + \sigma_y^2}{\sigma_x^2 \sigma_y^2}\,\zeta^2 - \frac{2a}{\sigma_y^2}\,\zeta + \frac{\sigma_x^2 a^2}{\sigma_y^2(\sigma_x^2 + \sigma_y^2)}\right\} + \left\{\frac{a^2}{\sigma_y^2} - \frac{\sigma_x^2 a^2}{\sigma_y^2(\sigma_x^2 + \sigma_y^2)}\right\}\right] =$$

$$-\frac{1}{2}\left(\frac{\sigma_x^2 \sigma_y^2}{\sigma_x^2 + \sigma_y^2}\right)^{-1}\left\{\zeta - \frac{\sigma_x^2 a}{\sigma_x^2 + \sigma_y^2}\right\}^2 - \frac{1}{2}\left\{\frac{a^2}{\sigma_x^2 + \sigma_y^2}\right\}$$

The second of the two terms in $\{\,\}$ brackets in the last line produces a constant which factors out of the integral. The integral over $\zeta$ coming about from the first term in $\{\,\}$ brackets is just the normalization integral for a Gaussian and has the value

$$\left(2\pi \frac{\sigma_x^2 \sigma_y^2}{\sigma_x^2 + \sigma_y^2}\right)^{1/2}.$$

Bringing all of these terms together gives the probability density for the sum.

$$p(S) = (2\pi \sigma_x \sigma_y)^{-1} \exp\left[-\frac{a^2}{2(\sigma_x^2 + \sigma_y^2)}\right]\left(2\pi \frac{\sigma_x^2 \sigma_y^2}{\sigma_x^2 + \sigma_y^2}\right)^{1/2}$$

$$= \frac{1}{\sqrt{2\pi(\sigma_x^2 + \sigma_y^2)}} \exp\left[-\frac{(S - (E_x + E_y))^2}{2(\sigma_x^2 + \sigma_y^2)}\right]$$

The result is a Gaussian with mean $E_x + E_y$ and variance $\sigma_x^2 + \sigma_y^2$.

...

The result above applies to the sum of two statistically independent Gaussian variables. It is a remarkable and important fact that the sum of two statistically dependent Gaussian variables is also a Gaussian if the two input variables have a bivariate joint probability density (the special joint density introduced in an earlier example). Those with a masochistic streak and a flair for integration can prove this by applying the expression for the density of the sum of two dependent variables to the bivariate Gaussian joint density.
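Short of doing that integral, the claim can at least be checked by simulation. The sketch below (an addition to the notes; all parameter values are arbitrary) draws correlated pairs from a bivariate Gaussian. Note that for dependent variables the variance of the sum picks up a covariance term, $\sigma_x^2 + \sigma_y^2 + 2\rho\sigma_x\sigma_y$, rather than the simple sum that held in the S.I. case.

```python
import numpy as np

# Sum of two statistically dependent (bivariate) Gaussian variables.
# rho is the correlation coefficient; all values are illustrative.
rng = np.random.default_rng(2)
Ex, Ey, sx, sy, rho = 1.0, -2.0, 1.5, 0.5, 0.8
cov = [[sx**2, rho * sx * sy],
       [rho * sx * sy, sy**2]]

x, y = rng.multivariate_normal([Ex, Ey], cov, size=1_000_000).T
S = x + y

print(S.mean())   # ~ Ex + Ey = -1.0
print(S.var())    # ~ sx^2 + sy^2 + 2*rho*sx*sy = 3.7, not just sx^2 + sy^2
# A Gaussian has zero skewness and zero excess kurtosis:
u = (S - S.mean()) / S.std()
print(np.mean(u**3), np.mean(u**4) - 3)   # both ~ 0
```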
Mathematical Digression: Convolution and Fourier Transforms

After the algebra of the last example it is reasonable to ask if there is some easier way to compute convolutions. There is, and it involves Fourier transforms. This digression is intended for interested students who are familiar with Fourier techniques.

Let the Fourier transform of $f(x)$ be $F(k)$ and use the notation $f \leftrightarrow F$ to indicate a transform pair.

$$F(k) \equiv \int_{-\infty}^{\infty} e^{ikx} f(x)\, dx \qquad f(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-ikx} F(k)\, dk$$

(The Fourier transform of a probability density is called the characteristic function or moment generating function and is quite useful for more advanced topics in probability theory.)

One can show that the Fourier transform of the convolution of two functions is the product of the individual Fourier transforms.

$$\text{If } a \leftrightarrow A \text{ and } b \leftrightarrow B, \text{ then } a \otimes b \leftrightarrow AB.$$

To find the probability density for the sum of two statistically independent random variables one can multiply the Fourier transforms of the individual probability densities and take the inverse transform of the product. As a practical application and example one can show that sums of Gaussians are Gaussian, sums of Poisson variables are Poisson, and sums of Lorentzians are Lorentzian.
$$\text{Gaussian:} \qquad \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left[-\frac{(x - E)^2}{2\sigma^2}\right] \;\leftrightarrow\; \exp\left[ikE - \frac{\sigma^2 k^2}{2}\right]$$

$$\text{Poisson:} \qquad \sum_{n=0}^{\infty} \frac{\lambda^n e^{-\lambda}}{n!}\, \delta(x - n) \;\leftrightarrow\; \exp[\lambda(e^{ik} - 1)]$$

$$\text{Lorentzian:} \qquad \frac{\Gamma/\pi}{(x - m)^2 + \Gamma^2} \;\leftrightarrow\; \exp[imk - |k|\Gamma]$$

Each of these three transforms $F(k)$ has the property that a product of similar functions preserves the functional form; only the parameters change. For example if $p_G(S)$ represents the sum of two Gaussians, then

$$p_G(S) \;\leftrightarrow\; \exp\left[ikE_1 - \frac{\sigma_1^2 k^2}{2}\right] \exp\left[ikE_2 - \frac{\sigma_2^2 k^2}{2}\right] = \exp\left[ik(E_1 + E_2) - \frac{(\sigma_1^2 + \sigma_2^2)k^2}{2}\right]$$

The last expression is the Fourier transform of a Gaussian of mean $E_1 + E_2$ and variance $\sigma_1^2 + \sigma_2^2$. Check to see that the transforms for the Poisson and Lorentzian behave in a similar manner.

...
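On a discrete grid the same manipulation can be done with FFTs. The sketch below (an addition; the two Gaussians and the grid are illustrative) uses `scipy.signal.fftconvolve`, which multiplies the transforms of the two densities and inverts the product, i.e. exactly the $a \otimes b \leftrightarrow AB$ relation above.

```python
import numpy as np
from scipy.signal import fftconvolve

# Density of a sum via the transform method, on an illustrative grid.
dx = 0.01
x = np.arange(-20.0, 20.0, dx)

def gauss(x, E, sig):
    return np.exp(-(x - E)**2 / (2 * sig**2)) / (np.sqrt(2 * np.pi) * sig)

p1 = gauss(x, 1.0, 0.8)
p2 = gauss(x, -2.0, 1.5)

# fftconvolve = invert(FFT(p1) * FFT(p2)); dx converts the sum to an integral.
pS = fftconvolve(p1, p2, mode="same") * dx

# The product of Gaussian transforms is Gaussian: mean 1 + (-2) = -1,
# sigma = sqrt(0.8^2 + 1.5^2) = 1.7.
expected = gauss(x, -1.0, np.sqrt(0.8**2 + 1.5**2))
print(np.max(np.abs(pS - expected)))    # ~ 0
```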
Simple averaging revealed that for statistically independent random variables the mean of the sum is the sum of the means and the variance is the sum of the variances. The Central Limit Theorem gives information about the functional form of the resulting probability density.

Central Limit Theorem (non-mathematical form)

Let $S_n$ be the sum of $n$ statistically independent, identically distributed random variables of mean $E_x$ and variance $\sigma_x^2$ (which must be finite). For large $n$, $p(S_n)$ can often be well represented by a Gaussian with mean $nE_x$ and variance $n\sigma_x^2$.

$$p(S_n) \approx \frac{1}{\sqrt{2\pi n}\,\sigma_x} \exp\left[-\frac{(S_n - nE_x)^2}{2n\sigma_x^2}\right]$$

The vague wording of the theorem as presented here shows immediately why it is qualified by the adjective non-mathematical. Yet this is perhaps the most useful form of the theorem for our purposes. Consider the specific wording used here.

$\sigma_x^2$ (which must be finite): This excludes cases such as Lorentzian random variables where the density falls off so slowly with $x$ that the variance is infinite. Of course, for Lorentzians one does not need a Central Limit Theorem, for it should be clear from earlier discussion in this section that the sum of $n$ statistically independent Lorentzian random variables will be Lorentzian for any value of $n$.

For large $n$: How large? Mathematicians would be more specific. It will be shown below that for practical purposes convergence with increasing $n$ is often quite rapid.
can often be well represented by: Certainly not the words of a mathematician! However, the vagueness is deliberate. The Gaussian result presented is a continuous function of the sum variable $S_n$. But a sum of $n$ discrete random variables will always be discrete, no matter how large $n$ might be. A continuous function is not going to converge in any mathematical sense to a sum of $\delta$ functions. What actually happens in such cases is that the envelope of the comb of $\delta$ functions approaches a Gaussian shape. A coarse-grained average of $p(S_n)$ (averaging over a number of discrete values at a time) would approach the continuous density presented in the theorem.

So much for the limitations of the Central Limit Theorem; now consider some extensions.

The Gaussian can be a good practical approximation for modest values of $n$. The examples which follow illustrate this point.

The Central Limit Theorem may work even if the individual members of the sum are not identically distributed. A prerequisite is that no individual member should dominate the sum. The necessity for this restriction can be visualized easily. Consider adding 6 variables, each uniformly distributed between -1 and 1, and a 7th variable, uniform between -100 and 100. The density for the sum will look rectangular with rounded edges near ±100.

The requirement that the variables be statistically independent may even be waived in some cases, particularly when $n$ is very large.

The Central Limit Theorem plays a very important role in science and engineering. Knowingly or unknowingly, it is often the basis for the assumption that some particular physical random variable has a Gaussian probability density.
Example: Sum of Four Uniformly Distributed Variables

Consider a random variable $x$ which is uniformly distributed between $-a$ and $a$. The sum of only four such variables has a density which is reasonably close to the CLT approximation. An earlier example found the density for a sum of two of these variables to be triangular. Since convolution is associative, the density of the sum of four can be found by convolving the triangular density with itself. The result is as follows.

$$p(S) = \frac{1}{96a^4}\left(32a^3 - 12a S^2 + 3|S|^3\right) \qquad 0 \leq |S| \leq 2a$$
$$= \frac{1}{96a^4}\left(4a - |S|\right)^3 \qquad 2a \leq |S| \leq 4a$$
$$= 0 \qquad 4a \leq |S|$$

The probability density for the sum is plotted below for $a = 1$. In this case the mean of the single random variable is zero and its variance is 1/3. The CLT approximation has also been plotted, a Gaussian with zero mean and variance equal to 4/3. Even though the density for a single variable is discontinuous, the density for the sum is quite smooth. The good match to the Gaussian in this example when the number of variables in the sum, $n$, is as small as 4 is due to the fact that the single density is symmetric and falls off rapidly with increasing argument. But note that the true density for the sum is zero beyond $|S| = 4$ while the Gaussian is finite. One could say that the percentage error is infinite beyond $|S| = 4$. Well, you can't have everything. This does illustrate, however, the problems encountered in formulating a mathematical statement of the convergence of the Central Limit Theorem.
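In place of the plot, a short tabulation (an addition to the notes, with $a = 1$) evaluates the exact piecewise density against the CLT Gaussian, including the region beyond $|S| = 4$ where the exact density vanishes.

```python
import numpy as np

# Exact density for the sum of four uniforms on [-a, a] versus the CLT
# approximation (zero mean, variance n*sigma^2 = 4 * a^2/3), for a = 1.
a = 1.0

def p_exact(S):
    s = abs(S)
    if s <= 2 * a:
        return (32 * a**3 - 12 * a * s**2 + 3 * s**3) / (96 * a**4)
    if s <= 4 * a:
        return (4 * a - s)**3 / (96 * a**4)
    return 0.0

def p_clt(S):
    var = 4 * a**2 / 3
    return np.exp(-S**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

for S in (0.0, 1.0, 2.0, 3.0, 4.0, 4.5):
    print(f"S = {S:3.1f}   exact = {p_exact(S):.4f}   CLT = {p_clt(S):.4f}")
```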
Example: Sums of Exponentially Distributed Variables

A single-sided exponential is an example of a density that is both discontinuous and asymmetric. It falls off more gradually with increasing argument than does a rectangular or Gaussian density. In this case convergence to the CLT approximation requires a larger number of elements, $n$, in the sum.

$$p(x) = \frac{1}{a} e^{-x/a} \qquad x > 0$$
$$= 0 \qquad x < 0$$

Here the mean is $a$ and the variance is $a^2$. This density is sufficiently simple that the repeated convolution necessary to find the density for the sum of $n$ identical statistically independent variables can be carried out analytically.

$$p(S) = \left[\frac{S^{n-1}}{(n-1)!\, a^n}\right] e^{-S/a} \qquad S \geq 0$$
$$= 0 \qquad S < 0$$

This result is shown below for $a = 1$ and several values of $n$. The Gaussian approximation is also shown for comparison.
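Since the plots themselves are on a page not reproduced here, a brief tabulation (an addition, with $a = 1$) compares this $n$-fold convolution with the CLT Gaussian at the mean $S = na$; the asymmetric exponential converges noticeably more slowly than the symmetric uniform case.

```python
import numpy as np
from math import factorial

# n-fold convolution of the single-sided exponential (a = 1) versus the
# CLT Gaussian of mean n*a and variance n*a^2, evaluated at S = n*a.
a = 1.0

def p_sum(S, n):
    return S**(n - 1) * np.exp(-S / a) / (factorial(n - 1) * a**n)

def p_clt(S, n):
    return np.exp(-(S - n * a)**2 / (2 * n * a**2)) / np.sqrt(2 * np.pi * n * a**2)

for n in (2, 4, 16, 64):
    S = n * a
    print(f"n = {n:3d}   exact = {p_sum(S, n):.4f}   CLT = {p_clt(S, n):.4f}")
```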
Example: Poisson Density for Large $\langle n \rangle$

The Central Limit Theorem applied to sums of discrete random variables states that the envelope of the train of $\delta$ functions representing $p(S)$ should approach a Gaussian. Here the Poisson density is used to illustrate this behavior. The graphs on the following page show the Poisson density for several values of the mean number of events $\langle n \rangle$. (Recall that for a Poisson density, the variance is equal to the mean.) The dots indicate the coefficients of the delta functions that would be located at each positive integer value of $S$. The dashed lines show the CLT approximation

$$p(S) = \frac{1}{\sqrt{2\pi \langle n \rangle}} \exp\left[-\frac{(S - \langle n \rangle)^2}{2\langle n \rangle}\right].$$

Since the sum of Poisson random variables also has a Poisson density, the case for which $\langle n \rangle = 20$ could be the result of adding 10 variables of mean 2 or of adding 5 variables of mean 4. For the Poisson, one can see that it is the mean of the final density, not the number of statistically independent elements in the sum, which determines the convergence to the CLT result. The envelope of the density of a single Poisson random variable will approach a Gaussian if the mean is large enough!

The fact that a single Poisson density approaches a Gaussian envelope for large mean is not typical of discrete densities. Consider the Bose-Einstein (or geometric) density that was treated in a homework problem. For that density the probability of obtaining a given integer decreases monotonically with increasing value, no matter how large the mean of the density. In contrast, the density for the sum of 10 identical Bose-Einstein random variables, each with mean 2, would not be of the Bose-Einstein form and, following the CLT, would exhibit a peak in its envelope near $S = 20$.
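The plots referred to above are also easy to regenerate numerically. The sketch below (an addition, using `scipy.stats`) tabulates the Poisson weights against the Gaussian CLT envelope for a few values of the mean.

```python
import numpy as np
from scipy.stats import norm, poisson

# Poisson weights versus the Gaussian CLT envelope for several means;
# the maximum pointwise gap shrinks as the mean grows.
for mean in (2, 20, 200):
    n = np.arange(0, int(mean + 5 * np.sqrt(mean)) + 1)
    pmf = poisson.pmf(n, mean)
    envelope = norm.pdf(n, loc=mean, scale=np.sqrt(mean))
    print(mean, np.max(np.abs(pmf - envelope)))
```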
MIT OpenCourseWare, Statistical Physics, Spring 2013. For information about citing these materials or our Terms of Use, visit:
