# 1 Prior Probability and Posterior Probability


Math 541: Statistical Theory II
Bayesian Approach to Parameter Estimation
Lecturer: Songfeng Zheng

Consider now a problem of statistical inference in which observations are to be taken from a distribution for which the pdf or probability mass function is $f(x|\theta)$, where $\theta$ is a parameter having an unknown value. It is assumed that the unknown value of $\theta$ must lie in a specified parameter space $\Theta$.

In many problems, before any observations from $f(x|\theta)$ are available, the experimenter or statistician can summarize his previous information and knowledge about where in $\Theta$ the value of $\theta$ is likely to lie by constructing a probability distribution for $\theta$ on the set $\Theta$. In other words, before any experimental data have been collected or observed, the experimenter's past experience and knowledge lead him to believe that $\theta$ is more likely to lie in certain regions of $\Theta$ than in others. We assume that the relative likelihoods of the different regions can be expressed in terms of a probability distribution on $\Theta$. This distribution is called the prior distribution of $\theta$, because it represents the relative likelihood that the true value of $\theta$ lies in each of various regions of $\Theta$ prior to observing any values from $f(x|\theta)$.

The concept of a prior distribution is controversial in statistics, and the controversy is closely related to the controversy over the meaning of probability. Some statisticians believe that a prior distribution can be chosen for the parameter $\theta$ in every statistical problem. They believe that this distribution is a subjective probability distribution, in the sense that it represents an individual experimenter's information and subjective beliefs about where the true value of $\theta$ is likely to lie.
They also believe, however, that a prior distribution is no different from any other probability distribution used in statistics, and that all the rules of probability theory apply to it. This school of statisticians is said to adhere to the philosophy of Bayesian statistics.

Other statisticians believe that in many problems it is not appropriate to speak of a probability distribution of $\theta$, because the true value of $\theta$ is not a random variable at all, but rather a fixed number whose value happens to be unknown to the experimenter. These statisticians believe that a prior distribution can be assigned to a parameter $\theta$ only when there is extensive previous information about the relevant frequencies with which $\theta$ has taken each of its possible values in the past, so that two different scientists would agree on the correct prior distribution to be used. This group of statisticians is known as frequentists.

Both groups agree that whenever a meaningful prior distribution can be chosen, the theory and methods described in this section are applicable and useful. The method of moments and the method of maximum likelihood are not based on a prior distribution, while the method introduced in this lecture is.

Suppose now that the $n$ random variables $X_1, \ldots, X_n$ form a random sample from a distribution for which the pdf or probability mass function is $f(x|\theta)$. Suppose also that the value of the parameter $\theta$ is unknown, and that the prior pdf or prior probability mass function is $p(\theta)$. For simplicity, we shall assume that the parameter space $\Theta$ is either an interval of the real line or the entire real line, and that $p(\theta)$ is a prior pdf on $\Theta$, rather than a prior probability mass function. However, the discussion given here is easily adapted to a problem in which $f(x|\theta)$ or $p(\theta)$ is a probability mass function.

Since the random variables $X_1, \ldots, X_n$ form a random sample from a distribution with pdf $f(x|\theta)$, their joint pdf $f_n(x_1, \ldots, x_n|\theta)$ is given by

$$f_n(x_1, \ldots, x_n|\theta) = f(x_1|\theta) \cdots f(x_n|\theta).$$

If we use the vector notation $\mathbf{x} = (x_1, \ldots, x_n)$, this joint pdf can be written simply as $f_n(\mathbf{x}|\theta)$. Since the parameter $\theta$ is itself now regarded as having a distribution with pdf $p(\theta)$, the joint pdf $f_n(\mathbf{x}|\theta)$ can be regarded as the conditional joint pdf of $X_1, \ldots, X_n$ for a given value of $\theta$. Multiplying this conditional joint pdf by the prior pdf $p(\theta)$ gives the $(n+1)$-dimensional joint pdf of $X_1, \ldots, X_n$ and $\theta$ in the form $f_n(\mathbf{x}|\theta)p(\theta)$. The marginal joint pdf of $X_1, \ldots, X_n$ can now be obtained by integrating this joint pdf over all values of $\theta$.
Therefore, the $n$-dimensional marginal joint pdf $g_n(\mathbf{x})$ of $X_1, \ldots, X_n$ can be written in the form

$$g_n(\mathbf{x}) = \int_\Theta f_n(\mathbf{x}|\theta)p(\theta)\,d\theta.$$

Furthermore, the conditional pdf of $\theta$ given that $X_1 = x_1, \ldots, X_n = x_n$, which will be denoted by $p(\theta|\mathbf{x})$, must be the joint pdf of $X_1, \ldots, X_n$ and $\theta$ divided by the marginal joint pdf of $X_1, \ldots, X_n$. Thus, we have

$$p(\theta|\mathbf{x}) = \frac{f_n(\mathbf{x}|\theta)p(\theta)}{g_n(\mathbf{x})} \quad \text{for } \theta \in \Theta. \tag{1}$$

The probability distribution over $\Theta$ represented by the conditional pdf in Eqn. 1 is called the posterior distribution of $\theta$, because it is the distribution of $\theta$ after the values of $X_1, \ldots, X_n$ have been observed. We may say that the prior pdf $p(\theta)$ represents the relative likelihood, before the values of $X_1, \ldots, X_n$ have been observed, that the true value of $\theta$ lies in each of various regions of $\Theta$. The prior distribution reflects our prior belief about the parameter $\theta$.

On the other hand, the posterior pdf $p(\theta|\mathbf{x})$ represents the relative likelihood after the values $X_1 = x_1, \ldots, X_n = x_n$ have been observed. The posterior distribution reflects our belief about the parameter $\theta$ after we have observed the data. Usually, the prior distribution differs from the posterior distribution, which means that the data can change our belief about the parameter.

The denominator on the right-hand side of Eqn. 1 is simply the integral of the numerator over all possible values of $\theta$. Although the value of this integral depends on the observed values $x_1, \ldots, x_n$, it does not depend on $\theta$, the quantity of interest; it can therefore be treated as a constant when the right-hand side of Eqn. 1 is regarded as a pdf of $\theta$. We may therefore replace Eqn. 1 with the following relation:

$$p(\theta|\mathbf{x}) \propto f_n(\mathbf{x}|\theta)p(\theta). \tag{2}$$

The proportionality symbol indicates that the left side equals the right side except possibly for a constant factor, the value of which may depend on the observed values $x_1, \ldots, x_n$ but does not depend on $\theta$. The appropriate constant factor that establishes the equality of the two sides in relation (2) can be determined at any time by using the fact that $\int_\Theta p(\theta|\mathbf{x})\,d\theta = 1$, because $p(\theta|\mathbf{x})$ is a pdf of $\theta$.

Recall that when the joint pdf or joint probability mass function $f_n(\mathbf{x}|\theta)$ of the observations in a random sample is regarded as a function of $\theta$ for given values of $x_1, \ldots, x_n$, it is called the likelihood function. In this terminology, relation (2) states that the posterior pdf of $\theta$ is proportional to the product of the likelihood function and the prior pdf of $\theta$, i.e.

$$\text{Posterior} \propto \text{Likelihood} \times \text{Prior}.$$

By using the proportionality relation (2), it is often possible to determine the posterior pdf of $\theta$ without explicitly performing the integration.
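When no closed form is recognizable, the normalizing constant can always be recovered numerically. The following sketch (hypothetical data, and a deliberately non-conjugate triangular prior chosen only for illustration) approximates the posterior of a Bernoulli parameter on a grid by normalizing the product Likelihood × Prior:

```python
import numpy as np

# Numerically normalizing "Posterior ∝ Likelihood × Prior" on a grid.
# Data and prior below are hypothetical, chosen only to illustrate relation (2).

theta = np.linspace(0.001, 0.999, 999)      # grid over Theta = (0, 1)
dtheta = theta[1] - theta[0]

# Triangular prior on (0, 1) peaked at 1/2: p(theta) = 4*theta for theta <= 1/2,
# and 4*(1 - theta) otherwise; it integrates to 1.
prior = np.where(theta <= 0.5, 4 * theta, 4 * (1 - theta))

# Hypothetical Bernoulli sample: y ones out of n observations.
data = np.array([1, 0, 1, 1, 0, 1, 1, 1])
y, n = data.sum(), data.size

# Likelihood function: f_n(x|theta) = theta^y * (1 - theta)^(n - y).
likelihood = theta**y * (1 - theta)**(n - y)

# Relation (2): the posterior is proportional to likelihood * prior; the
# constant factor is recovered by making the grid approximation sum to 1.
unnorm = likelihood * prior
posterior = unnorm / (unnorm.sum() * dtheta)

print(posterior.sum() * dtheta)             # ~1.0: a proper pdf of theta
print(theta[np.argmax(posterior)])          # mode lies between prior peak and y/n
```

The same grid recipe works for any one-dimensional prior and likelihood; conjugate families, introduced next, are simply the cases where this normalization can be done in closed form.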
If we can recognize the right-hand side of relation (2) as being equal to one of the standard pdfs, except possibly for a constant factor, then we can easily determine the appropriate factor that converts the right side of (2) into a proper pdf of $\theta$. We will see some examples in the following.

# 2 Conjugate Prior Distributions

Certain prior distributions are particularly convenient for use with samples from certain other distributions. For example, suppose that a random sample is taken from a Bernoulli distribution for which the value of the parameter $\theta$ is unknown. If the prior distribution is a beta distribution, then for any possible set of observed sample values, the posterior distribution of $\theta$ will again be a beta distribution.

Theorem 1: Suppose that $X_1, \ldots, X_n$ form a random sample from a Bernoulli distribution for which the value of the parameter $\theta$ is unknown ($0 < \theta < 1$). Suppose also that the prior distribution of $\theta$ is a beta distribution with given parameters $\alpha$ and $\beta$ ($\alpha > 0$ and $\beta > 0$). Then the posterior distribution of $\theta$ given that $X_i = x_i$ ($i = 1, \ldots, n$) is a beta distribution with parameters $\alpha + \sum_{i=1}^n x_i$ and $\beta + n - \sum_{i=1}^n x_i$.

Proof: Let $y = \sum_{i=1}^n x_i$. Then the likelihood function, i.e., the joint pdf $f_n(\mathbf{x}|\theta)$ of $X_1, \ldots, X_n$, is given by

$$f_n(\mathbf{x}|\theta) = f(x_1|\theta) \cdots f(x_n|\theta) = \theta^{x_1}(1-\theta)^{1-x_1} \cdots \theta^{x_n}(1-\theta)^{1-x_n} = \theta^y(1-\theta)^{n-y}.$$

The prior pdf is

$$p(\theta) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\theta^{\alpha-1}(1-\theta)^{\beta-1} \propto \theta^{\alpha-1}(1-\theta)^{\beta-1}.$$

Since the posterior pdf $p(\theta|\mathbf{x})$ is proportional to the product $f_n(\mathbf{x}|\theta)p(\theta)$, it follows that

$$p(\theta|\mathbf{x}) \propto \theta^{y+\alpha-1}(1-\theta)^{\beta+n-y-1} \quad \text{for } 0 < \theta < 1.$$

The right side of this relation can be recognized as being, except for a constant factor, the pdf of a beta distribution with parameters $\alpha+y$ and $\beta+n-y$. Therefore the posterior distribution of $\theta$ is as specified in the theorem, and the constant is easily determined as $\frac{\Gamma(\alpha+\beta+n)}{\Gamma(\alpha+y)\Gamma(\beta+n-y)}$.

An implication of Theorem 1 is the following. Suppose $\theta$, the proportion of 1's in a Bernoulli distribution, is unknown, and we have $n$ random variables sampled from this distribution, i.e., a sequence of 0's and 1's. Suppose also that the prior distribution of $\theta$ is a beta distribution with parameters $\alpha$ and $\beta$. If the first random variable is a 1, the posterior distribution of $\theta$ will be a beta distribution with parameters $\alpha+1$ and $\beta$; if the first random variable is a 0, the posterior distribution will be a beta distribution with parameters $\alpha$ and $\beta+1$. The process can be continued in this way: each time a random variable is obtained, the current posterior distribution of $\theta$ is changed to a new beta distribution in which the value of either $\alpha$ or $\beta$ is increased by one unit. The value of $\alpha$ is increased by one unit if a 1 is observed, and the value of $\beta$ is increased by one unit if a 0 is observed.
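The sequential updating described above can be sketched in a few lines (the prior parameters and the 0/1 sample below are hypothetical). Updating one observation at a time produces exactly the same posterior parameters as a single batch update with $\alpha + \sum x_i$ and $\beta + n - \sum x_i$:

```python
# Sequential vs. batch Beta-Bernoulli updating (Theorem 1).

def sequential_update(alpha, beta, data):
    """Apply the one-observation-at-a-time update: a 1 increments alpha, a 0 increments beta."""
    for x in data:
        if x == 1:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

def batch_update(alpha, beta, data):
    """Posterior parameters from Theorem 1: alpha + sum(x_i), beta + n - sum(x_i)."""
    y, n = sum(data), len(data)
    return alpha + y, beta + (n - y)

# Hypothetical prior and sample.
alpha0, beta0 = 2.0, 2.0
data = [1, 0, 1, 1, 0, 1, 1, 1]

print(sequential_update(alpha0, beta0, data))  # (8.0, 4.0)
print(batch_update(alpha0, beta0, data))       # (8.0, 4.0)
```

That the two routes agree is exactly what "closed under sampling" means for the beta family.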
The family of beta distributions is called a conjugate family of prior distributions for samples from a Bernoulli distribution. If the prior distribution of $\theta$ is a beta distribution, then the posterior distribution at each stage of sampling will also be a beta distribution, regardless of the observed values in the sample. It is also said that the family of beta distributions is closed under sampling from a Bernoulli distribution.

When samples are taken from a Poisson distribution, the family of gamma distributions is a conjugate family of prior distributions. This relationship is shown in the next theorem.

Theorem 2: Suppose that $X_1, \ldots, X_n$ form a random sample from a Poisson distribution for which the value of the parameter $\theta$ is unknown ($\theta > 0$). Suppose also that the prior distribution of $\theta$ is a gamma distribution with given parameters $\alpha$ and $\beta$ ($\alpha > 0$ and $\beta > 0$). Then the posterior distribution of $\theta$ given that $X_i = x_i$ ($i = 1, \ldots, n$) is a gamma distribution with parameters $\alpha + \sum_{i=1}^n x_i$ and $\beta + n$.

Proof: Let $y = \sum_{i=1}^n x_i$. Then the likelihood function, i.e., the joint pdf $f_n(\mathbf{x}|\theta)$ of $X_1, \ldots, X_n$, is given by

$$f_n(\mathbf{x}|\theta) = f(x_1|\theta)\cdots f(x_n|\theta) = \frac{\theta^{x_1}e^{-\theta}}{x_1!} \cdots \frac{\theta^{x_n}e^{-\theta}}{x_n!} = \frac{\theta^y e^{-n\theta}}{x_1!\cdots x_n!}.$$

The prior pdf of $\theta$ is

$$p(\theta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\theta^{\alpha-1}e^{-\beta\theta} \quad \text{for } \theta > 0.$$

Since the posterior pdf $p(\theta|\mathbf{x})$ is proportional to the product $f_n(\mathbf{x}|\theta)p(\theta)$, it follows that

$$p(\theta|\mathbf{x}) \propto \theta^{y+\alpha-1}e^{-(\beta+n)\theta} \quad \text{for } \theta > 0.$$

The right side of this relation can be recognized as being, except for a constant factor, the pdf of a gamma distribution with parameters $\alpha+y$ and $\beta+n$. Therefore the posterior distribution of $\theta$ is as specified in the theorem, and the constant is easily determined as $\frac{(\beta+n)^{\alpha+y}}{\Gamma(\alpha+y)}$.

When samples are taken from a normal distribution $N(\mu, \sigma^2)$, we will discuss two cases: $\mu$ unknown with $\sigma^2$ known, and $\sigma^2$ unknown with $\mu$ known. In these two cases the conjugate prior distributions are different, as the following two theorems show. For the convenience of our discussion, we reparameterize the normal distribution, replacing $\sigma^2$ by $\xi = 1/\sigma^2$; $\xi$ is called the precision, because the variance $\sigma^2$ measures the spread of the normal distribution, and its reciprocal measures how concentrated the distribution is. Under this parameterization, the normal distribution can be expressed as

$$f(x|\mu, \xi) = \left(\frac{\xi}{2\pi}\right)^{1/2}\exp\left(-\frac{1}{2}\xi(x-\mu)^2\right). \tag{3}$$

Theorem 3: Suppose that $X_1, \ldots, X_n$ form a random sample from a normal distribution for which the value of the mean $\mu$ is unknown ($-\infty < \mu < \infty$), and the value of the precision $\xi$ ($\xi > 0$) is known to be $\xi_0$. Suppose also that the prior distribution of $\mu$ is a normal distribution with given parameters $\mu_p$ and $\xi_p$. Then the posterior distribution of $\mu$ given that $X_i = x_i$ ($i = 1, \ldots, n$) is a normal distribution with mean $\mu_1$ and precision $\xi_1$, where

$$\mu_1 = \frac{n\xi_0\bar{x} + \mu_p\xi_p}{n\xi_0 + \xi_p} \quad \text{and} \quad \xi_1 = n\xi_0 + \xi_p.$$

Proof: Given the observed data $X_1 = x_1, \ldots, X_n = x_n$ and the prior distribution, we can write the posterior distribution of $\mu$ as

$$f(\mu|\mathbf{x}) \propto f_n(\mathbf{x}|\mu)\,p(\mu) = \left(\frac{\xi_0}{2\pi}\right)^{n/2}\exp\left(-\frac{\xi_0}{2}\sum_{i=1}^n(x_i-\mu)^2\right)\left(\frac{\xi_p}{2\pi}\right)^{1/2}\exp\left(-\frac{\xi_p}{2}(\mu-\mu_p)^2\right)$$

$$\propto \exp\left(-\frac{1}{2}\left[\xi_0\sum_{i=1}^n(x_i-\mu)^2 + \xi_p(\mu-\mu_p)^2\right]\right).$$

Here we have exhibited only the terms in the posterior density that depend on $\mu$; the last expression above shows the shape of the posterior density as a function of $\mu$. The posterior density itself is proportional to this expression, with a proportionality constant determined by the requirement that the posterior density integrate to 1.

We will now manipulate the expression above to cast it in a form from which we can recognize that the posterior density is normal. We can write

$$(x_i-\mu)^2 = (x_i-\bar{x})^2 + (\bar{x}-\mu)^2 + 2(x_i-\bar{x})(\bar{x}-\mu).$$

Summing both sides over $i$, it is easy to verify that

$$\sum_{i=1}^n(x_i-\mu)^2 = \sum_{i=1}^n(x_i-\bar{x})^2 + n(\mu-\bar{x})^2.$$

Therefore, we can absorb more terms that do not depend on $\mu$ into the constant of proportionality, and find

$$f(\mu|\mathbf{x}) \propto \exp\left(-\frac{1}{2}\left[\xi_0\sum_{i=1}^n(x_i-\bar{x})^2 + n\xi_0(\mu-\bar{x})^2 + \xi_p(\mu-\mu_p)^2\right]\right) \propto \exp\left(-\frac{1}{2}\left[n\xi_0(\mu-\bar{x})^2 + \xi_p(\mu-\mu_p)^2\right]\right).$$

Now observe that this is of the form $\exp(-Q(\mu)/2)$, where $Q(\mu)$ is a quadratic polynomial. We can write $Q(\mu)$ as

$$Q(\mu) = n\xi_0(\mu-\bar{x})^2 + \xi_p(\mu-\mu_p)^2 = n\xi_0\mu^2 - 2n\xi_0\bar{x}\mu + n\xi_0\bar{x}^2 + \xi_p\mu^2 - 2\xi_p\mu_p\mu + \xi_p\mu_p^2$$

$$= (n\xi_0+\xi_p)\left[\mu^2 - 2\,\frac{n\xi_0\bar{x}+\mu_p\xi_p}{n\xi_0+\xi_p}\,\mu\right] + C_1 = (n\xi_0+\xi_p)\left[\mu - \frac{n\xi_0\bar{x}+\mu_p\xi_p}{n\xi_0+\xi_p}\right]^2 + C_2,$$

where $C_1$ and $C_2$ are constant expressions that do not depend on $\mu$. Finally, we can write the posterior distribution of $\mu$ as

$$f(\mu|\mathbf{x}) \propto \exp(-Q(\mu)/2) \propto \exp\left(-\frac{n\xi_0+\xi_p}{2}\left[\mu - \frac{n\xi_0\bar{x}+\mu_p\xi_p}{n\xi_0+\xi_p}\right]^2\right).$$

Comparing the above expression to the normal density function (Eqn. 3), we recognize that the posterior distribution is a normal distribution with mean $\mu_1$ and precision $\xi_1$, where

$$\mu_1 = \frac{n\xi_0\bar{x} + \mu_p\xi_p}{n\xi_0+\xi_p} \quad \text{and} \quad \xi_1 = n\xi_0+\xi_p.$$

The mean $\mu_1$ of the posterior distribution of $\mu$ can be rewritten as

$$\mu_1 = \frac{n\xi_0}{n\xi_0+\xi_p}\,\bar{x} + \frac{\xi_p}{n\xi_0+\xi_p}\,\mu_p,$$

which shows that $\mu_1$ is a weighted average of the mean $\mu_p$ of the prior distribution and the sample mean $\bar{x}$. Furthermore, the relative weight given to $\bar{x}$ satisfies the following three properties:

i) For fixed values of $\xi_0$ and $\xi_p$, the larger the sample size $n$, the greater the relative weight given to $\bar{x}$;

ii) For fixed values of $\xi_p$ and $n$, the smaller the precision $\xi_0$ of each observation in the sample, the smaller the relative weight given to $\bar{x}$;

iii) For fixed values of $\xi_0$ and $n$, the smaller the precision $\xi_p$ of the prior distribution, the larger the relative weight given to $\bar{x}$.

Moreover, the precision $\xi_1$ of the posterior distribution of $\mu$ depends on the number $n$ of observations that have been taken, but not on the magnitudes of the observed values. Suppose, therefore, that a random sample of $n$ observations is to be taken from a normal distribution for which the value of the mean $\mu$ is unknown and the value of the variance (or precision) is known, and suppose that the prior distribution of $\mu$ is a specified normal distribution. Then, before any observations have been taken, we can calculate the value that the variance of the posterior distribution will have. However, the value of the mean $\mu_1$ of the posterior distribution will depend on the observed values obtained in the sample.

Now, let us consider the case $\xi_p \ll n\xi_0$, which would occur if $n$ were sufficiently large or if $\xi_p$ were small (as for a very flat prior).
In this case, the posterior mean would be $\mu_1 \approx \bar{x}$, which is the maximum likelihood estimate. The result in Theorem 3 can also be expressed in terms of means and variances:

$$\mu_1 = \frac{\sigma_0^2\mu_p + n\sigma_p^2\bar{x}}{\sigma_0^2 + n\sigma_p^2} \quad \text{and} \quad \sigma_1^2 = \frac{\sigma_0^2\sigma_p^2}{\sigma_0^2 + n\sigma_p^2},$$

where $\sigma_0^2$ and $\sigma_p^2$ are the known variances of the sampling distribution and the prior distribution, respectively.
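The update of Theorem 3 can be sketched directly from the formulas (the sample, known precision, and prior parameters below are hypothetical). The posterior mean lands between the prior mean and the sample mean, and the precision form agrees with the variance form:

```python
import numpy as np

# Posterior for a normal mean with known precision (Theorem 3).
# All numbers below are hypothetical, chosen only for illustration.

def posterior_mean_precision(x, xi0, mu_p, xi_p):
    """Return (mu_1, xi_1): mu_1 = (n*xi0*xbar + mu_p*xi_p)/(n*xi0 + xi_p), xi_1 = n*xi0 + xi_p."""
    n, xbar = len(x), np.mean(x)
    xi1 = n * xi0 + xi_p
    mu1 = (n * xi0 * xbar + mu_p * xi_p) / xi1
    return mu1, xi1

x = np.array([4.2, 5.1, 4.8, 5.5, 4.9])   # sample (xbar = 4.9)
xi0 = 1.0 / 0.5**2                         # known precision (variance 0.25)
mu_p, xi_p = 3.0, 1.0 / 2.0**2             # prior mean 3, prior variance 4

mu1, xi1 = posterior_mean_precision(x, xi0, mu_p, xi_p)

# mu_1 is a weighted average of mu_p and xbar, so it lies between them,
# and with n*xi0 >> xi_p it sits close to xbar (near the MLE).
print(mu1, 1.0 / xi1)                      # posterior mean, posterior variance

# Equivalent variance form from the text:
s0sq, spsq, n, xbar = 0.25, 4.0, len(x), np.mean(x)
mu1_var_form = (s0sq * mu_p + n * spsq * xbar) / (s0sq + n * spsq)
print(np.isclose(mu1, mu1_var_form))       # True
```

Note that `xi1` here depends only on `n`, `xi0`, and `xi_p`, never on the data values, exactly as the discussion above observes.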

Theorem 4: Suppose that $X_1, \ldots, X_n$ form a random sample from a normal distribution for which the value of the mean $\mu$ is known to be $\mu_0$, and the value of the precision $\xi$ ($\xi > 0$) is unknown. Suppose also that the prior distribution of $\xi$ is a gamma distribution with given parameters $\alpha$ and $\beta$. Then the posterior distribution of $\xi$ given that $X_i = x_i$ ($i = 1, \ldots, n$) is a gamma distribution with parameters $\alpha_1$ and $\beta_1$, where

$$\alpha_1 = \alpha + \frac{n}{2} \quad \text{and} \quad \beta_1 = \beta + \frac{1}{2}\sum_{i=1}^n(x_i-\mu_0)^2.$$

Proof: Given the observed data $X_1 = x_1, \ldots, X_n = x_n$ and the prior distribution, we can write the posterior distribution of $\xi$ as

$$f(\xi|\mathbf{x}) \propto f_n(\mathbf{x}|\xi)\,p(\xi) = \left(\frac{\xi}{2\pi}\right)^{n/2}\exp\left(-\frac{\xi}{2}\sum_{i=1}^n(x_i-\mu_0)^2\right)\frac{\beta^\alpha}{\Gamma(\alpha)}\xi^{\alpha-1}e^{-\beta\xi}$$

$$\propto \xi^{\alpha+\frac{n}{2}-1}\exp\left(-\xi\left[\frac{1}{2}\sum_{i=1}^n(x_i-\mu_0)^2 + \beta\right]\right).$$

The right side of this relation can be recognized as being, except for a constant factor, the pdf of a gamma distribution with parameters $\alpha + \frac{n}{2}$ and $\beta + \frac{1}{2}\sum_{i=1}^n(x_i-\mu_0)^2$. Therefore the posterior distribution of $\xi$ is as specified in the theorem.

Now, summarizing the above examples, we can give the formal definition of conjugate priors: if the prior distribution belongs to a family $G$ and, conditional on the parameters from $G$, the data have a distribution $H$, then $G$ is said to be conjugate to $H$ if the posterior is in the family $G$. Conjugate priors are used for mathematical convenience (the required integrations can be done in closed form), and they can assume a variety of shapes as the parameters of the prior are varied.

# 3 Bayes Estimator

## 3.1 Review of Estimators

Suppose we have an i.i.d. random sample $X_1, \ldots, X_n$ taken from a distribution $f(x|\theta)$ with the unknown parameter(s) $\theta$. Suppose also that the value of $\theta$ must lie in a given interval $\Theta$ of the real line. We also assume that the value of $\theta$ must be estimated from the observed values in the sample.

An estimator of the parameter $\theta$, based on the random variables $X_1, \ldots, X_n$, is a real-valued function $\delta(X_1, \ldots, X_n)$ which specifies the estimated value of $\theta$ for each possible set of values of $X_1, \ldots, X_n$. In other words, if the observed values of $X_1, \ldots, X_n$ turn out to be $x_1, \ldots, x_n$, then the estimated value of $\theta$ is $\delta(x_1, \ldots, x_n)$. Since the value of $\theta$ must belong to the interval $\Theta$, it is reasonable to require that every possible value of an estimator $\delta(X_1, \ldots, X_n)$ must also belong to $\Theta$.

It will often be convenient to use vector notation and to let $\mathbf{X} = (X_1, \ldots, X_n)$ and $\mathbf{x} = (x_1, \ldots, x_n)$. In this notation, an estimator is a function $\delta(\mathbf{X})$ of the random vector $\mathbf{X}$, and an estimate is a specific value $\delta(\mathbf{x})$.

## 3.2 Loss Functions and Bayes Estimators

The foremost requirement of a good estimator $\delta$ is that it yield an estimate of $\theta$ which is close to the actual value of $\theta$. In other words, a good estimator is one for which it is highly probable that the error $\delta(\mathbf{X}) - \theta$ will be close to 0. We assume that for each possible value of $\theta \in \Theta$ and each possible estimate $a \in \Theta$, there is a number $L(\theta, a)$ which measures the loss or cost to the statistician when the true value of the parameter is $\theta$ and his estimate is $a$. Typically, the greater the distance between $a$ and $\theta$, the larger the value of $L(\theta, a)$.

As before, let $p(\theta)$ denote the prior pdf of $\theta$ on the interval $\Theta$, and consider a problem in which the statistician must estimate the value of $\theta$ without being able to observe the values in a random sample. If the statistician chooses a particular estimate $a$, then his expected loss is

$$E[L(\theta, a)] = \int_\Theta L(\theta, a)p(\theta)\,d\theta. \tag{4}$$

This is the average loss over all possible values of the true value of $\theta$. We will assume that the statistician wishes to choose an estimate $a$ for which the expected loss in Eqn. 4 is a minimum. In any estimation problem, a function $L$ for which the expectation $E[L(\theta, a)]$ is to be minimized is called a loss function.
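The minimization in Eqn. 4 can be carried out numerically on a grid. In the sketch below (hypothetical prior, with $L(\theta, a) = (\theta-a)^2$ as the loss function), the grid search recovers a known fact: for squared error loss, the expected loss is minimized by the mean of the distribution of $\theta$.

```python
import numpy as np

# Choosing the estimate a that minimizes the expected loss of Eqn. 4,
# for a hypothetical prior on Theta = (0, 1), under squared error loss.

theta = np.linspace(0, 1, 1001)
dtheta = theta[1] - theta[0]

prior = theta**2 * (1 - theta)              # Beta(3, 2)-shaped, unnormalized
prior = prior / (prior.sum() * dtheta)      # normalize to a proper pdf

candidates = np.linspace(0, 1, 1001)
# expected_loss[j] approximates the integral of (theta - a_j)^2 * p(theta) dtheta
expected_loss = np.array(
    [((theta - a)**2 * prior).sum() * dtheta for a in candidates]
)

best_a = candidates[np.argmin(expected_loss)]
prior_mean = (theta * prior).sum() * dtheta
print(best_a, prior_mean)                   # both close to 0.6, the Beta(3, 2) mean
```

Replacing `prior` with a posterior pdf in the same loop gives the data-dependent minimization of Eqn. 5 below, which is exactly how a Bayes estimate is defined.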
Suppose now that the statistician can observe the value $\mathbf{x}$ of the random vector $\mathbf{X}$ before estimating $\theta$, and let $p(\theta|\mathbf{x})$ denote the posterior pdf of $\theta$ on $\Theta$. For any estimate $a$ the statistician might use, the expected loss is now

$$E[L(\theta, a)|\mathbf{x}] = \int_\Theta L(\theta, a)p(\theta|\mathbf{x})\,d\theta. \tag{5}$$

The statistician should now choose an estimate $a$ for which the expectation in Eqn. 5 is a minimum. For each possible value $\mathbf{x}$ of the random vector $\mathbf{X}$, let $\delta^*(\mathbf{x})$ denote a value of the estimate $a$ for which the expected loss in Eqn. 5 is a minimum. Then the function $\delta^*(\mathbf{X})$ whose values are specified in this way is an estimator of $\theta$, called a Bayes estimator of $\theta$. In other words, for each possible value $\mathbf{x}$ of $\mathbf{X}$, the value $\delta^*(\mathbf{x})$ of the Bayes estimator is chosen so that

$$E[L(\theta, \delta^*(\mathbf{x}))|\mathbf{x}] = \min_{a \in \Theta} E[L(\theta, a)|\mathbf{x}]. \tag{6}$$

It should be emphasized that the form of the Bayes estimator depends on both the loss function used in the problem and the prior distribution assigned to $\theta$.

The Squared Error Loss Function: By far the most commonly used loss function in estimation problems is the squared error loss function, defined as $L(\theta, a) = (\theta - a)^2$. When the squared error loss function is used, the Bayes estimate $\delta^*(\mathbf{x})$ for any observed value of $\mathbf{x}$ will be the value of $a$ for which the expectation $E[(\theta-a)^2|\mathbf{x}]$ is a minimum. It can be shown that for any given probability distribution of $\theta$, the expectation of $(\theta-a)^2$ is a minimum when $a$ is chosen to be the mean of the distribution of $\theta$. Therefore, when the expectation of $(\theta-a)^2$ is calculated with respect to the posterior distribution of $\theta$, it is a minimum when $a$ equals the mean $E(\theta|\mathbf{x})$ of the posterior distribution. This means that when the squared error loss function is used, the Bayes estimator is $\delta^*(\mathbf{X}) = E(\theta|\mathbf{X})$.

Example 1: Estimating the Parameter of a Bernoulli Distribution. Suppose that a random sample $X_1, \ldots, X_n$ is to be taken from a Bernoulli distribution for which the value of the parameter $\theta$ is unknown and must be estimated, and suppose that the prior distribution of $\theta$ is a beta distribution with given parameters $\alpha$ and $\beta$ ($\alpha > 0$ and $\beta > 0$). Suppose also that the squared error loss function is used. We shall determine the Bayes estimator of $\theta$. For any observed values $x_1, \ldots, x_n$, let $y = \sum_{i=1}^n x_i$. Then it follows from Theorem 1 that the posterior distribution of $\theta$ is a beta distribution with parameters $\alpha+y$ and $\beta+n-y$, so the mean of this posterior distribution of $\theta$ is $(\alpha+y)/(\alpha+\beta+n)$.
The Bayes estimate $\delta^*(\mathbf{x})$ equals this value for any observed vector $\mathbf{x}$, i.e.

$$\delta^*(\mathbf{x}) = \frac{\alpha + \sum_{i=1}^n x_i}{\alpha + \beta + n}.$$

Example 2: Estimating the Mean of a Normal Distribution. Suppose that a random sample $X_1, \ldots, X_n$ is to be taken from a normal distribution for which the value of the mean $\mu$ is unknown and the value of the variance $\sigma^2$ is known, and suppose that the prior distribution of $\mu$ is a normal distribution with given mean $\mu_p$ and variance $\sigma_p^2$. Suppose also that the squared error loss function is used. We shall determine the Bayes estimator of $\mu$. It follows from Theorem 3 that for any observed values $x_1, \ldots, x_n$, the posterior distribution of $\mu$ is a normal distribution with mean

$$\mu_1 = \frac{\sigma^2\mu_p + n\sigma_p^2\bar{x}}{\sigma^2 + n\sigma_p^2}.$$

Therefore the Bayes estimator $\delta^*(\mathbf{X})$ is

$$\delta^*(\mathbf{X}) = \frac{\sigma^2\mu_p + n\sigma_p^2\bar{X}}{\sigma^2 + n\sigma_p^2}.$$

The Absolute Error Loss Function: Another commonly used loss function in estimation problems is the absolute error loss function, defined as $L(\theta, a) = |\theta - a|$. For any observed value of $\mathbf{x}$, the Bayes estimate $\delta^*(\mathbf{x})$ is now the value of $a$ for which the expectation $E(|\theta - a|\,|\,\mathbf{x})$ is a minimum. It can be shown that for any given probability distribution of $\theta$, the expectation of $|\theta - a|$ is a minimum when $a$ is chosen to be a median of the distribution of $\theta$. Therefore, when the expectation of $|\theta - a|$ is calculated with respect to the posterior distribution of $\theta$, it is a minimum when $a$ equals a median of the posterior distribution of $\theta$. It follows that when the absolute error loss function is used, the value of the Bayes estimator $\delta^*(\mathbf{X})$ is always a median of the posterior distribution of $\theta$.

# 4 Exercises

Exercise 1: Let $p(\theta)$ be a pdf defined as follows, for constants $\alpha > 0$ and $\beta > 0$:

$$p(\theta) = \begin{cases} \dfrac{\beta^\alpha}{\Gamma(\alpha)}\theta^{-(\alpha+1)}e^{-\beta/\theta}, & \text{for } \theta > 0 \\ 0, & \text{for } \theta \le 0 \end{cases}$$

(a) Verify that $p(\theta)$ is actually a pdf by verifying that $\int_0^\infty p(\theta)\,d\theta = 1$.

(b) Consider the family of probability distributions that can be represented by a pdf $p(\theta)$ of the given form for all possible pairs of constants $\alpha > 0$ and $\beta > 0$. Show that this family is a conjugate family of priors for samples from a normal distribution with a known value of the mean $\mu$ and an unknown value of the variance $\theta$.

Exercise 2: The Pareto distribution with parameters $x_0 > 0$ and $\alpha > 0$ is defined by the following pdf:

$$f(\theta|x_0, \alpha) = \begin{cases} \dfrac{\alpha x_0^\alpha}{\theta^{\alpha+1}}, & \text{for } \theta > x_0 \\ 0, & \text{for } \theta \le x_0 \end{cases}$$

Show that the family of Pareto distributions is a conjugate family of prior distributions for a sample $X_1, \ldots, X_n$ from a uniform distribution on the interval $(0, \theta)$, where the value of the parameter $\theta$ is unknown.
Also find the posterior distribution (determine the proportionality constants in the distribution).

Exercise 3: Suppose that $X_1, \ldots, X_n$ form a random sample from a distribution for which the pdf $f(x|\theta)$ is as follows:

$$f(x|\theta) = \begin{cases} \theta x^{\theta-1}, & \text{for } 0 < x < 1 \\ 0, & \text{otherwise} \end{cases}$$

Suppose also that the value of the parameter $\theta$ is unknown ($\theta > 0$) and that the prior distribution of $\theta$ is a gamma distribution with parameters $\alpha$ and $\beta$ ($\alpha > 0$ and $\beta > 0$). Determine the mean and the variance of the posterior distribution of $\theta$.

Exercise 4: Suppose that a random sample of size $n$ is taken from a Bernoulli distribution for which the value of the parameter $\theta$ is unknown, and that the prior distribution of $\theta$ is a beta distribution with mean $\mu_0$. Show that the mean of the posterior distribution of $\theta$ will be a weighted average of the form $\gamma_n \bar{X}_n + (1-\gamma_n)\mu_0$, and show that $\lim_{n\to\infty}\gamma_n = 1$.

Exercise 5: Suppose that a random sample of size $n$ is taken from a Poisson distribution for which the value of the mean $\theta$ is unknown, and that the prior distribution of $\theta$ is a gamma distribution with mean $\mu_0$. Show that the mean of the posterior distribution of $\theta$ will be a weighted average of the form $\gamma_n \bar{X}_n + (1-\gamma_n)\mu_0$, and show that $\lim_{n\to\infty}\gamma_n = 1$.

Exercise 6: Suppose that $X_1, \ldots, X_n$ form a random sample from a uniform distribution on the interval $(0, \theta)$, where the value of the parameter $\theta$ is unknown ($\theta > 0$). Suppose also that the prior distribution of $\theta$ is a Pareto distribution with parameters $x_0$ and $\alpha$ ($x_0 > 0$ and $\alpha > 0$). Suppose, finally, that the value of $\theta$ is to be estimated using the squared error loss function. Determine the Bayes estimator of $\theta$.


### Math 151. Rumbos Spring 2014 1. Solutions to Assignment #22

Math 151. Rumbos Spring 2014 1 Solutions to Assignment #22 1. An experiment consists of rolling a die 81 times and computing the average of the numbers on the top face of the die. Estimate the probability

### Markov Chain Monte Carlo Simulation Made Simple

Markov Chain Monte Carlo Simulation Made Simple Alastair Smith Department of Politics New York University April2,2003 1 Markov Chain Monte Carlo (MCMC) simualtion is a powerful technique to perform numerical

### Lab 8: Introduction to WinBUGS

40.656 Lab 8 008 Lab 8: Introduction to WinBUGS Goals:. Introduce the concepts of Bayesian data analysis.. Learn the basic syntax of WinBUGS. 3. Learn the basics of using WinBUGS in a simple example. Next

### Nonparametric adaptive age replacement with a one-cycle criterion

Nonparametric adaptive age replacement with a one-cycle criterion P. Coolen-Schrijner, F.P.A. Coolen Department of Mathematical Sciences University of Durham, Durham, DH1 3LE, UK e-mail: Pauline.Schrijner@durham.ac.uk

### The Dirichlet-Multinomial and Dirichlet-Categorical models for Bayesian inference

The Dirichlet-Multinomial and Dirichlet-Categorical models for Bayesian inference Stephen Tu tu.stephenl@gmail.com 1 Introduction This document collects in one place various results for both the Dirichlet-multinomial

### Statistical Machine Learning from Data

Samy Bengio Statistical Machine Learning from Data 1 Statistical Machine Learning from Data Gaussian Mixture Models Samy Bengio IDIAP Research Institute, Martigny, Switzerland, and Ecole Polytechnique

### Beta Distribution. Paul Johnson <pauljohn@ku.edu> and Matt Beverlin <mbeverlin@ku.edu> June 10, 2013

Beta Distribution Paul Johnson and Matt Beverlin June 10, 2013 1 Description How likely is it that the Communist Party will win the net elections in Russia? In my view,

### Models for Count Data With Overdispersion

Models for Count Data With Overdispersion Germán Rodríguez November 6, 2013 Abstract This addendum to the WWS 509 notes covers extra-poisson variation and the negative binomial model, with brief appearances

### Bayesian Updating with Discrete Priors Class 11, 18.05, Spring 2014 Jeremy Orloff and Jonathan Bloom

1 Learning Goals Bayesian Updating with Discrete Priors Class 11, 18.05, Spring 2014 Jeremy Orloff and Jonathan Bloom 1. Be able to apply Bayes theorem to compute probabilities. 2. Be able to identify

### Notes on the Negative Binomial Distribution

Notes on the Negative Binomial Distribution John D. Cook October 28, 2009 Abstract These notes give several properties of the negative binomial distribution. 1. Parameterizations 2. The connection between

### 7 Hypothesis testing - one sample tests

7 Hypothesis testing - one sample tests 7.1 Introduction Definition 7.1 A hypothesis is a statement about a population parameter. Example A hypothesis might be that the mean age of students taking MAS113X

### Dongfeng Li. Autumn 2010

Autumn 2010 Chapter Contents Some statistics background; ; Comparing means and proportions; variance. Students should master the basic concepts, descriptive statistics measures and graphs, basic hypothesis

### Summary of Probability

Summary of Probability Mathematical Physics I Rules of Probability The probability of an event is called P(A), which is a positive number less than or equal to 1. The total probability for all possible

### Chapter 9: Hypothesis Testing Sections

Chapter 9: Hypothesis Testing Sections 9.1 Problems of Testing Hypotheses Skip: 9.2 Testing Simple Hypotheses Skip: 9.3 Uniformly Most Powerful Tests Skip: 9.4 Two-Sided Alternatives 9.6 Comparing the

### Joint Exam 1/P Sample Exam 1

Joint Exam 1/P Sample Exam 1 Take this practice exam under strict exam conditions: Set a timer for 3 hours; Do not stop the timer for restroom breaks; Do not look at your notes. If you believe a question

### Summary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4)

Summary of Formulas and Concepts Descriptive Statistics (Ch. 1-4) Definitions Population: The complete set of numerical information on a particular quantity in which an investigator is interested. We assume

### Numerical Summarization of Data OPRE 6301

Numerical Summarization of Data OPRE 6301 Motivation... In the previous session, we used graphical techniques to describe data. For example: While this histogram provides useful insight, other interesting

### CHAPTER 6: Continuous Uniform Distribution: 6.1. Definition: The density function of the continuous random variable X on the interval [A, B] is.

Some Continuous Probability Distributions CHAPTER 6: Continuous Uniform Distribution: 6. Definition: The density function of the continuous random variable X on the interval [A, B] is B A A x B f(x; A,

### E3: PROBABILITY AND STATISTICS lecture notes

E3: PROBABILITY AND STATISTICS lecture notes 2 Contents 1 PROBABILITY THEORY 7 1.1 Experiments and random events............................ 7 1.2 Certain event. Impossible event............................

### For a partition B 1,..., B n, where B i B j = for i. A = (A B 1 ) (A B 2 ),..., (A B n ) and thus. P (A) = P (A B i ) = P (A B i )P (B i )

Probability Review 15.075 Cynthia Rudin A probability space, defined by Kolmogorov (1903-1987) consists of: A set of outcomes S, e.g., for the roll of a die, S = {1, 2, 3, 4, 5, 6}, 1 1 2 1 6 for the roll

### STT315 Chapter 4 Random Variables & Probability Distributions KM. Chapter 4.5, 6, 8 Probability Distributions for Continuous Random Variables

Chapter 4.5, 6, 8 Probability Distributions for Continuous Random Variables Discrete vs. continuous random variables Examples of continuous distributions o Uniform o Exponential o Normal Recall: A random

### Aggregate Loss Models

Aggregate Loss Models Chapter 9 Stat 477 - Loss Models Chapter 9 (Stat 477) Aggregate Loss Models Brian Hartman - BYU 1 / 22 Objectives Objectives Individual risk model Collective risk model Computing

### THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE. Alexander Barvinok

THE NUMBER OF GRAPHS AND A RANDOM GRAPH WITH A GIVEN DEGREE SEQUENCE Alexer Barvinok Papers are available at http://www.math.lsa.umich.edu/ barvinok/papers.html This is a joint work with J.A. Hartigan

### 6. Distribution and Quantile Functions

Virtual Laboratories > 2. Distributions > 1 2 3 4 5 6 7 8 6. Distribution and Quantile Functions As usual, our starting point is a random experiment with probability measure P on an underlying sample spac

### An Introduction to Basic Statistics and Probability

An Introduction to Basic Statistics and Probability Shenek Heyward NCSU An Introduction to Basic Statistics and Probability p. 1/4 Outline Basic probability concepts Conditional probability Discrete Random

### Statistics 100A Homework 8 Solutions

Part : Chapter 7 Statistics A Homework 8 Solutions Ryan Rosario. A player throws a fair die and simultaneously flips a fair coin. If the coin lands heads, then she wins twice, and if tails, the one-half

### Standard Deviation Estimator

CSS.com Chapter 905 Standard Deviation Estimator Introduction Even though it is not of primary interest, an estimate of the standard deviation (SD) is needed when calculating the power or sample size of

### Econometrics Simple Linear Regression

Econometrics Simple Linear Regression Burcu Eke UC3M Linear equations with one variable Recall what a linear equation is: y = b 0 + b 1 x is a linear equation with one variable, or equivalently, a straight

### Chapter 3 Joint Distributions

Chapter 3 Joint Distributions 3.6 Functions of Jointly Distributed Random Variables Discrete Random Variables: Let f(x, y) denote the joint pdf of random variables X and Y with A denoting the two-dimensional

### Chapter 9: Hypothesis Testing Sections

Chapter 9: Hypothesis Testing Sections Skip: 9.2 Testing Simple Hypotheses Skip: 9.3 Uniformly Most Powerful Tests Skip: 9.4 Two-Sided Alternatives 9.5 The t Test 9.6 Comparing the Means of Two Normal

### Gaussian Conjugate Prior Cheat Sheet

Gaussian Conjugate Prior Cheat Sheet Tom SF Haines 1 Purpose This document contains notes on how to handle the multivariate Gaussian 1 in a Bayesian setting. It focuses on the conjugate prior, its Bayesian

### CA200 Quantitative Analysis for Business Decisions. File name: CA200_Section_04A_StatisticsIntroduction

CA200 Quantitative Analysis for Business Decisions File name: CA200_Section_04A_StatisticsIntroduction Table of Contents 4. Introduction to Statistics... 1 4.1 Overview... 3 4.2 Discrete or continuous

### ON SOME ANALOGUE OF THE GENERALIZED ALLOCATION SCHEME

ON SOME ANALOGUE OF THE GENERALIZED ALLOCATION SCHEME Alexey Chuprunov Kazan State University, Russia István Fazekas University of Debrecen, Hungary 2012 Kolchin s generalized allocation scheme A law of

### A SURVEY ON CONTINUOUS ELLIPTICAL VECTOR DISTRIBUTIONS

A SURVEY ON CONTINUOUS ELLIPTICAL VECTOR DISTRIBUTIONS Eusebio GÓMEZ, Miguel A. GÓMEZ-VILLEGAS and J. Miguel MARÍN Abstract In this paper it is taken up a revision and characterization of the class of

### Chapter 4 Expected Values

Chapter 4 Expected Values 4. The Expected Value of a Random Variables Definition. Let X be a random variable having a pdf f(x). Also, suppose the the following conditions are satisfied: x f(x) converges

### INSURANCE RISK THEORY (Problems)

INSURANCE RISK THEORY (Problems) 1 Counting random variables 1. (Lack of memory property) Let X be a geometric distributed random variable with parameter p (, 1), (X Ge (p)). Show that for all n, m =,

### Metric Spaces. Chapter 7. 7.1. Metrics

Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some

### A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA

REVSTAT Statistical Journal Volume 4, Number 2, June 2006, 131 142 A LOGNORMAL MODEL FOR INSURANCE CLAIMS DATA Authors: Daiane Aparecida Zuanetti Departamento de Estatística, Universidade Federal de São

### CHAPTER 2 Estimating Probabilities

CHAPTER 2 Estimating Probabilities Machine Learning Copyright c 2016. Tom M. Mitchell. All rights reserved. *DRAFT OF January 24, 2016* *PLEASE DO NOT DISTRIBUTE WITHOUT AUTHOR S PERMISSION* This is a

### Some Notes on Taylor Polynomials and Taylor Series

Some Notes on Taylor Polynomials and Taylor Series Mark MacLean October 3, 27 UBC s courses MATH /8 and MATH introduce students to the ideas of Taylor polynomials and Taylor series in a fairly limited

### VISUALIZATION OF DENSITY FUNCTIONS WITH GEOGEBRA

VISUALIZATION OF DENSITY FUNCTIONS WITH GEOGEBRA Csilla Csendes University of Miskolc, Hungary Department of Applied Mathematics ICAM 2010 Probability density functions A random variable X has density

### Chapter 5 Polar Coordinates; Vectors 5.1 Polar coordinates 1. Pole and polar axis

Chapter 5 Polar Coordinates; Vectors 5.1 Polar coordinates 1. Pole and polar axis 2. Polar coordinates A point P in a polar coordinate system is represented by an ordered pair of numbers (r, θ). If r >

### Senior Secondary Australian Curriculum

Senior Secondary Australian Curriculum Mathematical Methods Glossary Unit 1 Functions and graphs Asymptote A line is an asymptote to a curve if the distance between the line and the curve approaches zero

### Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

### Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page

Errata for ASM Exam C/4 Study Manual (Sixteenth Edition) Sorted by Page 1 Errata and updates for ASM Exam C/Exam 4 Manual (Sixteenth Edition) sorted by page Practice exam 1:9, 1:22, 1:29, 9:5, and 10:8

### Bayesian Analysis for the Social Sciences

Bayesian Analysis for the Social Sciences Simon Jackman Stanford University http://jackman.stanford.edu/bass November 9, 2012 Simon Jackman (Stanford) Bayesian Analysis for the Social Sciences November

### Probability and Statistics

CHAPTER 2: RANDOM VARIABLES AND ASSOCIATED FUNCTIONS 2b - 0 Probability and Statistics Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be

### Hypothesis Testing for Beginners

Hypothesis Testing for Beginners Michele Piffer LSE August, 2011 Michele Piffer (LSE) Hypothesis Testing for Beginners August, 2011 1 / 53 One year ago a friend asked me to put down some easy-to-read notes

### Chapter 4. Probability and Probability Distributions

Chapter 4. robability and robability Distributions Importance of Knowing robability To know whether a sample is not identical to the population from which it was selected, it is necessary to assess the

### Probability Theory. Elementary rules of probability Sum rule. Product rule. p. 23

Probability Theory Uncertainty is key concept in machine learning. Probability provides consistent framework for the quantification and manipulation of uncertainty. Probability of an event is the fraction

### Math Placement Test Practice Problems

Math Placement Test Practice Problems The following problems cover material that is used on the math placement test to place students into Math 1111 College Algebra, Math 1113 Precalculus, and Math 2211

### The Normal distribution

The Normal distribution The normal probability distribution is the most common model for relative frequencies of a quantitative variable. Bell-shaped and described by the function f(y) = 1 2σ π e{ 1 2σ

### Efficiency and the Cramér-Rao Inequality

Chapter Efficiency and the Cramér-Rao Inequality Clearly we would like an unbiased estimator ˆφ (X of φ (θ to produce, in the long run, estimates which are fairly concentrated i.e. have high precision.

### L10: Probability, statistics, and estimation theory

L10: Probability, statistics, and estimation theory Review of probability theory Bayes theorem Statistics and the Normal distribution Least Squares Error estimation Maximum Likelihood estimation Bayesian

### We call this set an n-dimensional parallelogram (with one vertex 0). We also refer to the vectors x 1,..., x n as the edges of P.

Volumes of parallelograms 1 Chapter 8 Volumes of parallelograms In the present short chapter we are going to discuss the elementary geometrical objects which we call parallelograms. These are going to

### Web-based Supplementary Materials for Bayesian Effect Estimation. Accounting for Adjustment Uncertainty by Chi Wang, Giovanni

1 Web-based Supplementary Materials for Bayesian Effect Estimation Accounting for Adjustment Uncertainty by Chi Wang, Giovanni Parmigiani, and Francesca Dominici In Web Appendix A, we provide detailed

### Standard Deviation Calculator

CSS.com Chapter 35 Standard Deviation Calculator Introduction The is a tool to calculate the standard deviation from the data, the standard error, the range, percentiles, the COV, confidence limits, or

### 5.1 Identifying the Target Parameter

University of California, Davis Department of Statistics Summer Session II Statistics 13 August 20, 2012 Date of latest update: August 20 Lecture 5: Estimation with Confidence intervals 5.1 Identifying

### 1 Review of Newton Polynomials

cs: introduction to numerical analysis 0/0/0 Lecture 8: Polynomial Interpolation: Using Newton Polynomials and Error Analysis Instructor: Professor Amos Ron Scribes: Giordano Fusco, Mark Cowlishaw, Nathanael

### Lecture 3: Linear methods for classification

Lecture 3: Linear methods for classification Rafael A. Irizarry and Hector Corrada Bravo February, 2010 Today we describe four specific algorithms useful for classification problems: linear regression,

### , for x = 0, 1, 2, 3,... (4.1) (1 + 1/n) n = 2.71828... b x /x! = e b, x=0

Chapter 4 The Poisson Distribution 4.1 The Fish Distribution? The Poisson distribution is named after Simeon-Denis Poisson (1781 1840). In addition, poisson is French for fish. In this chapter we will

### Stat260: Bayesian Modeling and Inference Lecture Date: February 1, Lecture 3

Stat26: Bayesian Modeling and Inference Lecture Date: February 1, 21 Lecture 3 Lecturer: Michael I. Jordan Scribe: Joshua G. Schraiber 1 Decision theory Recall that decision theory provides a quantification

### Continued Fractions and the Euclidean Algorithm

Continued Fractions and the Euclidean Algorithm Lecture notes prepared for MATH 326, Spring 997 Department of Mathematics and Statistics University at Albany William F Hammond Table of Contents Introduction

### Least Squares Estimation

Least Squares Estimation SARA A VAN DE GEER Volume 2, pp 1041 1045 in Encyclopedia of Statistics in Behavioral Science ISBN-13: 978-0-470-86080-9 ISBN-10: 0-470-86080-4 Editors Brian S Everitt & David

### STATISTICS FOR PSYCH MATH REVIEW GUIDE

STATISTICS FOR PSYCH MATH REVIEW GUIDE ORDER OF OPERATIONS Although remembering the order of operations as BEDMAS may seem simple, it is definitely worth reviewing in a new context such as statistics formulae.

### Monte Carlo-based statistical methods (MASM11/FMS091)

Monte Carlo-based statistical methods (MASM11/FMS091) Jimmy Olsson Centre for Mathematical Sciences Lund University, Sweden Lecture 5 Sequential Monte Carlo methods I February 5, 2013 J. Olsson Monte Carlo-based

### I. Pointwise convergence

MATH 40 - NOTES Sequences of functions Pointwise and Uniform Convergence Fall 2005 Previously, we have studied sequences of real numbers. Now we discuss the topic of sequences of real valued functions.

### Lecture 6: Discrete & Continuous Probability and Random Variables

Lecture 6: Discrete & Continuous Probability and Random Variables D. Alex Hughes Math Camp September 17, 2015 D. Alex Hughes (Math Camp) Lecture 6: Discrete & Continuous Probability and Random September

### Chapter 9: Hypothesis Testing Sections

Chapter 9: Hypothesis Testing Sections - we are still here Skip: 9.2 Testing Simple Hypotheses Skip: 9.3 Uniformly Most Powerful Tests Skip: 9.4 Two-Sided Alternatives 9.5 The t Test 9.6 Comparing the

### Sampling and Hypothesis Testing

Population and sample Sampling and Hypothesis Testing Allin Cottrell Population : an entire set of objects or units of observation of one sort or another. Sample : subset of a population. Parameter versus

### Anomaly detection for Big Data, networks and cyber-security

Anomaly detection for Big Data, networks and cyber-security Patrick Rubin-Delanchy University of Bristol & Heilbronn Institute for Mathematical Research Joint work with Nick Heard (Imperial College London),

### 6.4 Normal Distribution

Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under

### Data Modeling & Analysis Techniques. Probability & Statistics. Manfred Huber 2011 1

Data Modeling & Analysis Techniques Probability & Statistics Manfred Huber 2011 1 Probability and Statistics Probability and statistics are often used interchangeably but are different, related fields

### 5. Convergence of sequences of random variables

5. Convergence of sequences of random variables Throughout this chapter we assume that {X, X 2,...} is a sequence of r.v. and X is a r.v., and all of them are defined on the same probability space (Ω,

### 1.1 Introduction, and Review of Probability Theory... 3. 1.1.1 Random Variable, Range, Types of Random Variables... 3. 1.1.2 CDF, PDF, Quantiles...

MATH4427 Notebook 1 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 1 MATH4427 Notebook 1 3 1.1 Introduction, and Review of Probability

### Overview of Monte Carlo Simulation, Probability Review and Introduction to Matlab

Monte Carlo Simulation: IEOR E4703 Fall 2004 c 2004 by Martin Haugh Overview of Monte Carlo Simulation, Probability Review and Introduction to Matlab 1 Overview of Monte Carlo Simulation 1.1 Why use simulation?