MATH4427 Notebook 2 Spring 2016

prepared by Professor Jenny Baglivo

© Copyright by Jenny A. Baglivo. All Rights Reserved.

Contents

2 MATH4427 Notebook 2
  2.1 Definitions and Examples
  2.2 Performance Measures for Estimators
    2.2.1 Measuring Accuracy: Bias
    2.2.2 Measuring Precision: Mean Squared Error (MSE)
    2.2.3 Comparing Estimators: Efficiency
    2.2.4 Another Performance Measure: Consistency
  2.3 Interval Estimation
    2.3.1 Error Probability, Confidence Coefficient, Confidence Interval
    2.3.2 Confidence Interval Procedures for Normal Distributions
  2.4 Method of Moments (MOM) Estimation
    2.4.1 Kth Moments and Kth Sample Moments
    2.4.2 MOM Estimation Method
    2.4.3 MOM Estimation Method for Multiple Parameters
  2.5 Maximum Likelihood (ML) Estimation
    2.5.1 Likelihood and Log-Likelihood Functions
    2.5.2 ML Estimation Method
    2.5.3 Cramér-Rao Lower Bound
    2.5.4 Efficient Estimators; Minimum Variance Unbiased Estimators
    2.5.5 Large Sample Theory: Fisher's Theorem
    2.5.6 Approximate Confidence Interval Procedures; Special Cases
    2.5.7 Multinomial Experiments
    2.5.8 ML Estimation Method for Multiple Parameters


2 MATH4427 Notebook 2

There are two core areas of mathematical statistics: estimation theory and hypothesis testing theory. This notebook is concerned with the first core area, estimation theory. The notes include material from Chapter 8 of the Rice textbook.

2.1 Definitions and Examples

1. Random Sample: A random sample of size n from the X distribution is a list, X_1, X_2, ..., X_n, of n mutually independent random variables, each with the same distribution as X.

2. Statistic: A statistic is a function of a random sample (or random samples).

3. Sampling Distribution: The probability distribution of a statistic is known as its sampling distribution. Sampling distributions are often difficult to find.

4. Point Estimator: A point estimator (or estimator) is a statistic used to estimate an unknown parameter of a probability distribution.

5. Estimate: An estimate is the value of an estimator for a given set of data.

6. Test Statistic: A test statistic is a statistic used to test an assertion.

Example: Normal distribution. Let X_1, X_2, ..., X_n be a random sample from a normal distribution with mean µ and standard deviation σ.

1. Sample Mean: The sample mean,

   X̄ = (1/n) Σ_{i=1}^n X_i,

   is an estimator of µ. This statistic has a normal distribution with mean µ and standard deviation σ/√n.

2. Sample Variance: The sample variance,

   S² = (1/(n−1)) Σ_{i=1}^n (X_i − X̄)²,

   is an estimator of σ². The distribution of this statistic is related to the chi-square distribution. Specifically, the statistic V = (n−1)S²/σ² has a chi-square distribution with (n−1) df.

If the numbers 90.8, 98.0, 113.0, 134.7, 80.5, 97.6, 117.6 are observed, for example, then

1. x̄ = is an estimate of µ, and

2. s² = is an estimate of σ² based on these observations.
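The two estimates above can be computed in a few lines of Python; this is a quick check using only the standard library, applied to the seven observations listed above:

```python
import statistics

data = [90.8, 98.0, 113.0, 134.7, 80.5, 97.6, 117.6]

xbar = statistics.mean(data)      # estimate of mu
s2 = statistics.variance(data)    # sample variance, divisor n - 1
print(round(xbar, 2), round(s2, 2))  # 104.6 334.9
```

Note that `statistics.variance` uses the (n − 1) divisor, matching the definition of S² above.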

X̄ and S² can also be used to test assertions about the parameters of a normal distribution. For example, X̄ can be used to test the assertion that "The mean of the distribution is 100."

Example: Multinomial experiments. Consider a multinomial experiment with k outcomes whose probabilities are p_i, for i = 1, 2, ..., k. The results of n independent trials of a multinomial experiment are usually presented to us in summary form: Let X_i be the number of occurrences of the i-th outcome in n independent trials of the experiment, for i = 1, 2, ..., k. Then

1. Sample Proportions: The i-th sample proportion,

   p̂_i = X_i / n   (read "p_i-hat"),

   is an estimator of p_i, for i = 1, 2, ..., k. The distribution of this statistic is related to the binomial distribution. Specifically, X_i = n p̂_i has a binomial distribution with parameters n and p_i.

2. Pearson's Statistic: Pearson's statistic,

   X² = Σ_{i=1}^k (X_i − n p_i)² / (n p_i),

   can be used to test the assertion that the true probability list is (p_1, p_2, ..., p_k). The exact probability distribution of X² is hard to calculate. But, if n is large, the distribution of Pearson's statistic is approximately chi-square with (k − 1) degrees of freedom.

If n = 40 and (x_1, x_2, x_3, x_4, x_5, x_6) = (5, 7, 8, 5, 11, 4) is observed, for example, then

   p̂_1 = 0.125, p̂_2 = 0.175, p̂_3 = 0.20, p̂_4 = 0.125, p̂_5 = 0.275, p̂_6 = 0.10

are point estimates of the model proportions based on the observed list of counts. (It is common practice to use the same notation for estimators and estimates of proportions.)

If, in addition, we were interested in testing the assertion that all outcomes are equally likely (that is, the assertion that p_i = 1/6 for each i), then

   x²_obs = Σ_{i=1}^6 (x_i − 40/6)² / (40/6) = 5.0

is the value of the test statistic for the observed list of counts.
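A short script reproduces both the sample proportions and the Pearson statistic for the observed counts above:

```python
counts = [5, 7, 8, 5, 11, 4]      # observed counts from n = 40 trials
n = sum(counts)
expected = [n / 6] * 6            # equally likely outcomes: n * (1/6) each

p_hat = [c / n for c in counts]   # sample proportions
x2 = sum((o - e) ** 2 / e for o, e in zip(counts, expected))
print(p_hat)  # [0.125, 0.175, 0.2, 0.125, 0.275, 0.1]
print(x2)     # 5.0 up to float rounding; approximately chi-square with 5 df under the null
```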

2.2 Performance Measures for Estimators

Let X_1, X_2, ..., X_n be a random sample from a distribution with parameter θ. The notation

   θ̂ = θ̂(X_1, X_2, ..., X_n)   (read "theta-hat")

is used to denote an estimator of θ. Estimators should be both accurate (measured using the center of the sampling distribution) and precise (measured using both the center and the spread of the sampling distribution).

2.2.1 Measuring Accuracy: Bias

1. Bias: The bias of the estimator θ̂ is defined as follows:

   BIAS(θ̂) = E(θ̂) − θ.

   An accurate estimator will have little or no bias.

2. Unbiased/Biased Estimators: If E(θ̂) = θ, then θ̂ is said to be an unbiased estimator of the parameter θ; otherwise, θ̂ is said to be a biased estimator of θ.

3. Asymptotically Unbiased Estimator: If θ̂ is a biased estimator satisfying

   lim_{n→∞} E(θ̂) = θ,

   where n is the sample size, then θ̂ is said to be an asymptotically unbiased estimator of θ.

Example: Normal distribution. Let X̄ be the sample mean, S² be the sample variance, and S be the sample standard deviation of a random sample of size n from a normal distribution with mean µ and standard deviation σ.

1. Sample Mean: X̄ is an unbiased estimator of µ since E(X̄) = µ.

2. Sample Variance: S² is an unbiased estimator of σ² since E(S²) = σ².

3. Sample Standard Deviation: S is a biased estimator of σ, with expectation

   E(S) = √(2/(n−1)) · (Γ(n/2) / Γ((n−1)/2)) · σ ≠ σ.

   Since E(S) → σ as n → ∞, S is an asymptotically unbiased estimator of σ.
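The bias factor E(S)/σ can be evaluated directly from the Gamma-function formula above; the quick numerical check below shows the factor climbing toward 1 as n grows, which is the asymptotic unbiasedness of S:

```python
import math

def bias_factor(n):
    # E(S)/sigma = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2) for a normal sample
    return math.sqrt(2 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)

for n in (5, 20, 100):
    print(n, bias_factor(n))   # about 0.940, 0.987, 0.997: the bias fades as n grows
```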

Exercise. Another commonly used estimator of σ² is

   σ̂² = (1/n) Σ_{i=1}^n (X_i − X̄)² = ((n−1)/n) S².

Demonstrate that σ̂² is a biased estimator of σ², but that it is asymptotically unbiased.

Exercise. Let X̄ be the sample mean of a random sample of size n from a uniform distribution on the interval [0, b], and let b̂ = 2X̄ be an estimator of the upper endpoint b.

(a) Demonstrate that b̂ is an unbiased estimator of b, and find the variance of the estimator.

(b) Use b̂ to estimate b using the following observations: 3.64, 3.76, 2.53, 7.06,

2.2.2 Measuring Precision: Mean Squared Error (MSE)

The mean squared error (MSE) of an estimator is the expected value of the square of the difference between the estimator and the true parameter:

   MSE(θ̂) = E((θ̂ − θ)²).

A precise estimator will have a small mean squared error.

Mean Squared Error Theorem. Let X_1, X_2, ..., X_n be a random sample from a distribution with parameter θ, and let θ̂ be an estimator of θ. Then

1. MSE(θ̂) = Var(θ̂) + (BIAS(θ̂))².

2. If θ̂ is an unbiased estimator of θ, then MSE(θ̂) = Var(θ̂).

Exercise. Use the definitions of mean squared error and variance, and properties of expectation, to prove the theorem above.
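The theorem can be illustrated by simulation. The sketch below (the biased variance estimator σ̂² = (1/n) Σ(X_i − X̄)² applied to normal samples, with true variance 4 chosen as an assumption for illustration) uses the fact that, for any set of simulated draws, the empirical MSE about the true value equals the empirical variance plus the squared empirical bias:

```python
import random, statistics

random.seed(1)
sigma2, n, reps = 4.0, 10, 2000   # assumed true variance; sample size; replications

# sampling distribution of the biased estimator sigma2_hat = (1/n) * sum (Xi - Xbar)^2
draws = []
for _ in range(reps):
    x = [random.gauss(0, 2) for _ in range(n)]
    xbar = statistics.fmean(x)
    draws.append(sum((xi - xbar) ** 2 for xi in x) / n)

bias = statistics.fmean(draws) - sigma2
var = statistics.pvariance(draws)
mse = statistics.fmean((d - sigma2) ** 2 for d in draws)
print(bias)                       # near -sigma2/n = -0.4: the estimator is biased low
print(mse - (var + bias ** 2))    # essentially zero: MSE = Var + Bias^2
```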

2.2.3 Comparing Estimators: Efficiency

1. Unbiased Estimators: Let θ̂_1 and θ̂_2 be unbiased estimators of θ, each based on a random sample of size n from the X distribution. θ̂_1 is said to be more efficient than θ̂_2 if

   Var(θ̂_1) < Var(θ̂_2).

2. General Estimators: More generally, let θ̂_1 and θ̂_2 be two estimators of θ, each based on a random sample of size n from the X distribution. θ̂_1 is said to be more efficient than θ̂_2 if

   MSE(θ̂_1) < MSE(θ̂_2).

Given two unbiased estimators, we would prefer to use the one with the smaller variance (as illustrated in the top plot); otherwise, we would prefer to use the one whose average squared deviation from the true parameter is smaller (as illustrated in the bottom plot).

Exercise. Let X_1, X_2, X_3, X_4 be a random sample of size 4 from a distribution with mean µ and standard deviation σ. The following two statistics are unbiased estimators of µ:

   µ̂_1 = (1/2) X_1 + ... + X_4   and   µ̂_2 = (1/4) X_1 + (1/4) X_2 + (1/4) X_3 + (1/4) X_4.

Which is the more efficient estimator (or are they equally efficient)?

A natural question to ask is whether a most efficient estimator exists. If we consider unbiased estimators only, we have the following definition.

MVUE: The unbiased estimator θ̂ = θ̂(X_1, X_2, ..., X_n) is said to be a minimum variance unbiased estimator (MVUE) of θ if

   Var(θ̂) ≤ Var(θ̂*)

for all unbiased estimators θ̂* = θ̂*(X_1, X_2, ..., X_n). We will consider the question of finding an MVUE in Section 2.5.4 (page 29) of these notes.

2.2.4 Another Performance Measure: Consistency

The estimator θ̂ = θ̂(X_1, X_2, ..., X_n) is said to be a consistent estimator of θ if, for every positive number ε,

   lim_{n→∞} P(|θ̂ − θ| ≥ ε) = 0,

where n is the sample size. (A consistent estimator is unlikely to be far from the true parameter when n is large enough.)

Note that consistent estimators are not necessarily unbiased, but they are generally asymptotically unbiased. If an estimator is unbiased, however, a quick check for consistency is given in the following theorem.

Consistency Theorem. If θ̂ is an unbiased estimator of θ satisfying

   lim_{n→∞} Var(θ̂) = 0,

where n is the sample size, then θ̂ is a consistent estimator of θ.

The consistency theorem can be proven using Chebyshev's inequality from probability theory:

Chebyshev's Inequality. Let W be a random variable with finite mean µ_w and finite standard deviation σ_w, and let k be a positive constant. Then

   P(|W − µ_w| ≥ k σ_w) ≤ 1/k².

Equivalently, the conditions imply that

   P(|W − µ_w| < k σ_w) ≥ 1 − 1/k²

for a given positive constant k. Chebyshev's inequality says that the probability that W is k or more standard deviations from its mean is at most 1/k², and the probability that W is less than k standard deviations from its mean is at least the complementary probability, 1 − 1/k².
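As a sanity check, the Chebyshev bound can be compared with the exact two-sided tail probability of a normal random variable (computed via the error function); the bound holds for any distribution with finite variance, so the normal tail should fall below it:

```python
import math

def normal_two_sided_tail(k):
    # P(|W - mu| >= k*sigma) for a normal W, using Phi(x) = 0.5*(1 + erf(x/sqrt(2)))
    return 2 * (1 - 0.5 * (1 + math.erf(k / math.sqrt(2))))

for k in (1.5, 2, 3):
    # exact normal tail vs. the Chebyshev upper bound 1/k^2
    print(k, normal_two_sided_tail(k), "<=", 1 / k ** 2)
```

For k = 2, the normal tail is about 0.046, well under the Chebyshev bound of 0.25, illustrating how conservative the inequality is.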

Exercise. Use Chebyshev's inequality to prove the consistency theorem.

Exercise. Let X be the number of successes in n independent trials of a Bernoulli experiment with success probability p, and let p̂ = X/n be the sample proportion. Demonstrate that p̂ is a consistent estimator of p.

Exercise. Let X̄ be the sample mean and S² be the sample variance of a random sample of size n from a normal distribution with mean µ and standard deviation σ.

(a) Demonstrate that X̄ is a consistent estimator of µ.

(b) Demonstrate that S² is a consistent estimator of σ².

2.3 Interval Estimation

Let X_1, X_2, ..., X_n be a random sample from a distribution with parameter θ. The goal in interval estimation is to find two statistics

   L = L(X_1, X_2, ..., X_n) and U = U(X_1, X_2, ..., X_n)

with the property that θ lies in the interval [L, U] with high probability.

2.3.1 Error Probability, Confidence Coefficient, Confidence Interval

1. Error Probability: It is customary to let α (called the error probability) equal the probability that θ is not in the interval, and to find statistics L and U satisfying

   (a) P(θ < L) = P(θ > U) = α/2, and
   (b) P(L ≤ θ ≤ U) = 1 − α.

2. Confidence Coefficient: The probability (1 − α) is called the confidence coefficient.

3. Confidence Interval: The interval [L, U] is called a 100(1 − α)% confidence interval for θ.

A 100(1 − α)% confidence interval is a random quantity. In applications, we substitute sample values for the lower and upper endpoints and report the interval [l, u]. The reported interval is not guaranteed to contain the true parameter, but 100(1 − α)% of reported intervals will contain the true parameter.

Example: Normal distribution with known σ. Let X be a normal random variable with mean µ and known standard deviation σ, and let X̄ be the sample mean of a random sample of size n from the X distribution. The following steps can be used to find expressions for the lower limit L and the upper limit U of a 100(1 − α)% confidence interval for the mean µ:

1. First note that Z = (X̄ − µ)/√(σ²/n) is a standard normal random variable.

2. For convenience, let p = α/2. Further, let z_p and z_{1−p} be the p-th and (1−p)-th quantiles of the standard normal distribution. Then

   P(z_p ≤ Z ≤ z_{1−p}) = 1 − 2p.

3. The following statements are equivalent:

   z_p ≤ Z ≤ z_{1−p}
   z_p ≤ (X̄ − µ)/√(σ²/n) ≤ z_{1−p}
   z_p √(σ²/n) ≤ X̄ − µ ≤ z_{1−p} √(σ²/n)
   X̄ − z_{1−p} √(σ²/n) ≤ µ ≤ X̄ − z_p √(σ²/n)

   Thus,

   P(X̄ − z_{1−p} √(σ²/n) ≤ µ ≤ X̄ − z_p √(σ²/n)) = 1 − 2p.

4. Since z_p = −z_{1−p} and p = α/2, the expressions we want are

   L = X̄ − z_{1−α/2} √(σ²/n) and U = X̄ − z_{α/2} √(σ²/n) = X̄ + z_{1−α/2} √(σ²/n).

Example. I used the computer to generate 50 random samples of size 16 from the normal distribution with mean 0 and standard deviation 10. For each sample, I computed a 90% confidence interval for µ using the expressions for L and U from above. The following plot shows the observed intervals as vertical line segments: 46 intervals contain µ = 0 (solid lines), while 4 do not (dashed lines).

[Question: If you were to repeat the computer experiment above many times, how many intervals would you expect would contain 0? Why?]
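The computer experiment above can be replicated in a few lines; this is a sketch in which the seed is arbitrary and 1.6449 is used as an approximation to the quantile z_{0.95} needed for a 90% interval:

```python
import random, math

random.seed(2016)
mu, sigma, n, z = 0.0, 10.0, 16, 1.6449   # z is approximately z_{0.95}

covered = 0
for _ in range(50):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)       # half-width z * sigma / sqrt(n)
    if xbar - half <= mu <= xbar + half:
        covered += 1
print(covered, "of 50 intervals contain mu = 0")   # about 45 expected on average
```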

2.3.2 Confidence Interval Procedures for Normal Distributions

The following tables give confidence interval (CI) procedures for the parameters of the normal distribution. In each case, X̄ is the sample mean and S² is the sample variance of a random sample of size n from a normal distribution with mean µ and standard deviation σ.

1. 100(1 − α)% CI for µ, when σ is known:

   X̄ ± z(α/2) √(σ²/n),

   where z(α/2) is the 100(1 − α/2)% point of the standard normal distribution.

2. 100(1 − α)% CI for µ, when σ is estimated:

   X̄ ± t_{n−1}(α/2) √(S²/n),

   where t_{n−1}(α/2) is the 100(1 − α/2)% point of the Student t distribution with (n − 1) df.

3. 100(1 − α)% CI for σ², when µ is known:

   [ Σ_{i=1}^n (X_i − µ)² / χ²_n(α/2) , Σ_{i=1}^n (X_i − µ)² / χ²_n(1 − α/2) ],

   where χ²_n(p) is the 100(1 − p)% point of the chi-square distribution with n df.

4. 100(1 − α)% CI for σ², when µ is estimated:

   [ Σ_{i=1}^n (X_i − X̄)² / χ²_{n−1}(α/2) , Σ_{i=1}^n (X_i − X̄)² / χ²_{n−1}(1 − α/2) ],

   where χ²_{n−1}(p) is the 100(1 − p)% point of the chi-square distribution with (n − 1) df.

5. 100(1 − α)% CI for σ: If [L, U] is a 100(1 − α)% CI for σ², then [√L, √U] is a 100(1 − α)% CI for σ.

Notes:

1. Upper Tail Notation: It is common practice to use upper tail notation for the quantiles used in confidence interval procedures. Thus, we use x(p) to denote the quantile x_{1−p}.

2. (Estimated) Standard Error of the Mean: The standard deviation of X̄,

   SD(X̄) = √(σ²/n) = σ/√n,

   is often called the standard error of the mean, and the approximation √(S²/n) = S/√n is often called the estimated standard error of the mean.

3. Equivalent Forms: The equivalent form (n − 1)S² = Σ_{i=1}^n (X_i − X̄)² might be useful.
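The first procedure (known σ) translates directly into code. The sketch below uses `statistics.NormalDist` for the normal quantile; the five data values and the choice σ = 2.5 are hypothetical, chosen only to illustrate the computation:

```python
from statistics import NormalDist, fmean
import math

def mean_ci_known_sigma(data, sigma, alpha):
    # 100(1-alpha)% CI for mu when sigma is known: xbar +/- z(alpha/2) * sigma/sqrt(n)
    n = len(data)
    z = NormalDist().inv_cdf(1 - alpha / 2)   # upper alpha/2 point
    half = z * sigma / math.sqrt(n)
    xbar = fmean(data)
    return xbar - half, xbar + half

# hypothetical data for illustration
lo, hi = mean_ci_known_sigma([98.2, 101.5, 96.7, 103.1, 99.4], sigma=2.5, alpha=0.10)
print(round(lo, 2), round(hi, 2))   # a 90% interval centered at the sample mean 99.78
```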

Exercise. Demonstrate that the confidence interval procedure for estimating σ² when µ is known is correct.

Exercise. Assume that the following numbers are the values of a random sample from a normal distribution with known standard deviation 10:

Sample summaries: n = 15, x̄ =

Construct 80% and 90% confidence intervals for µ.

Exercise. Assume that the following numbers are the values of a random sample from a normal distribution:

Sample summaries: n = 15, x̄ = , s² =

Construct 90% confidence intervals for σ² and σ.

Exercise. As part of a study on body temperatures of healthy adult men and women, 30 temperatures (in degrees Fahrenheit) were recorded:

Sample summaries: n = 30, x̄ = , s =

Assume these data are the values of a random sample from a normal distribution. Construct and interpret 95% confidence intervals for µ and σ.

2.4 Method of Moments (MOM) Estimation

The method of moments is a general method for estimating one or more unknown parameters, introduced by Karl Pearson in the 1880s. In general, MOM estimators are consistent, but they are not necessarily unbiased.

2.4.1 Kth Moments and Kth Sample Moments

1. The k-th moment of the X distribution is

   µ_k = E(X^k), for k = 1, 2, ...,

   whenever the expected value converges.

2. The k-th sample moment is the statistic

   µ̂_k = (1/n) Σ_{i=1}^n X_i^k, for k = 1, 2, ...,

   where X_1, X_2, ..., X_n is a random sample of size n from the X distribution.

[Question: The k-th sample moment is an unbiased and consistent estimator of µ_k, when µ_k exists. Can you see why?]

2.4.2 MOM Estimation Method

Let X_1, X_2, ..., X_n be a random sample from a distribution with parameter θ, and suppose that µ_k = E(X^k) is a function of θ for some k. Then a method of moments estimator of θ is obtained using the following procedure:

   Solve µ̂_k = µ_k for the parameter θ.

For example, let X_1, X_2, ..., X_n be a random sample of size n from the uniform distribution on the interval [0, b]. Since E(X) = b/2, a method of moments estimator of the upper endpoint b can be obtained as follows:

   µ̂_1 = µ_1 = E(X)  ⟹  X̄ = b/2  ⟹  b = 2X̄.

Thus, b̂ = 2X̄ is a MOM estimator of b.
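A quick simulation shows the MOM estimator b̂ = 2X̄ landing near the truth; the true endpoint b = 10 and the seed are assumptions chosen for illustration:

```python
import random

random.seed(7)
b_true = 10.0   # assumed true upper endpoint for this illustration
sample = [random.uniform(0, b_true) for _ in range(1000)]

b_mom = 2 * sum(sample) / len(sample)   # MOM estimator: b_hat = 2 * Xbar
print(b_mom)   # close to 10
```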

Exercise. Let X_1, X_2, ..., X_n be a random sample from a uniform distribution on the interval [−b, b], where b > 0 is unknown.

(a) Find a method of moments estimator of b.

(b) Suppose that the numbers 6.2, 4.9, 0.4, 5.8, 6.6, 6.9 are observed. Use the formula derived in part (a) to estimate the upper endpoint b.

Exercise (slope-parameter model). Let X be a continuous random variable with range (−1, 1) and PDF as follows:

   f(x) = (1/2)(1 + αx) when x ∈ (−1, 1), and 0 otherwise,

where α ∈ (−1, 1) is an unknown parameter. Find a method of moments estimator of α based on a random sample of size n from the X distribution.

2.4.3 MOM Estimation Method for Multiple Parameters

The method of moments procedure can be generalized to any number of unknown parameters. For example, if the X distribution has two unknown parameters (say θ_1 and θ_2), then MOM estimators are obtained using the procedure:

   Solve µ̂_{k1} = µ_{k1} and µ̂_{k2} = µ_{k2} simultaneously for θ_1 and θ_2.

Example: Normal distribution. Let X_1, X_2, ..., X_n be a random sample from a normal distribution with mean µ and variance σ², where both parameters are unknown. To find method of moments estimators of µ and σ², we need to solve two equations in the two unknowns. Since

   E(X) = µ and E(X²) = Var(X) + (E(X))² = σ² + µ²,

we can use the equations µ̂_1 = µ_1 and µ̂_2 = µ_2 to find the MOM estimators. Now (complete the derivation),

Example: Gamma distribution. Let X_1, X_2, ..., X_n be a random sample from a gamma distribution with parameters α and β, where both parameters are unknown. To find method of moments estimators of α and β, we need to solve two equations in the two unknowns. Since

   E(X) = αβ and E(X²) = Var(X) + (E(X))² = αβ² + (αβ)²,

we can use the equations µ̂_1 = µ_1 and µ̂_2 = µ_2 to find the MOM estimators. Now (complete the derivation),

2.5 Maximum Likelihood (ML) Estimation

The method of maximum likelihood is a general method for estimating one or more unknown parameters. Maximum likelihood estimation was introduced by R.A. Fisher in the 1920s. In general, ML estimators are consistent, but they are not necessarily unbiased.

2.5.1 Likelihood and Log-Likelihood Functions

Let X_1, X_2, ..., X_n be a random sample from a distribution with a scalar parameter θ.

1. The likelihood function is the joint PDF of the random sample, thought of as a function of the parameter θ, with the random X_i's left unevaluated:

   Lik(θ) = Π_{i=1}^n p(X_i) when X is discrete, and Lik(θ) = Π_{i=1}^n f(X_i) when X is continuous.

2. The log-likelihood function is the natural logarithm of the likelihood function:

   l(θ) = log(Lik(θ)).

Note that it is common practice to use log() to denote the natural logarithm, and that this convention is used in the Mathematica system as well.

For example, if X is an exponential random variable with parameter λ, then

1. The PDF of X is f(x) = λe^{−λx} when x > 0, and f(x) = 0 otherwise,

2. The likelihood function for the sample of size n is

   Lik(λ) = Π_{i=1}^n f(X_i) = Π_{i=1}^n λe^{−λX_i} = λ^n e^{−λ Σ_{i=1}^n X_i}, and

3. The log-likelihood function for the sample of size n is

   l(λ) = log(λ^n e^{−λ Σ_{i=1}^n X_i}) = n log(λ) − λ Σ_{i=1}^n X_i.

2.5.2 ML Estimation Method

The maximum likelihood estimator (or ML estimator) of θ is the value that maximizes either the likelihood function (Lik(θ)) or the log-likelihood function (l(θ)). In general, ML estimators are obtained using methods from calculus.
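For the exponential example, a crude grid search over the log-likelihood l(λ) = n log(λ) − λ Σ X_i recovers the calculus answer λ̂ = n / Σ X_i; the simulated data and the assumed true rate λ = 2 are chosen only for illustration:

```python
import math, random

random.seed(3)
x = [random.expovariate(2.0) for _ in range(200)]   # assumed true rate lambda = 2
n, s = len(x), sum(x)

def loglik(lam):
    # l(lambda) = n*log(lambda) - lambda * sum(Xi)
    return n * math.log(lam) - lam * s

lam_grid = max((k / 1000 for k in range(1, 5001)), key=loglik)   # search lambda in (0, 5]
print(lam_grid, n / s)   # the grid maximizer sits next to the closed form n / sum(Xi)
```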

Example: Bernoulli distribution. Let X_1, X_2, ..., X_n be a random sample from a Bernoulli distribution with success probability p, and let Y = Σ_{i=1}^n X_i be the sample sum. Then the ML estimator of p is the sample proportion p̂ = Y/n. To see this,

1. Since there are Y successes (each occurring with probability p) and n − Y failures (each occurring with probability 1 − p), the likelihood and log-likelihood functions are

   Lik(p) = Π_{i=1}^n p(X_i) = p^Y (1 − p)^{n−Y} and l(p) = Y log(p) + (n − Y) log(1 − p).

2. The derivative of the log-likelihood function is

   l'(p) = Y/p − (n − Y)/(1 − p).

3. There are three cases to consider:

   Y = 0: If no successes are observed, then Lik(p) = (1 − p)^n is decreasing on [0, 1]. The maximum value occurs at the left endpoint. Thus, p̂ = 0.

   Y = n: If no failures are observed, then Lik(p) = p^n is increasing on [0, 1]. The maximum value occurs at the right endpoint. Thus, p̂ = 1.

   0 < Y < n: In the usual case, some successes and some failures are observed. Working with the log-likelihood function, we see that

   l'(p) = 0 ⟺ Y/p = (n − Y)/(1 − p) ⟺ Y(1 − p) = p(n − Y) ⟺ p = Y/n.

   Thus, p̂ = Y/n is the proposed ML estimator. The second derivative test is generally used to check that the proposed estimator is a maximum. In this case, since

   l''(p) = −Y/p² − (n − Y)/(1 − p)² and l''(p̂) < 0,

   the second derivative test implies that p̂ = Y/n is a maximum.

In all cases, the estimator is p̂ = Y/n.

To illustrate maximum likelihood estimation in the Bernoulli case, assume that 2 successes are observed in 6 trials. The graph shows the observed likelihood function,

   y = Lik(p) = p²(1 − p)⁴, for p ∈ (0, 1).

This function is maximized at 1/3. Thus, the ML estimate is 1/3.
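The observed likelihood for 2 successes in 6 trials can also be maximized numerically; a simple grid search lands on the calculus answer p̂ = 1/3:

```python
def lik(p):
    # observed likelihood for 2 successes in 6 trials
    return p ** 2 * (1 - p) ** 4

p_hat = max((k / 10000 for k in range(1, 10000)), key=lik)
print(p_hat)   # about 0.3333, i.e. the calculus answer 1/3
```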

Example: Slope-parameter model. Let X be a continuous random variable with range (−1, 1) and with PDF as follows:

   f(x) = (1/2)(1 + αx) when x ∈ (−1, 1), and 0 otherwise,

where α ∈ (−1, 1) is an unknown parameter. Given a random sample of size n from the X distribution, the likelihood function is

   Lik(α) = Π_{i=1}^n f(X_i) = Π_{i=1}^n (1/2)(1 + αX_i) = (1/2^n) Π_{i=1}^n (1 + αX_i),

the log-likelihood simplifies to

   l(α) = −n log(2) + Σ_{i=1}^n log(1 + αX_i),

and the first derivative of the log-likelihood function is

   l'(α) = .

Since the likelihood and log-likelihood functions are not easy to analyze by hand, we will use the computer to find the ML estimate given specific numbers. For example, assume that the following data are the values of a random sample from the X distribution:

Sample summaries: n = 16, x̄ = .

The graph shows the function

   y = Lik(α) = Π_{i=1}^{16} (1 + αx_i), for α ∈ (−1, 1),

computed using the x_i's from above. This function is maximized at . Thus, the ML estimate of α for these data is α̂ ≈ .

Using the work we did on page 21, a MOM estimate of α for the data above is

Exercise: Poisson distribution. Let X_1, X_2, ..., X_n be a random sample from a Poisson distribution with parameter λ, where λ > 0 is unknown. Let Y = Σ_{i=1}^n X_i be the sample sum, and assume that Y > 0. Demonstrate that the ML estimator of λ is λ̂ = Y/n.

Exercise: Uniform distribution on [0, b]. Let X_1, X_2, ..., X_n be a random sample from a uniform distribution on the interval [0, b], where b > 0 is unknown. Demonstrate that b̂ = max(X_1, X_2, ..., X_n) is the ML estimator of b.

2.5.3 Cramér-Rao Lower Bound

The following theorem, proven by both Cramér and Rao in the 1940s, gives a formula for the lower bound on the variance of an unbiased estimator of θ. The formula is valid under certain smoothness conditions on the X distribution.

Theorem (Cramér-Rao Lower Bound). Let X_1, X_2, ..., X_n be a random sample from a distribution with scalar parameter θ, and let θ̂ be an unbiased estimator of θ. Under smoothness conditions on the X distribution,

   Var(θ̂) ≥ 1/(nI(θ)), where nI(θ) = −E(l''(θ)).

In this formula, l''(θ) is the second derivative of the log-likelihood function, and the expectation is computed using the joint distribution of the X_i's for fixed θ.

Notes:

1. Smoothness Conditions: If the following three conditions hold:

   (a) The PDF of X has continuous second partial derivatives (except, possibly, at a finite number of points),
   (b) The parameter θ is not at the boundary of possible parameter values, and
   (c) The range of X does not depend on θ,

   then X satisfies the smoothness conditions of the theorem. The theorem excludes, for example, the Bernoulli distribution with p = 1 (condition (b) is violated) and the uniform distribution on [0, b] (condition (c) is violated).

2. Fisher Information: The quantity nI(θ) is called the information in a sample of size n.

3. Cramér-Rao Lower Bound: The lower bound for the variance given in the theorem is called the Cramér-Rao lower bound; it is the reciprocal of the information, 1/(nI(θ)).

2.5.4 Efficient Estimators; Minimum Variance Unbiased Estimators

Let X_1, X_2, ..., X_n be a random sample from a distribution with parameter θ, and let θ̂ be an estimator based on this sample. Then θ̂ is said to be an efficient estimator of θ if

   E(θ̂) = θ and Var(θ̂) = 1/(nI(θ)).

Note that if the X distribution satisfies the smoothness conditions listed above, and θ̂ is an efficient estimator, then the Cramér-Rao theorem implies that θ̂ is the minimum variance unbiased estimator (MVUE) of θ.

Example: Bernoulli distribution. Let X_1, X_2, ..., X_n be a random sample from a Bernoulli distribution with success probability p, and let Y = Σ_{i=1}^n X_i be the sample sum. Assume that 0 < Y < n. Then p̂ = Y/n is an efficient estimator of p. To see this,

1. From the work we did starting on page 25, we know that p̂ = Y/n is the ML estimator of the success probability p, and that the second derivative of the log-likelihood function is

   l''(p) = −Y/p² − (n − Y)/(1 − p)².

2. The following computations demonstrate that the information nI(p) = n/(p(1 − p)):

   nI(p) = −E(l''(p))
         = E(Y/p² + (n − Y)/(1 − p)²)
         = (1/p²) E(Y) + (1/(1 − p)²) E(n − Y)
         = (1/p²)(np) + (1/(1 − p)²)(n − np)   since E(Y) = nE(X) = np
         = n/p + n/(1 − p)
         = (n(1 − p) + np)/(p(1 − p))
         = n/(p(1 − p)).

   Thus, the Cramér-Rao lower bound is p(1 − p)/n.

3. The following computations demonstrate that p̂ is an unbiased estimator of p with variance equal to the Cramér-Rao lower bound:

   E(p̂) = E(Y/n) = (1/n) E(Y) = (1/n)(nE(X)) = (1/n)(np) = p

   Var(p̂) = Var(Y/n) = (1/n²) Var(Y) = (1/n²)(n Var(X)) = (1/n²)(np(1 − p)) = p(1 − p)/n.

Thus, p̂ is an efficient estimator of p.

Exercise: Poisson Distribution. Let X_1, X_2, ..., X_n be a random sample from a Poisson distribution with parameter λ, and let Y = Σ_{i=1}^n X_i be the sample sum. Assume that Y > 0. Demonstrate that λ̂ = Y/n is an efficient estimator of λ.

2.5.5 Large Sample Theory: Fisher's Theorem

R.A. Fisher proved a generalization of the Central Limit Theorem for ML estimators.

Fisher's Theorem. Let θ̂_n be the ML estimator of θ based on a random sample of size n from the X distribution, and let

   Z_n = (θ̂_n − θ)/√(1/(nI(θ))),

where nI(θ) is the information. Under smoothness conditions on the X distribution,

   lim_{n→∞} P(Z_n ≤ x) = Φ(x) for every real number x,

where Φ(·) is the CDF of the standard normal random variable.

Notes:

1. Smoothness Conditions: The smoothness conditions in Fisher's theorem are the same ones mentioned earlier.

2. Asymptotic Efficiency: If the X distribution satisfies the smoothness conditions and the sample size is large, then the sampling distribution of the ML estimator is approximately normal with mean θ and variance equal to the Cramér-Rao lower bound. Thus, the ML estimator is said to be asymptotically efficient.

It is instructive to give an outline of a key idea Fisher used to prove the theorem above.

1. Let l(θ) = log(Lik(θ)) be the log-likelihood function, and θ̂ be the ML estimator.

2. Using the second order Taylor polynomial of l(θ) expanded around θ̂, we get

   l(θ) ≈ l(θ̂) + l'(θ̂)(θ − θ̂) + (l''(θ̂)/2)(θ − θ̂)² = l(θ̂) + (l''(θ̂)/2)(θ − θ̂)²

   since l'(θ̂) = 0.

3. Since Lik(θ) = e^{l(θ)} = exp(l(θ)), we get

   Lik(θ) ≈ exp(l(θ̂) + (l''(θ̂)/2)(θ − θ̂)²) = Lik(θ̂) e^{−(θ − θ̂)²/(2(−1/l''(θ̂)))}.

   The rightmost expression has the approximate form of the density function of a normal distribution with mean θ̂ and variance equal to the reciprocal of nI(θ) = −E(l''(θ)).

2.5.6 Approximate Confidence Interval Procedures; Special Cases

Under the conditions of Fisher's theorem, an approximate 100(1 − α)% confidence interval for θ has the following form:

   θ̂ ± z(α/2) √(1/(nI(θ̂))),

where z(α/2) is the 100(1 − α/2)% point of the standard normal distribution.

Exercise: Bernoulli/Binomial Distribution. Let Y = Σ_{i=1}^n X_i be the sample sum of a random sample of size n from a Bernoulli distribution with parameter p, where p ∈ (0, 1).

(a) Assume that the sample size n is large and that 0 < Y < n. Use Fisher's theorem to find the form of an approximate 100(1 − α)% confidence interval for p.

(b) Public health officials in Florida conducted a study to determine the level of resistance to the antibiotic penicillin in individuals diagnosed with a strep infection. They chose a simple random sample of 1714 individuals from this population, and tested cultures taken from these individuals. In 973 cases, the culture showed partial or complete resistance to the antibiotic. Construct and interpret an approximate 99% confidence interval for the proportion p of all individuals who would exhibit partial or complete resistance to penicillin when diagnosed with a strep infection.
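Once part (a) is worked out, the resulting interval for p (which uses the Bernoulli information nI(p) = n/(p(1 − p)) computed earlier) can be applied to the Florida data from part (b); `statistics.NormalDist` supplies the normal quantile:

```python
from statistics import NormalDist
import math

n, y = 1714, 973                    # Florida strep study counts
p_hat = y / n
z = NormalDist().inv_cdf(0.995)     # upper 0.5% point for a 99% interval

# p_hat +/- z * sqrt(1/(n*I(p_hat))), with n*I(p) = n/(p*(1-p))
half = z * math.sqrt(p_hat * (1 - p_hat) / n)
lo, hi = p_hat - half, p_hat + half
print(round(lo, 4), round(hi, 4))   # roughly (0.537, 0.598)
```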

Exercise: Poisson Distribution. Let Y = Σ_{i=1}^n X_i be the sample sum of a random sample of size n from a Poisson distribution with parameter λ, where λ > 0.

(a) Assume that the sample size n is large and that Y > 0. Use Fisher's theorem to find the form of an approximate 100(1 − α)% confidence interval for λ.

(b) Public health officials interested in determining the monthly mean rate of suicides in western Massachusetts gathered information over a 5-year period. The following table gives the frequency distribution of the number of cases reported each month:

   Number of cases, x:
   Number of months:
   Total:

(There were no cases reported in one month, 1 case reported in each of seven months, 2 cases reported in each of nineteen months, and so forth.) Construct and interpret a 95% confidence interval for the population monthly mean rate.

Exercise: Exponential distribution. Let Y = Σ_{i=1}^n X_i be the sample sum of a random sample of size n from an exponential distribution with parameter λ.

(a) Demonstrate that the ML estimator of λ is λ̂ = n/Y.

(b) Demonstrate that nI(λ) = n/λ².

(c) Assume that the sample size n is large. Use Fisher's theorem to find the form of an approximate 100(1 − α)% confidence interval for λ.

(d) A university computer lab has 128 computers, which are used continuously from time of installation until they break. The average time to failure for these 128 computers was years. Assume this information is a summary of a random sample from an exponential distribution. Construct an approximate 90% confidence interval for the exponential λ.

2.5.7 Multinomial Experiments

Consider a multinomial experiment with k outcomes, each of whose probabilities is a function of the scalar parameter θ:

   p_i = p_i(θ), for i = 1, 2, ..., k.

The results of n independent trials of a multinomial experiment are usually presented to us in summary form: Let X_i be the number of occurrences of the i-th outcome in n independent trials of the experiment, for i = 1, 2, ..., k. Since the random k-tuple (X_1, X_2, ..., X_k) has a multinomial distribution, the likelihood function is the joint PDF thought of as a function of θ with the X_i's left unevaluated:

   Lik(θ) = (n choose X_1, X_2, ..., X_k) p_1(θ)^{X_1} p_2(θ)^{X_2} ⋯ p_k(θ)^{X_k}.

We work with this likelihood when finding the ML estimator, the information, and so forth.

Exercise. Consider a multinomial experiment with 3 outcomes, whose probabilities are

   p_1 = p_1(θ) = (1 − θ)², p_2 = p_2(θ) = 2θ(1 − θ), p_3 = p_3(θ) = θ²,

where θ ∈ (0, 1) is unknown. Let X_i be the number of occurrences of the i-th outcome in n independent trials of the multinomial experiment, for i = 1, 2, 3. Assume each X_i > 0.

(a) Set up and simplify the likelihood and log-likelihood functions.

(b) Find the ML estimator, θ̂. Check that your estimator corresponds to a maximum.

(c) Find the information, nI(θ).

(d) Find the form of an approximate 100(1 − α)% confidence interval for θ.

Application: Hardy-Weinberg equilibrium model. In the Hardy-Weinberg equilibrium model there are three outcomes, with probabilities

    p₁ = (1 − θ)²,  p₂ = 2θ(1 − θ),  p₃ = θ².

The model is used in genetics. The usual setup is as follows:

1. Genetics is the science of inheritance. Hereditary characteristics are carried by genes, which occur in a linear order along chromosomes. A gene's position along the chromosome is also called its locus. Except for the sex chromosomes, there are two genes at every locus (one inherited from mom and the other from dad).

2. An allele is an alternative form of a genetic locus.

3. Consider the simple case of a gene not located on one of the sex chromosomes, and with just two alternative forms (i.e., two alleles), say A and a. This gives three genotypes: AA, Aa, and aa.

4. Let θ equal the probability that a gene contains allele a. After many generations, the proportions of the three genotypes in a population are (approximately):

    p₁ = P(AA occurs) = (1 − θ)²
    p₂ = P(Aa occurs) = 2θ(1 − θ)
    p₃ = P(aa occurs) = θ²

Exercise (Source: Rice textbook, Chapter 8). In a sample from the Chinese population in Hong Kong in 1937, blood types occurred with the following frequencies, where M and N are two types of antigens in the blood:

    Blood type:   MM    MN    NN
    Frequency:

Assume these data summarize the values of a random sample from a Hardy-Weinberg equilibrium model, where:

    p₁ = P(MM occurs) = (1 − θ)²
    p₂ = P(MN occurs) = 2θ(1 − θ)
    p₃ = P(NN occurs) = θ²

Find the ML estimate of θ, the estimated proportions of each blood type (MM, MN, NN) in the population, and an approximate 90% confidence interval for θ.
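A computational sketch of the Hardy-Weinberg exercise. One can check that maximizing the multinomial log-likelihood gives θ̂ = (X₂ + 2X₃)/(2n), with information nI(θ) = 2n/(θ(1 − θ)). The observed blood-type frequencies are not reproduced above, so the counts below are hypothetical placeholders.

```python
import math

# HYPOTHETICAL counts of the MM, MN, and NN blood types.
x1, x2, x3 = 340, 500, 190
n = x1 + x2 + x3

# ML estimate: theta_hat = (X2 + 2*X3) / (2n)
theta_hat = (x2 + 2 * x3) / (2 * n)

# Standard error 1/sqrt(nI(theta_hat)) = sqrt(theta_hat(1-theta_hat)/(2n))
se = math.sqrt(theta_hat * (1 - theta_hat) / (2 * n))

z = 1.645  # z-quantile for 90% confidence
print(f"theta_hat = {theta_hat:.4f}, "
      f"90% CI = ({theta_hat - z*se:.4f}, {theta_hat + z*se:.4f})")

# Estimated genotype proportions under the model:
print(f"P(MM) = {(1 - theta_hat)**2:.4f}, "
      f"P(MN) = {2*theta_hat*(1 - theta_hat):.4f}, "
      f"P(NN) = {theta_hat**2:.4f}")
```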

2.5.8 ML Estimation Method for Multiple Parameters

If the X distribution has two or more unknown parameters, then ML estimators are computed using the techniques of multivariable calculus. Consider the two-parameter case under smoothness conditions, and let l(θ₁, θ₂) = log(Lik(θ₁, θ₂)) be the log-likelihood function.

1. To find the ML estimators, we need to solve the following system of equations, involving the first partial derivatives, for the unknown parameters θ₁ and θ₂:

    l₁(θ₁, θ₂) = 0  and  l₂(θ₁, θ₂) = 0.

2. To check that (θ̂₁, θ̂₂) is a maximum using the second derivative test, we need to check that the following two conditions hold:

    d₁ = l₁₁(θ̂₁, θ̂₂) < 0  and  d₂ = l₁₁(θ̂₁, θ̂₂) l₂₂(θ̂₁, θ̂₂) − (l₁₂(θ̂₁, θ̂₂))² > 0,

where the lᵢⱼ() functions are the second partial derivatives.

Exercise: Normal distribution. Let X be a normal random variable with mean μ and variance σ². Given a random sample of size n from the X distribution, the likelihood and log-likelihood functions can be written as follows:

    Lik(μ, σ²) = ∏ᵢ (1/√(2πσ²)) exp(−(Xᵢ − μ)²/(2σ²)) = (2πσ²)^(−n/2) exp(−Σᵢ (Xᵢ − μ)²/(2σ²))

    l(μ, σ²) = −(n/2) log(2π) − (n/2) log(σ²) − (1/(2σ²)) Σᵢ (Xᵢ − μ)²

(a) Find the first and second partial derivatives of l(μ, σ²).

(b) Demonstrate that the ML estimators of μ and σ² are

    μ̂ = X̄  and  σ̂² = (1/n) Σᵢ (Xᵢ − X̄)²,

and check that you have a maximum.
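The closed-form estimators and the second derivative test can be checked numerically. The sketch below uses simulated normal data (the sample itself is hypothetical) and evaluates d₁ and d₂ at the ML point, using the second partials l₁₁ = −n/σ², l₁₂ = −Σᵢ(Xᵢ − μ)/σ⁴, and l₂₂ = n/(2σ⁴) − Σᵢ(Xᵢ − μ)²/σ⁶, treating σ² as the second variable.

```python
import numpy as np

# Simulated sample (hypothetical data, for checking purposes only).
rng = np.random.default_rng(0)
x = rng.normal(loc=100.0, scale=15.0, size=50)
n = len(x)

mu_hat = x.mean()                       # mu_hat = X-bar
s2_hat = ((x - mu_hat) ** 2).mean()     # sigma^2_hat = (1/n) sum (X_i - X-bar)^2

# Second partial derivatives of l(mu, sigma^2), evaluated at the ML point.
l11 = -n / s2_hat
l12 = -np.sum(x - mu_hat) / s2_hat**2   # = 0 at mu_hat, up to rounding
l22 = n / (2 * s2_hat**2) - np.sum((x - mu_hat) ** 2) / s2_hat**3

d1 = l11
d2 = l11 * l22 - l12**2
print(f"mu_hat = {mu_hat:.3f}, s2_hat = {s2_hat:.3f}, d1 = {d1:.4g}, d2 = {d2:.4g}")
# d1 < 0 and d2 > 0 confirm that (mu_hat, s2_hat) is a local maximum.
```

At the ML point, Σᵢ(Xᵢ − X̄)² = n·σ̂² exactly, so l₂₂ simplifies to −n/(2σ̂⁴) and d₂ = n²/(2σ̂⁶) > 0, matching the numerical check.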

Example: Gamma distribution. Let X be a gamma random variable with shape parameter α and scale parameter β. Given a random sample of size n from the X distribution, the likelihood and log-likelihood functions are as follows:

    Lik(α, β) = ∏ᵢ [1/(Γ(α) β^α)] Xᵢ^(α−1) e^(−Xᵢ/β) = [1/(Γ(α) β^α)]ⁿ (∏ᵢ Xᵢ)^(α−1) e^(−Σᵢ Xᵢ/β)

    l(α, β) = −n log(Γ(α)) − nα log(β) + (α − 1) log(∏ᵢ Xᵢ) − (Σᵢ Xᵢ)/β

The partial derivatives of the log-likelihood with respect to α and β are as follows:

    l₁(α, β) =

    l₂(α, β) =

Since the system of equations l₁(α, β) = 0 and l₂(α, β) = 0 cannot be solved exactly, the computer is used to analyze specific samples. For example, assume that a random sample of size 12 has been observed from the X distribution (the data values are omitted here). The graph below is a contour diagram of the observed log-likelihood function

    z = l(α, β) = −12 log(Γ(α)) − 12α log(β) + (α − 1)(32.829) − (Σᵢ xᵢ)/β.

The contours shown in the graph correspond to z = −39.6, −40.0, −40.4, and so forth. The function is maximized at (α, β) = (5.91, 2.85) (the point of intersection of the dashed lines). Thus, the ML estimate of (α, β) is (5.91, 2.85).

Note that, for these data, the MOM estimate of (α, β) is (5.17, 3.25).
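The numerical maximization described above can be reproduced with standard tools. Since the original twelve observations are not shown, the sketch below uses simulated gamma data (so the estimates will differ from those in the example), minimizing the negative log-likelihood with `scipy.optimize.minimize` and comparing against the MOM estimates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

# Simulated sample (hypothetical data) with true (alpha, beta) = (5, 3).
rng = np.random.default_rng(1)
x = rng.gamma(shape=5.0, scale=3.0, size=200)
n = len(x)

def neg_loglik(params):
    """Negative gamma log-likelihood: -l(alpha, beta)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -(-n * gammaln(a) - n * a * np.log(b)
             + (a - 1) * np.sum(np.log(x)) - np.sum(x) / b)

res = minimize(neg_loglik, x0=[1.0, 1.0], method="Nelder-Mead",
               options={"maxiter": 2000, "maxfev": 2000})
alpha_hat, beta_hat = res.x
print(f"ML estimate:  alpha_hat = {alpha_hat:.3f}, beta_hat = {beta_hat:.3f}")

# MOM estimates for comparison: alpha*beta = mean, alpha*beta^2 = variance.
xbar, s2 = x.mean(), x.var()
beta_mom = s2 / xbar
alpha_mom = xbar / beta_mom
print(f"MOM estimate: alpha_mom = {alpha_mom:.3f}, beta_mom = {beta_mom:.3f}")
```

A useful sanity check on the optimizer: at the ML solution, setting l₂(α, β) = 0 forces α̂β̂ = x̄, so the product of the fitted parameters should match the sample mean.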
