Statistiek (WISB361)

Final exam, June 29, 2015

Write your name on every sheet you hand in. Also write your student number on sheet 1. The maximum number of points is 100. Points distribution: problem 1: 23 points, problem 2: 20 points, problem 3: 20 points, problem 4: 20 points, problem 5: 17 points.

1. Consider the sample $X = (X_i)_{i=1}^n$ of i.i.d. random variables with probability density function

   $f(x \mid \theta) = \frac{2\theta^2}{x^3}$ for $x \geq \theta$, and $0$ otherwise,

   for $\theta > 0$.

(a) [7pt] Determine the maximum likelihood estimators $\hat\theta_{MLE}$ and $\widehat{\theta^2}_{MLE}$ of $\theta$ and $\theta^2$, respectively.

   The likelihood for a realization $x$ of the sample can be written as

   $\mathrm{lik}(\theta) = \prod_{i=1}^n \frac{2\theta^2}{x_i^3}\, 1_{(\theta,+\infty)}(x_i) = \frac{2^n \theta^{2n}}{\prod_{i=1}^n x_i^3} \prod_{i=1}^n 1_{(0,x_i)}(\theta) = \frac{2^n \theta^{2n}}{\prod_{i=1}^n x_i^3}\, 1_{(0,x_{(1)})}(\theta),$

   where $X_{(1)} := \min_i X_i$. The likelihood is increasing in $\theta$, hence

   $\hat\theta_{MLE} = X_{(1)}.$

   By the invariance principle: $\widehat{\theta^2}_{MLE} = (\hat\theta_{MLE})^2 = X_{(1)}^2$.

(b) [4pt] Determine the distribution of $\hat\theta_{MLE}$.

   The distribution of $\hat\theta_{MLE}$ is the distribution of $X_{(1)}$. Therefore

   $F_{X_{(1)}}(x) = 1 - P(X_{(1)} > x) = 1 - \prod_{i=1}^n P(X_i > x) = 1 - (1 - F_{X_1}(x))^n,$

   with

   $F_{X_1}(x) = \int_\theta^x \frac{2\theta^2}{y^3}\, dy = 1 - \frac{\theta^2}{x^2}$ for $x > \theta$.

   Therefore

   $F_{X_{(1)}}(x) = \left(1 - \frac{\theta^{2n}}{x^{2n}}\right) 1_{(\theta,+\infty)}(x)$

   and the probability density function (pdf) is

   $f_{X_{(1)}}(x) = \frac{2n\,\theta^{2n}}{x^{2n+1}}\, 1_{(\theta,+\infty)}(x).$

(c) [3pt] Is $\hat\theta_{MLE}$ a biased estimator? Is $\hat\theta_{MLE}$ asymptotically unbiased?

   We have

   $E(\hat\theta_{MLE}) = E(X_{(1)}) = \int_\theta^{+\infty} x\, \frac{2n\,\theta^{2n}}{x^{2n+1}}\, dx = 2n\,\theta^{2n} \int_\theta^{+\infty} x^{-2n}\, dx = \frac{2n}{2n-1}\,\theta \neq \theta, \qquad (1)$

   so that $\hat\theta_{MLE}$ is biased. However,

   $\lim_{n \to \infty} E(X_{(1)}) = \theta,$

   so that $\hat\theta_{MLE}$ is asymptotically unbiased.
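The bias formula (1) can be checked with a short simulation. Below is a minimal sketch (assuming NumPy is available; the values of theta, n and the number of replications are arbitrary choices), sampling from f by inverse transform, since $F(x) = 1 - \theta^2/x^2$ gives $X = \theta/\sqrt{U}$ with $U$ uniform on $(0,1)$.

```python
# Monte Carlo check of equation (1): E[X_(1)] = 2n*theta/(2n-1).
import numpy as np

rng = np.random.default_rng(0)
theta, n, reps = 2.0, 5, 200_000                 # arbitrary choices

# Inverse transform: F(x) = 1 - theta^2/x^2 on [theta, inf), so X = theta / sqrt(U).
u = rng.uniform(size=(reps, n))
x = theta / np.sqrt(u)

mle = x.min(axis=1)                              # theta_hat_MLE = X_(1) in each replication
print("simulated E[X_(1)]         :", mle.mean())
print("theoretical 2n*theta/(2n-1):", 2 * n * theta / (2 * n - 1))
```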

(d) [3pt] Find the method of moments estimator $\hat\theta_{MoM}$ of $\theta$.

   Since $E(X_{(1)}) = E(X_1)$ when $n = 1$, by (1) with $n = 1$ we have that $E(X_1) = 2\theta$. Thus

   $\hat\theta_{MoM} = \frac{\bar X_n}{2},$

   where we denote by $\bar X_n$ the sample mean.

(e) [3pt] Find the variance of $\hat\theta_{MoM}$. Is it finite?

   $\mathrm{Var}(\hat\theta_{MoM}) = \frac{\mathrm{Var}(X_1)}{4n} = +\infty,$

   because $E(X_1^2) = \int_\theta^{+\infty} x^2\, \frac{2\theta^2}{x^3}\, dx = +\infty$.

(f) [3pt] Compare the mean squared errors (MSE) of $\hat\theta_{MLE}$ and $\hat\theta_{MoM}$. Which estimator is the most efficient?

   By point (d) it follows that $\hat\theta_{MoM}$ is unbiased. Hence $\mathrm{MSE}(\hat\theta_{MoM}) = \mathrm{Var}(\hat\theta_{MoM}) = +\infty$. As regards $\mathrm{MSE}(\hat\theta_{MLE})$, we have

   $\mathrm{MSE}(\hat\theta_{MLE}) = \mathrm{Var}(\hat\theta_{MLE}) + \left(E(\hat\theta_{MLE}) - \theta\right)^2 = \mathrm{Var}(X_{(1)}) + \frac{\theta^2}{(2n-1)^2}.$

   Since

   $E(X_{(1)}^2) = \int_\theta^{+\infty} x^2\, \frac{2n\,\theta^{2n}}{x^{2n+1}}\, dx = \frac{n}{n-1}\,\theta^2$ when $n \geq 2$,

   we get

   $\mathrm{MSE}(\hat\theta_{MLE}) = \frac{n}{n-1}\,\theta^2 - \frac{4n^2}{(2n-1)^2}\,\theta^2 + \frac{\theta^2}{(2n-1)^2} = \frac{\theta^2}{(n-1)(2n-1)}.$

   Therefore, if the sample size is larger than 1, $\hat\theta_{MLE}$ is more efficient than $\hat\theta_{MoM}$.

2. Coffee abuse is often considered to be related to an increase in heart rate (i.e. the number of heart beats per unit of time). In order to support this claim, an experiment was planned on a sample of 10 subjects. The heart rate was measured twice for each subject: the first time at rest, the second after having drunk a cup of coffee. The measurements at rest are

   $x = (x_1, \ldots, x_{10}) = (74, 68, 67, 71, 69, 65, 70, 70, 66, 67)$

   and after a cup of coffee

   $y = (y_1, \ldots, y_{10}) = (71, 72, 69, 66, 73, 77, 69, 68, 71, 78).$

   We can assume that the random variables $Z_i = Y_i - X_i$, denoting the difference between the heart rate after the cup of coffee and the heart rate at rest, are i.i.d. normal random variables.

(a) [12pt] Test the hypothesis that coffee increases the heart rate at the $\alpha = 0.05$ level of significance.

   We have paired normal observations and we will perform a paired t test. Since $Z_i := Y_i - X_i \sim N(\mu, \sigma^2)$, with $\mu := E(Y_i - X_i)$ and unknown variance $\sigma^2$, we want to test the hypotheses

   $H_0: \mu = 0, \qquad H_1: \mu > 0$

   at the $\alpha = 0.05$ level of significance. Consider the test statistic

   $T = \frac{\bar Z}{S/\sqrt{n}}, \qquad S^2 = \frac{1}{n-1}\sum_{i=1}^n (Z_i - \bar Z)^2;$

   under $H_0$, $T \sim t(n-1)$. With the data of the problem, we have the following realization $t$ of the test statistic:

   $t = \frac{\bar z}{s/\sqrt{n}} = \frac{2.7}{5.70/\sqrt{10}} \approx 1.50.$

   We perform a one-sided t test, so that the rejection region $B$ is $B = [t_{9,0.05}, +\infty) = [1.833, +\infty)$. Since $t \notin B$, we do not reject $H_0$ at the 0.05 level of significance.

(b) [5pt] Calculate the p-value of the test.

   p-value $= P(T \geq t \mid H_0)$. From the tables, we can just say that $0.05 <$ p-value $< 0.10$.

(c) [3pt] In case the normality assumption does not hold, how could you test the hypothesis of point (a)? (It is enough to explain which test is the most appropriate, without performing the analysis.)

   The (Wilcoxon) signed rank test.
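The realization $t$, the critical value $t_{9,0.05}$ and the p-value above can be reproduced numerically. A minimal sketch, assuming NumPy and SciPy are available:

```python
# Paired t-test for H0: mu = 0 vs H1: mu > 0 on the coffee data.
import numpy as np
from scipy import stats

x = np.array([74, 68, 67, 71, 69, 65, 70, 70, 66, 67])  # heart rate at rest
y = np.array([71, 72, 69, 66, 73, 77, 69, 68, 71, 78])  # heart rate after coffee
z = y - x
n = len(z)

t_obs = z.mean() / (z.std(ddof=1) / np.sqrt(n))  # realization of T (about 1.50)
t_crit = stats.t.ppf(0.95, df=n - 1)             # critical value t_{9,0.05} (about 1.833)
p_value = stats.t.sf(t_obs, df=n - 1)            # one-sided p-value (about 0.08)

print(t_obs, t_crit, p_value)
```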

3. Let $X_1$ and $X_2$ be i.i.d. random variables such that $X_i \sim \mathrm{Unif}[\theta, \theta + 1]$, for $i = 1, 2$, where $\mathrm{Unif}[\theta, \theta + 1]$ denotes the uniform distribution on the interval $[\theta, \theta + 1]$. In order to test

   $H_0: \theta = 0, \qquad H_1: \theta > 0$

   at the $\alpha = 0.05$ level of significance, we have two competing tests, with the following rejection regions:

   TEST1: Reject $H_0$ if $X_1 > 0.95$;
   TEST2: Reject $H_0$ if $X_1 + X_2 > C$,

   with $C \in \mathbb{R}$.

(a) [2pt] Find the significance level $\alpha$ of TEST1.

   $\alpha_1 = P(X_1 > 0.95 \mid \theta = 0) = 0.05.$

(b) [4pt] Find the value of $C$ so that TEST2 has the same significance level $\alpha$ as TEST1.

   Since $P(X_1 + X_2 > C \mid \theta = 0)$ is the area of the region inside the unit square (with vertices $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$) above the line $x_2 = C - x_1$, we have

   $\alpha_2(C) = P(X_1 + X_2 > C \mid \theta = 0) = \begin{cases} 1 - C^2/2 & \text{if } 0 \leq C \leq 1, \\ (2 - C)^2/2 & \text{if } 1 < C \leq 2, \\ 0 & \text{otherwise.} \end{cases}$

   We see that the equation $\alpha_2(C) = \alpha_1$ has a solution only for $1 < C \leq 2$. Therefore, the solution is

   $C = 2 - \sqrt{2\alpha_1} \approx 1.68.$

(c) [4pt] Calculate the power function of each test.

   For the first test:

   $\pi_1(\theta) = P(X_1 > 0.95 \mid \theta) = \begin{cases} 0 & \text{if } \theta \leq -0.05, \\ \theta + 0.05 & \text{if } -0.05 < \theta \leq 0.95, \\ 1 & \text{otherwise.} \end{cases}$

   For the second test, we notice that $\pi_2(\theta) = P(X_1 + X_2 > C \mid \theta)$ is the area of the unit square with vertices $(\theta, \theta)$, $(\theta, \theta+1)$, $(\theta+1, \theta)$ and $(\theta+1, \theta+1)$ above the line $x_2 = C - x_1$. Hence

   $\pi_2(\theta) = \begin{cases} 0 & \text{if } \theta \leq C/2 - 1, \\ (2\theta + 2 - C)^2/2 & \text{if } C/2 - 1 < \theta \leq (C-1)/2, \\ 1 - (C - 2\theta)^2/2 & \text{if } (C-1)/2 < \theta \leq C/2, \\ 1 & \text{if } \theta > C/2. \end{cases}$

(d) [6pt] Is TEST2 more powerful than TEST1?

   From point (c) it follows that TEST1 is more powerful for $\theta$ near 0, but TEST2 is more powerful for larger $\theta$. Hence, TEST2 is not uniformly more powerful than TEST1.

(e) [4pt] Show how to get a test that has the same significance level as TEST2 but is more powerful than TEST2.

   If either $X_1 \geq 1$ or $X_2 \geq 1$ we should reject $H_0$, because if $\theta = 0$ then $P(X_i < 1) = 1$. Thus, consider the rejection region

   $B := \{(x_1, x_2) : x_1 + x_2 > C\} \cup \{(x_1, x_2) : x_1 > 1\} \cup \{(x_1, x_2) : x_2 > 1\}.$

   The first set is the rejection region of TEST2. The test with rejection region $B$ has the same significance level as TEST2, because the last two sets both have probability 0 if $\theta = 0$. But for $0 < \theta < C - 1$ the power of this test is strictly larger than $\pi_2(\theta)$.
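The comparison in points (c)-(d) can be made concrete by evaluating both power functions on a grid. A minimal sketch, assuming NumPy is available (the grid of theta values is an arbitrary choice):

```python
# Evaluate the power functions of TEST1 and TEST2 from part (c) on a grid of theta values.
import numpy as np

C = 2 - np.sqrt(0.1)   # level-0.05 cutoff for TEST2, from part (b)

def pi1(theta):
    # Power of TEST1: P(X1 > 0.95 | theta) for X1 ~ Unif[theta, theta + 1]
    return min(max(theta + 0.05, 0.0), 1.0)

def pi2(theta):
    # Power of TEST2: area of the square [theta, theta+1]^2 above the line x1 + x2 = C
    if theta <= C / 2 - 1:
        return 0.0
    if theta <= (C - 1) / 2:
        return (2 * theta + 2 - C) ** 2 / 2
    if theta <= C / 2:
        return 1 - (C - 2 * theta) ** 2 / 2
    return 1.0

# TEST1 wins for theta near 0, TEST2 wins for larger theta (part (d)).
for theta in np.linspace(0.0, 1.0, 11):
    print(f"theta={theta:.1f}  pi1={pi1(theta):.3f}  pi2={pi2(theta):.3f}")
```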

4. Let the independent normal random variables $Y_1, Y_2, \ldots, Y_n$ be such that $Y_i \sim N(\mu, \alpha^2 x_i^2)$, for $i = 1, \ldots, n$, where the given constants $x_i$ are not all equal and none of them is zero.

(a) [13pt] Derive the least squares estimators of $\mu$ and $\alpha^2$, after you have properly rescaled the random variables $Y_i$.

   The linear model for $Y_i$ is

   $Y_i = \mu + \varepsilon_i, \qquad i = 1, 2, \ldots, n,$

   where the $\varepsilon_i$ are independent normal random variables with zero mean and different variances: $\mathrm{Var}(\varepsilon_i) = \alpha^2 x_i^2$. Let $Z_i = Y_i / x_i$. Then we have the linear model for $Z_i$:

   $Z_i = \mu\, \frac{1}{x_i} + \tilde\varepsilon_i, \qquad i = 1, 2, \ldots, n,$

   where the $\tilde\varepsilon_i$ are i.i.d. random variables with zero mean and common variance $\mathrm{Var}(\tilde\varepsilon_i) = \alpha^2$. Minimizing the sum of squares

   $S(\mu) = \sum_{i=1}^n \left(Z_i - \frac{\mu}{x_i}\right)^2,$

   we obtain the LS estimator $\hat\mu_{LSE}$ of $\mu$:

   $\hat\mu_{LSE} = \frac{\sum_i Z_i / x_i}{\sum_i 1/x_i^2} = \frac{\sum_i Y_i / x_i^2}{\sum_i 1/x_i^2},$

   and, from the residual sum of squares (RSS), we get the estimator of the variance:

   $\hat\alpha^2 = \frac{1}{n-1} \sum_{i=1}^n \left(Z_i - \frac{\hat\mu_{LSE}}{x_i}\right)^2 = \frac{1}{n-1} \sum_{i=1}^n \frac{(Y_i - \hat\mu_{LSE})^2}{x_i^2}.$

(b) [4pt] What is the distribution of $(n-1)\hat\alpha^2/\alpha^2$?

   It follows that $(n-1)\hat\alpha^2/\alpha^2$ has a $\chi^2_{n-1}$ distribution.

(c) [3pt] Discuss the test of the hypotheses $H_0: \alpha = 1, \; H_1: \alpha \neq 1$.

   By point (b) we can perform a two-sided $\chi^2_{n-1}$ test, using the test statistic $(n-1)\hat\alpha^2$.
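The estimators of part (a) amount to weighted least squares with weights $1/x_i^2$. A minimal sketch on simulated data, assuming NumPy is available (the values of mu, alpha, the constants x_i and the sample size are hypothetical choices):

```python
# Least squares estimators from part (a), computed on simulated data.
import numpy as np

rng = np.random.default_rng(1)
mu, alpha = 3.0, 0.5                                     # hypothetical true values
x = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])  # given nonzero constants x_i
n = len(x)

y = mu + alpha * x * rng.standard_normal(n)              # Y_i ~ N(mu, alpha^2 * x_i^2)

z = y / x                                                # rescaled observations Z_i = Y_i / x_i
mu_hat = np.sum(y / x**2) / np.sum(1 / x**2)             # mu_hat_LSE
alpha2_hat = np.sum((z - mu_hat / x) ** 2) / (n - 1)     # estimator of alpha^2

print(mu_hat, alpha2_hat)
```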

5. Consider the sample $X = (X_i)_{i=1}^n$ of i.i.d. random variables such that $X_i \sim N(\theta, \sigma^2)$, with $\sigma^2$ known and $\theta \in \Omega$, where the parameter space is $\Omega = \{-2, 0, 1\}$.

(a) [2pt] Show that $\bar X = \frac{1}{n} \sum_i X_i$ is a sufficient statistic for $\theta$ and that the likelihood $\mathrm{lik}(\theta)$ can be factorized as $\mathrm{lik}(\theta) = h(x)\, g_\theta(\bar x)$, where $x$ is a realization of $X$, $\bar x$ is a realization of $\bar X$ and

   $h(x) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \bar x)^2\right), \qquad g_\theta(\bar x) = \exp\left(-\frac{n(\bar x - \theta)^2}{2\sigma^2}\right).$

   We have

   $\mathrm{lik}(\theta) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \theta)^2\right) = (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \bar x + \bar x - \theta)^2\right)$

   $= (2\pi\sigma^2)^{-n/2} \exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n (x_i - \bar x)^2\right) \exp\left(-\frac{n(\bar x - \theta)^2}{2\sigma^2}\right) = h(x)\, g_\theta(\bar x).$

   Plotting the functions $g_\theta(y)$ for the three possible values of the parameter shows that maximizing $\mathrm{lik}(\theta)$ over $\Omega$ amounts to maximizing $g_\theta(\bar x)$, i.e. to choosing the value of $\theta$ in $\Omega$ closest to $\bar x$.

(b) [8pt] Find a maximum likelihood estimator $\hat\theta_{MLE}$ of $\theta$.

   From the comparison of the functions $g_\theta$ it follows that

   $\hat\theta_{MLE} = \begin{cases} -2 & \text{if } \bar x < -1, \\ 0 & \text{if } -1 \leq \bar x \leq 0.5, \\ 1 & \text{if } \bar x > 0.5, \end{cases}$

   the cut-offs $-1$ and $0.5$ being the midpoints between $-2$ and $0$ and between $0$ and $1$, respectively.
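The decision rule in (b) simply picks the admissible parameter value closest to $\bar x$. A minimal sketch, assuming NumPy is available (sigma, n, the true theta and the seed are hypothetical choices), computes $\hat\theta_{MLE}$ both by maximizing $g_\theta(\bar x)$ over $\Omega$ and by the piecewise rule, and checks that they agree:

```python
# MLE over the finite parameter space Omega = {-2, 0, 1} for N(theta, sigma^2) data.
import numpy as np

rng = np.random.default_rng(2)
omega = np.array([-2.0, 0.0, 1.0])            # parameter space Omega
sigma, n, theta_true = 2.0, 10, 0.0           # hypothetical values

x = rng.normal(theta_true, sigma, size=n)
xbar = x.mean()

# Maximize g_theta(xbar) = exp(-n * (xbar - theta)^2 / (2 * sigma^2)) over Omega ...
theta_mle = omega[np.argmax(-n * (xbar - omega) ** 2 / (2 * sigma**2))]

# ... which coincides with the piecewise rule from part (b).
theta_rule = -2.0 if xbar < -1 else (0.0 if xbar <= 0.5 else 1.0)

print(xbar, theta_mle, theta_rule)
```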

(c) [4pt] Find the probability mass function of $\hat\theta_{MLE}$.

   If we denote by $\theta_0$ the true value of the parameter, we have

   $P(\hat\theta_{MLE} = t \mid \theta_0) = \begin{cases} \Phi\!\left((-1 - \theta_0)\sqrt{n}/\sigma\right) & \text{if } t = -2, \\ \Phi\!\left((0.5 - \theta_0)\sqrt{n}/\sigma\right) - \Phi\!\left((-1 - \theta_0)\sqrt{n}/\sigma\right) & \text{if } t = 0, \\ 1 - \Phi\!\left((0.5 - \theta_0)\sqrt{n}/\sigma\right) & \text{if } t = 1, \end{cases}$

   for $\theta_0 \in \{-2, 0, 1\}$ and where $\Phi(\cdot)$ denotes the CDF of the standard normal distribution.

(d) [3pt] Is $\hat\theta_{MLE}$ a biased estimator?

   $\hat\theta_{MLE}$ is a biased estimator. In fact, in case $\theta_0 = -2$, we have

   $E(\hat\theta_{MLE} \mid \theta_0 = -2) = -2\,\Phi(\sqrt{n}/\sigma) + 1 - \Phi(2.5\sqrt{n}/\sigma) = -2\,\Phi(\sqrt{n}/\sigma) + \Phi(-2.5\sqrt{n}/\sigma) > -2\,\Phi(\sqrt{n}/\sigma) > -2,$

   because $0 < \Phi(\cdot) < 1$.
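The probability mass function in (c) and the bias in (d) can be checked by simulation. A minimal sketch, assuming NumPy and SciPy are available (sigma, n, theta_0 and the number of replications are hypothetical choices):

```python
# Check the pmf of theta_hat_MLE and its expectation for theta_0 = -2.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
sigma, n, theta0, reps = 2.0, 4, -2.0, 200_000     # hypothetical values

# Simulate the sample mean directly: Xbar ~ N(theta0, sigma^2 / n).
xbar = rng.normal(theta0, sigma / np.sqrt(n), size=reps)
mle = np.where(xbar < -1, -2.0, np.where(xbar <= 0.5, 0.0, 1.0))

# Simulated pmf versus the formula from part (c).
for t, p_theory in [(-2.0, norm.cdf((-1 - theta0) * np.sqrt(n) / sigma)),
                    (0.0, norm.cdf((0.5 - theta0) * np.sqrt(n) / sigma)
                          - norm.cdf((-1 - theta0) * np.sqrt(n) / sigma)),
                    (1.0, 1 - norm.cdf((0.5 - theta0) * np.sqrt(n) / sigma))]:
    print(t, (mle == t).mean(), p_theory)

print("E[theta_hat_MLE | theta0 = -2]:", mle.mean(), "(greater than -2, so biased)")
```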
