# Practice problems for Homework 11 - Point Estimation

1. (10 marks) Suppose we want to select a random sample of size 5 from the current CS 3341 students. Which of the following strategies is the best?

   a) Pick 5 students from the first row.
   b) Pick 5 of your friends from the class.
   c) Post a note on eLearning and take the first 5 students who respond.
   d) Wait near the door and pick the first 5 students who enter the class.
   e) Assign a number to each student in the class and use a random number generator to pick 5 students.

2. (10 marks) Consider the following results of 10 tosses of a coin: H, T, T, T, T, H, T, H, T, T.

   a) Estimate the probability of heads (H) for this coin.
   b) Estimate the standard error of your estimate.

3. (10 marks) The following data show the number of problems from the Practice Problems Set attempted in the past week by 10 randomly selected students: 2, 4, 0, 7, 1, 2, 0, 3, 2, 1.

   a) Find the sample mean.
   b) Find the sample variance.
   c) Estimate the mean number of practice problems attempted by a student in the past week.
   d) Estimate the standard error of the estimated mean.

4. (10 marks) Let X₁, ..., Xₙ denote a random sample from a Uniform(0, θ) distribution, with θ > 0 the unknown parameter, and let X̄ denote the sample mean.

   a) Is X̄ unbiased for θ? Explain your answer.
   b) Find an unbiased estimator for θ.
   c) Find the variance of the estimator in the previous part.
   d) Suggest an alternative estimator for θ.

5. (10 marks) A sample of 3 observations, (X₁ = 0.4, X₂ = 0.7, X₃ = 0.9), is collected from a continuous distribution with density

   f(x) = θx^(θ−1) for 0 < x < 1.

   Estimate θ
   a. by the method of moments;
   b. by the method of maximum likelihood.

6. (10 marks) (P. 648, #3, first part) The memory residence times of 13,171 jobs were measured; the sample mean was found to be 0.05 s, and the sample variance s² was also recorded. Assuming that the memory residence time is gamma-distributed, estimate its parameters r and λ using the method of moments.

7. (10 marks) Derive the method of moments and maximum likelihood estimators for:

   a) parameter p based on a Bernoulli(p) sample of size n;
   b) parameter p based on a Binomial(N, p) sample of size n — compute your estimates if the observed sample is (3, 6, 2, 0, 0, 3) and N = 10;
   c) parameter λ based on a Poisson(λ) sample of size n;
   d) parameters a and b based on a Uniform(a, b) sample of size n;
   e) parameter µ based on a Normal(µ, σ²) sample of size n with known variance σ² and unknown mean µ;
   f) parameter σ² based on a Normal(µ, σ²) sample of size n with known mean µ and unknown variance σ²;
   g) parameters (µ, σ²) based on a Normal(µ, σ²) sample of size n with unknown mean µ and unknown variance σ².
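The arithmetic in problems 2 and 3 is small enough to check with a short script. The sketch below (Python; the problem-3 data are taken to read 2, 4, 0, 7, 1, 2, 0, 3, 2, 1, consistent with the solutions that follow) computes the sample proportion, the sample mean and variance, and their standard errors:

```python
import math

# Problem 2: tosses H,T,T,T,T,H,T,H,T,T coded as 1 for heads, 0 for tails
tosses = [1, 0, 0, 0, 0, 1, 0, 1, 0, 0]
n = len(tosses)
p_hat = sum(tosses) / n                            # sample proportion of heads
se_p = math.sqrt(p_hat * (1 - p_hat) / n)          # estimated standard error of p_hat

# Problem 3: numbers of attempted problems (assumed reading of the data)
data = [2, 4, 0, 7, 1, 2, 0, 3, 2, 1]
m = len(data)
xbar = sum(data) / m                               # sample mean
s2 = sum((x - xbar) ** 2 for x in data) / (m - 1)  # sample variance (n - 1 divisor)
se_mean = math.sqrt(s2 / m)                        # standard error of the sample mean

print(round(p_hat, 3), round(se_p, 3))                  # 0.3 0.145
print(round(xbar, 1), round(s2, 1), round(se_mean, 3))  # 2.2 4.4 0.663
```

Running it reproduces the answers derived in the solutions: p̂ = 0.3 with standard error ≈ 0.145, and X̄ = 2.2, S² = 4.4 with standard error ≈ 0.663.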

Solutions:

1. Strategy (e) is the best, as it is the only strategy that ensures that each possible sample of 5 students is equally likely to be selected. The other strategies lead to convenience samples.

2. Let X denote the outcome of a single coin toss, with X = 1 if a head results and X = 0 if a tail results. This X is a Bernoulli(p) random variable, where p denotes the probability of heads. Let p̂ denote the estimator of p.

   a) The estimated value of p is p̂ = (1 + 0 + 0 + 0 + 0 + 1 + 0 + 1 + 0 + 0)/10 = 0.3.

   b) The estimated standard error of p̂ is √(p̂(1 − p̂)/n) = √(0.3 · 0.7/10) ≈ 0.145.

3. a) X̄ = Σᵢ₌₁ⁿ Xᵢ/n = (2 + 4 + 0 + 7 + 1 + 2 + 0 + 3 + 2 + 1)/10 = 2.2.

   b) S² = Σᵢ₌₁ⁿ (Xᵢ − X̄)²/(n − 1) = ((2 − 2.2)² + (4 − 2.2)² + ⋯ + (1 − 2.2)²)/(10 − 1) = 4.4.

   c) The estimate of the mean is X̄ = 2.2.

   d) The estimated standard error of X̄ is S/√n = √(4.4/10) ≈ 0.663.

4. Let X denote a Uniform(0, θ) random variable.

   a) No. The expectation E(X) is θ/2, which implies that E(X̄) = E(X) = θ/2 ≠ θ.

   b) The estimator 2X̄ is unbiased, since E(2X̄) = 2 E(X) = θ for all θ.

   c) Var(2X̄) = 4 Var(X̄) = 4 Var(X)/n = 4θ²/(12n) = θ²/(3n).

   d) Since θ denotes the maximum value in the population, a natural estimator is the maximum value in the sample, i.e., max{X₁, ..., Xₙ}.

5. a. Method of moments. Compute

µ₁ = E(X) = ∫₀¹ x f(x) dx = ∫₀¹ θ x^θ dx = [θ x^(θ+1)/(θ + 1)] from x = 0 to x = 1 = θ/(θ + 1),

equate it to the first sample moment m₁, and solve for θ:

m₁ = X̄ = (0.4 + 0.7 + 0.9)/3 = 2/3, so θ/(θ + 1) = 2/3, which gives θ̂ = 2.

b. Method of maximum likelihood. The joint density is

f(X₁, X₂, X₃) = ∏ᵢ₌₁³ θ Xᵢ^(θ−1).

Take the logarithm,

ln f(X₁, X₂, X₃) = Σᵢ₌₁³ {ln θ + (θ − 1) ln Xᵢ} = 3 ln θ + (θ − 1) Σᵢ₌₁³ ln Xᵢ.

Take the derivative, equate it to 0, and solve for θ:

∂ ln f/∂θ = 3/θ + Σᵢ₌₁³ ln Xᵢ = 0,

θ̂ = −3/Σᵢ₌₁³ ln Xᵢ = −3/(ln 0.4 + ln 0.7 + ln 0.9) ≈ 2.18.

6. For the Gamma distribution, E(X) = r/λ and Var(X) = r/λ². So, the first two moments are

µ₁ = r/λ and µ₂ = r/λ² + (r/λ)².

Also, the first two sample moments are M₁ = X̄ and M₂ = (1/n) Σᵢ₌₁ⁿ Xᵢ². Then, the method of moments estimators can be obtained by solving the system of equations

r/λ = M₁,
r/λ² + (r/λ)² = M₂

with respect to r and λ. Substituting the first equation into the second gives r/λ² + M₁² = M₂, so r/λ² = M₂ − M₁², and therefore

r̂ = M₁²/(M₂ − M₁²), λ̂ = M₁/(M₂ − M₁²).

Notice that

M₂ − M₁² = (1/n) Σ (Xᵢ − X̄)² = (n − 1)s²/n = (13170/13171) s² ≈ s².

Then we can compute the estimates

r̂ = X̄²/s² = 0.37, λ̂ = X̄/s² = r̂/X̄ = 0.37/0.05 = 7.4.

It is easier to use the central second moments, s² and Var(X), instead of E(X²) and M₂. This way, we solve the system of equations

E(X) = r/λ = X̄,
Var(X) = r/λ² = s².

Solving this system in terms of r and λ, we immediately get the same method of moments estimates, r̂ = X̄²/s² = 0.37 and λ̂ = X̄/s² = 7.4.

7a. (Bernoulli) This is a special case of 7b with N = 1.

Following 7b, both methods result in p̂ = X̄.

7b. (Binomial) Method of moments. To estimate the parameter p of the Binomial(N, p) distribution, recall that µ₁ = E(X) = Np. There is only one unknown parameter p, hence we write one equation: equate µ₁ = E(X) = Np to X̄ and solve, obtaining p̂ = X̄/N.

Maximum likelihood. Start with the joint p.m.f.

f(X₁, ..., Xₙ) = ∏ᵢ₌₁ⁿ (N choose Xᵢ) p^(Xᵢ) (1 − p)^(N − Xᵢ),

then

ln f(X₁, ..., Xₙ) = Σ ln(N choose Xᵢ) + (Σ Xᵢ) ln p + (Σ (N − Xᵢ)) ln(1 − p),

∂/∂p ln f(X₁, ..., Xₙ) = (1/p) Σ Xᵢ − (1/(1 − p)) Σ (N − Xᵢ) = nX̄/p − n(N − X̄)/(1 − p) = 0

(we find roots of the derivative in order to maximize the density). Solving this equation,

(1 − p) X̄ = p(N − X̄), so p = X̄/N.

Answer: p̂ = X̄/N.

7c. (Poisson) Method of moments. To estimate the parameter λ of the Poisson(λ) distribution, recall that µ₁ = E(X) = λ. There is only one unknown parameter, hence we write one equation,

µ₁ = λ = m₁ = X̄.

Solving it for λ, we obtain λ̂ = X̄, the method of moments estimator of λ.

Maximum likelihood. The p.m.f. of the Poisson distribution is

P(x) = e^(−λ) λ^x / x!,

and its logarithm is ln P(x) = −λ + x ln λ − ln(x!). Thus, we need to maximize

ln P(X₁, ..., Xₙ) = Σᵢ₌₁ⁿ (−λ + Xᵢ ln λ) + C = −nλ + (ln λ) Σᵢ₌₁ⁿ Xᵢ + C,

where C = −Σ ln(Xᵢ!) is a constant that does not contain the unknown parameter λ. Find the critical point(s) of this log-likelihood. Differentiating it and equating its derivative to 0, we get

∂/∂λ ln P(X₁, ..., Xₙ) = −n + (1/λ) Σᵢ₌₁ⁿ Xᵢ = 0.

This equation has only one solution,

λ̂ = (1/n) Σᵢ₌₁ⁿ Xᵢ = X̄.

Since this is the only critical point, and since the likelihood vanishes (converges to 0) as λ → 0 or λ → ∞, we conclude that λ̂ is the maximizer. Therefore, it is the maximum likelihood estimator of λ.

For the Poisson distribution, the method of moments and the method of maximum likelihood return the same estimator, λ̂ = X̄.

7d. (Uniform)

Method of moments. Use two moments: µ₁ = (a + b)/2 and, for example, the central second moment µ₂ = Var(X) = (b − a)²/12. Find â and b̂ by solving the system

(a + b)/2 = X̄,
(b − a)²/12 = S²,

which gives

â = X̄ − S√3, b̂ = X̄ + S√3.

Maximum likelihood. The joint density

f(X₁, ..., Xₙ) = 1/(b − a)ⁿ if a ≤ X₁, ..., Xₙ ≤ b, and 0 otherwise,

is monotonically increasing in a and decreasing in b. It is therefore maximized at the largest value of a and the smallest value of b for which this density is not 0. These are

â = min(Xᵢ) and b̂ = max(Xᵢ).

7e. (Normal with unknown mean) Method of moments. Equate µ to X̄ and trivially obtain µ̂ = X̄.

Maximum likelihood. The joint density is

f(X₁, ..., Xₙ) = ∏ᵢ₌₁ⁿ (1/(σ√(2π))) exp{−(Xᵢ − µ)²/(2σ²)}.

Then

ln f(X₁, ..., Xₙ) = −n ln(σ√(2π)) − Σ (Xᵢ − µ)²/(2σ²) = −n ln(σ√(2π)) − (nµ² − 2µ Σ Xᵢ + Σ Xᵢ²)/(2σ²)

is a parabola in terms of µ, and it is maximized at µ̂ = Σ Xᵢ/n = X̄.

7f. (Normal with unknown variance) Method of moments. The first population moment is not a function of σ², so we equate the second moment (central, for simplicity) σ² to the sample variance S² and obtain σ̂² = S².

Since µ is known, we can also use the estimator

σ̂² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − µ)²

instead of S².

Maximum likelihood. Differentiate ln f(X₁, ..., Xₙ) from 7e with respect to σ:

∂/∂σ ln f(X₁, ..., Xₙ) = −n/σ + Σ (Xᵢ − µ)²/σ³ = 0

(we find roots of the derivative in order to maximize the density), so that

σ̂² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − µ)².

7g. (Normal with both parameters unknown) Method of moments. When both µ and σ² are unknown, we equate the first moment and the second central moment (for simplicity) to their sample counterparts and trivially obtain µ̂ = X̄ and σ̂² = (1/n) Σ (Xᵢ − X̄)².

Maximum likelihood. First, maximizing ln f(X₁, ..., Xₙ) from 7e in µ, we again obtain µ̂ = X̄, regardless of the value of σ. Then, substitute this maximizer for the unknown µ in ln f(X₁, ..., Xₙ) and maximize the resulting function in terms of σ. We get the same answer as in 7f with the known mean µ replaced by X̄:

σ̂² = (1/n) Σᵢ₌₁ⁿ (Xᵢ − X̄)².
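As a quick numerical sanity check on two of the derivations above, the sketch below verifies problem 5 (both estimators of θ) and compares the method of moments and maximum likelihood estimators of problem 7d on a simulated Uniform(a, b) sample. The simulation settings (seed 7, n = 1000, a = 2, b = 5) are illustrative choices, not part of the original problems.

```python
import math
import random

# Problem 5: sample from f(x) = theta * x^(theta - 1), 0 < x < 1
x = [0.4, 0.7, 0.9]
n = len(x)
xbar = sum(x) / n                              # = 2/3
theta_mom = xbar / (1 - xbar)                  # solve theta/(theta + 1) = xbar
theta_mle = -n / sum(math.log(v) for v in x)   # -3 / (ln 0.4 + ln 0.7 + ln 0.9)

# Problem 7d: Uniform(a, b), method of moments vs maximum likelihood
random.seed(7)                                 # illustrative simulation settings
a_true, b_true = 2.0, 5.0
xs = [random.uniform(a_true, b_true) for _ in range(1000)]
m = len(xs)
mean = sum(xs) / m
s = math.sqrt(sum((v - mean) ** 2 for v in xs) / (m - 1))

# moment estimates: X̄ ± S*sqrt(3)
a_mom, b_mom = mean - s * math.sqrt(3), mean + s * math.sqrt(3)
# maximum likelihood estimates: sample extremes
a_mle, b_mle = min(xs), max(xs)

print(round(theta_mom, 2), round(theta_mle, 2))   # 2.0 2.18
```

The two estimates of θ match the closed forms derived above (2 and ≈ 2.18). For the uniform sample, the sample extremes never fall outside the true interval (2, 5), whereas the moment estimates can overshoot it on either side.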
