# Hence, multiplying by 12, the 95% interval for the hourly rate is (965, 1435)


## Confidence Intervals for Poisson Data

For an observation from a Poisson distribution, we have σ² = λ. If we observe r events, then our estimate is λ̂ = r, and approximately λ̂ : N(λ, λ). If r is bigger than 20, we can use this to find an approximate confidence interval.

Example: A Geiger counter records 100 radioactive decays in 5 minutes. Find an approximate 95% confidence interval for the number of decays per hour. We start by finding an interval for the number in a 5-minute period. The estimated standard deviation is s = √100 = 10, so the interval is λ̂ ± 1.96√λ̂ = 100 ± 19.6, i.e. (80.4, 119.6). Hence, multiplying by 12, the 95% interval for the hourly rate is (965, 1435).

Example: BBC news (23 February 2010). The homicide rate in Scotland fell last year to 99 from 115 the year before. Is this reported change really newsworthy? The actual number of homicides will vary from year to year. If we assume that each homicide is an independent event, then a Poisson distribution could be a reasonable model. Consider the figure for the earlier year. A 95% confidence interval for the true homicide rate based on it is 115 ± 1.96√115 = 115 ± 21.0, i.e. (94.0, 136.0). Thus the true rate could be even lower than 99 per year. It is not reasonable to conclude that there has been a reduction in the true rate.

Note: If some incidents result in more than one death, the assumption of independence will be wrong. The effect of this will be to widen the confidence interval.
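The two Poisson intervals above can be checked with a few lines of Python (standard library only; the function name is ours, not from the notes):

```python
import math

def poisson_ci(count, z=1.96):
    """Approximate CI for a Poisson count, valid when the count exceeds ~20."""
    half = z * math.sqrt(count)        # estimated sd of the count is sqrt(count)
    return count - half, count + half

# Geiger counter: 100 decays observed in 5 minutes
lo, hi = poisson_ci(100)
print(round(lo, 1), round(hi, 1))      # 80.4 119.6
print(round(12 * lo), round(12 * hi))  # hourly rate: 965 1435

# Scottish homicides: 115 in the earlier year
print(tuple(round(v, 1) for v in poisson_ci(115)))  # (94.0, 136.0)
```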

## Confidence Interval for Continuous Data

Example: A series of 10 water samples gave the following arsenic concentrations (µg As/l): 5.5, 6.9, 3.6, 3.5, 3.5, 9.8, 7.1, 4.7, 3.8, 5.0.

The sample mean is x̄ = 5.34 and the sample standard deviation is s = 2.06. We wish to find a 95% confidence interval. We cannot assume that s ≈ σ, since the sample is too small to ensure this. We allow for this by using a multiplier t which is bigger than 1.96: instead of x̄ ± 1.96 s/√n we use x̄ ± t s/√n. The exact value of t depends on n and is given in Lindley & Scott Table 10. We take ν = n − 1 = 9 and P = 2.5%, which gives t = 2.262. Hence the confidence interval is x̄ ± t s/√n = 5.34 ± 2.262 × 2.06/√10 = 5.34 ± 1.47, i.e. (3.87, 6.81).

Note 1: The parameter ν (nu) used in looking up the value of t in Table 10 is called the degrees of freedom.

Note 2: All of the confidence interval examples above have been of the same form. The interval is symmetric about the estimate, and the width of the interval is a constant times a value such as s/√n. The latter is called the Standard Error of the Mean, or more generally the Standard Error of the estimate. Most publications prefer to report their results as estimates and the corresponding standard errors, and assume readers can construct the appropriate confidence intervals if they require them.

Note 3: The method for finding confidence intervals when n is small will only work well if the data come from a distribution that is roughly Normal. We can check this by plotting the data. If the sample size is bigger than about 30, the value of t is not much bigger than 1.96; in such cases the value 2 is often used for convenience.
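The small-sample interval can be reproduced with the standard library; the t value 2.262 is the table lookup described above:

```python
import math
import statistics

data = [5.5, 6.9, 3.6, 3.5, 3.5, 9.8, 7.1, 4.7, 3.8, 5.0]
xbar = statistics.mean(data)    # 5.34
s = statistics.stdev(data)      # sample sd with n-1 divisor, ~2.06
t = 2.262                       # Lindley & Scott Table 10: nu = 9, P = 2.5%
half = t * s / math.sqrt(len(data))
print(round(xbar - half, 2), round(xbar + half, 2))  # 3.87 6.81
```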

## Confidence Interval for a Difference (Rees 9.11)

Suppose we have independent random samples from two different populations. We will use the random variable X₁ for observations from the first population and X₂ for the second. Suppose also that

X̄₁ : N(µ₁, σ²/n₁)  and  X̄₂ : N(µ₂, σ²/n₂),

where n₁ and n₂ are the sample sizes. Note that we have assumed that the two populations have different means but the same standard deviation. Then

(X̄₁ − X̄₂) : N(µ₁ − µ₂, σ²(1/n₁ + 1/n₂)).

Note: we subtract the means but we add the variances.

We can use the same ideas as those used to obtain a confidence interval for a single sample. If we can estimate σ then we will be able to estimate a standard error. Then the confidence interval will be the estimated difference plus or minus a suitable multiple of the standard error.

Example: A further 4 water samples were taken from the same borehole as in the earlier example. The arsenic concentrations were (µg As/l): 5.7, 6.3, 9.6, 7.0. The sample mean is x̄₂ = 7.15 and the sample standard deviation is s₂ = 1.72. The previous values were x̄₁ = 5.34, s₁ = 2.06. How much have arsenic levels increased?

The estimate of σ comes from a weighted average of the squared standard deviations, the weights being the degrees of freedom:

s² = (ν₁s₁² + ν₂s₂²)/(ν₁ + ν₂) = ((n₁ − 1)s₁² + (n₂ − 1)s₂²)/(n₁ + n₂ − 2) = ((10 − 1) × 2.06² + (4 − 1) × 1.72²)/12 = 3.92,

so s = 1.98.

The standard error of the difference is

s √(1/n₁ + 1/n₂) = 1.98 × √(1/10 + 1/4) = 1.17.

The sample sizes are small, so we need to use a value of t from NCST Table 10. We use the sum of the degrees of freedom for the two samples: ν = ν₁ + ν₂ = (n₁ − 1) + (n₂ − 1) = n₁ + n₂ − 2 = 12, P = 2.5%, giving t = 2.179. So the 95% confidence interval is

(7.15 − 5.34) ± 2.179 × 1.17 = 1.81 ± 2.55, i.e. (−0.74, 4.36).

Note that this interval includes the value zero. Thus although our estimate suggests that the true concentration has gone up by 1.81 µg/l, it is possible that it has actually decreased. In the UK, safe levels for arsenic in water are considered to be less than 10 µg/l.

The previous example relied on the two samples being independent. If they are not, then the calculations are different.

Example: Sixteen matched pairs of cars, of a variety of makes and ages, were used in a test of a new fuel additive. Within each pair, one car was selected at random to use the additive and the other did not. The results were recorded in miles per gallon. (The table of the 16 paired results is not legible in this transcription.)

If you do a scatter plot of the pairs, you will find that the 16 points lie close to a line through the origin, showing that the two results for a pair are similar to each other. This means that the two samples are not independent. To allow for this lack of independence, we find the difference between the two results for each pair. It is important to keep the correct sign with these differences. Subtracting the result without the additive from the result with the additive gives 16 difference values.
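The pooled two-sample calculation above can be sketched directly from the arsenic data (t = 2.179 as above):

```python
import math
import statistics

x1 = [5.5, 6.9, 3.6, 3.5, 3.5, 9.8, 7.1, 4.7, 3.8, 5.0]  # first 10 samples
x2 = [5.7, 6.3, 9.6, 7.0]                                  # further 4 samples
n1, n2 = len(x1), len(x2)
s1, s2 = statistics.stdev(x1), statistics.stdev(x2)

# pooled estimate of sigma, weighted by degrees of freedom
s = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
se = s * math.sqrt(1 / n1 + 1 / n2)        # standard error of the difference
diff = statistics.mean(x2) - statistics.mean(x1)  # 7.15 - 5.34 = 1.81

t = 2.179                                  # NCST Table 10: nu = 12, P = 2.5%
lo, hi = diff - t * se, diff + t * se
print(round(lo, 2), round(hi, 2))          # -0.74 4.36
```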

This is just a single sample of 16 values, so we can find a 95% confidence interval in the same way as before. The mean is 1.531 and the standard deviation is 1.222, so the 95% confidence interval is

1.531 ± 2.131 × 1.222/√16, i.e. (0.880, 2.183).

This interval does not come close to including zero, so it is clear that the additive does improve the fuel consumption figures.

Note 1: The last two examples look similar in that there are two sets of values to be compared. However, they must be analysed in different ways. For the car example, each measurement in one set tells us something about one of the values in the other set, because the cars in a pair were chosen to be similar to each other. For the arsenic example, the two sets of water samples were taken at different times from the same location; there is no direct or indirect matching of samples. However, if samples had been taken at the same times from two or more different wells, allowance should be made for the lack of independence.

Note 2: For both of these examples, we found a 95% interval for the true value of the difference between the groups. In the first case, the interval included zero, which implied that there might not have been a change. This does not mean that there has been no change: there might have been a change, but we did not have enough data. For the second example, zero was well outside the confidence interval, so we were able to conclude that the additive had improved fuel consumption.

There is another way to consider whether a true difference could be zero. This is to assume that there is no difference and then see how likely it is to obtain the difference that was observed, or a more extreme difference. If the probability is small, the conclusion is that there really was a difference between the two groups. This is called a hypothesis test. The calculations behind hypothesis tests are essentially the same as those used in calculating a confidence interval, so the conclusions will always be the same. If there is a difference, a confidence interval is more informative because it tells us what values of the true difference are likely.
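Using the summary figures quoted above (mean 1.531, sd 1.222, n = 16) and t = 2.131 for ν = 15, the paired-difference interval is:

```python
import math

mean_d, sd_d, n = 1.531, 1.222, 16   # summary of the 16 paired differences
t = 2.131                            # t table: nu = 15, P = 2.5%
half = t * sd_d / math.sqrt(n)
lo, hi = mean_d - half, mean_d + half
print(round(lo, 2), round(hi, 2))    # 0.88 2.18
```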

## Standard Errors

We have met four types of confidence interval: Binomial probability; Poisson rate; mean of a continuous variable; difference between the means of two independent samples of a continuous variable. Each of these has followed the same pattern:

Estimate ± t × Standard Error of estimate

For a 95% interval, the value of t is 1.96 if the standard deviation is known, and a little larger than this if it has to be estimated from the data. The formula for the standard error depends on the type of estimate:

- Binomial: √(p(1 − p)/n)
- Poisson: √λ
- Sample mean: σ/√n

## Standard Errors of Regression Estimates

If a simple regression model is adequate, it is useful to know the accuracy of the parameter estimates and of the predictions that are made from the model. We can do this by calculating standard errors and confidence intervals. For the slope:

Variance of β̂ = σ²/S_XX

The residual standard deviation s is used as an estimate of σ. To calculate a confidence interval for β̂, we use t with n − 2 degrees of freedom.

Example: For the stream data, β̂ = 9.55 and S.E.(β̂) = 2.10. For a 95% confidence interval with 9 degrees of freedom, t = 2.262, so the C.I. for β̂ is 9.55 ± 2.262 × 2.10, i.e. (4.8, 14.3).

For the intercept:

Variance of α̂ = σ²(1/n + x̄²/S_XX)
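The slope interval can be sketched numerically. The stream data's s and S_XX are not legible in this transcription, so the inputs below are hypothetical, chosen only to give a standard error of about 2.1 as in the worked interval:

```python
import math

def slope_ci(beta_hat, s, sxx, t):
    """CI for the slope of a simple regression: beta_hat +/- t * sqrt(s^2 / Sxx)."""
    se = math.sqrt(s**2 / sxx)           # S.E.(beta_hat), with s estimating sigma
    return beta_hat - t * se, beta_hat + t * se

# hypothetical s and Sxx (NOT the actual stream-data values)
lo, hi = slope_ci(beta_hat=9.55, s=1.05, sxx=0.25, t=2.262)
print(round(lo, 1), round(hi, 1))        # 4.8 14.3
```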

Example: For the stream data, α̂ = −0.63 and x̄ = 0.55, giving S.E.(α̂) = 1.20. A 95% confidence interval for α̂ is −0.63 ± 2.262 × 1.20, i.e. (−3.35, 2.09).

This interval contains zero. This suggests that we could use the simpler regression model

Flow = β × Depth + ε.

This model makes better physical sense. It is straightforward to fit a regression without an intercept; in the formulae above S_XX, S_XY, S_YY are replaced by Σx², Σxy, Σy² respectively.

If we wish to predict the y value at some new value x, we just use the regression equation ŷ = α̂ + β̂x. Then

Variance of ŷ = σ²(1/n + (x − x̄)²/S_XX)

This can be used to calculate a confidence interval. The variance of the estimate of the intercept α̂ that was given earlier is a special case of this: just set x = 0 in the formula.

Note: The best predictions are close to the mean of the x values. If we know in advance where predictions will be required, we should centre the x values on this.

Example: For the stream data, suppose x = 0.75. Then ŷ = α̂ + β̂x = 6.54 and S.E.(ŷ) = 0.53, so a 95% C.I. is (5.33, 7.75). This interval is where we think that the true mean flow at a depth of 0.75 m is likely to be.

We can also calculate another interval, which is where we think that a new observation of flow at a depth of 0.75 m is likely to be. This is called a prediction interval. The variance that is used to calculate a prediction interval is similar to that used for a confidence interval; we just need to add σ². Thus:

Variance = σ²(1 + 1/n + (x − x̄)²/S_XX)

Example: For the stream data at x = 0.75, S.E.(Prediction) = 1.22, so a 95% P.I. is (3.77, 9.31).

Note: In practice, confidence intervals are used much more often than prediction intervals.

## Significance Tests

The calculation of confidence intervals is based on the Central Limit Theorem: we do not expect an estimate to be far from the true (unknown) value. "Far" here is relative to the standard error of the estimate. If we are interested in whether the true value has a particular value, such as zero, we see whether that value lies within the confidence interval. An alternative method is to standardise:

(Estimate − Value of Interest) / S.E.(Estimate)

This is called a Test Statistic. We use the Normal distribution or t distribution tables to find the probability of getting a value as extreme as this or more extreme, either positive or negative. The result is called a Significance Probability. A small probability means that it is unlikely that the parameter being estimated takes the value specified. A large value means that the value is plausible. A 95% confidence interval is just the collection of all the plausible values: those with a significance probability greater than 0.05.

Example: Stream flow data. α̂ = −0.63, value of interest α = 0, S.E.(α̂) = 1.20. Test statistic: t = −0.63/1.20 = −0.53. Use the t tables with 9 degrees of freedom. From NCST Table 9, P(t < 0.5) ≈ 0.69 and P(t < 0.6) ≈ 0.72, so P(t < 0.53) ≈ 0.70. So the significance probability is approximately 2(1 − 0.70) = 0.60. This probability is large, so the intercept could be zero. This suggests that we could use the simpler model Flow = β × Depth + ε.

Note: The simpler model will lead to a different estimate of β.

The reasoning that we have used in making a decision from a significance probability is called a Hypothesis Test. Formally we:

1. Have a Null Hypothesis (e.g. the true value of α is 0).
2. Calculate a test statistic, e.g. (Estimate − 0)/S.E.(Estimate).
3. Calculate a significance probability.
4. Reject the Null Hypothesis if the significance probability is small.
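The significance probability can be sketched in Python. The intercept estimate and its standard error are inferred from the interval (−3.35, 2.09) quoted earlier, and the Normal approximation (via `math.erf`) stands in for the t table, so the result is close to, not identical with, a table-based figure:

```python
import math

def two_sided_p_normal(estimate, value0, se):
    """Two-sided significance probability under the Normal approximation.
    For small samples the t distribution should be used instead."""
    z = (estimate - value0) / se                      # test statistic
    cdf = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))) # P(Z < |z|)
    return 2 * (1 - cdf)

# stream-data intercept, values inferred from the printed interval
p = two_sided_p_normal(-0.63, 0, 1.20)
print(round(p, 2))  # ~ 0.6, large, so the intercept could be zero
```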

Typically, there is an Alternative Hypothesis to the null hypothesis. The test statistic is chosen to give as good a discrimination as possible between the two hypotheses.

There are some difficulties with the hypothesis testing procedure. Firstly, the failure of any of the assumptions can lead to a small significance probability, not just the assumption of particular interest. (For example, the test above also assumes that the data are independent observations that come from a Normal distribution.) Secondly, acceptance of the Null Hypothesis does NOT mean that it is true, although users often treat it as if it were.

Example: Risks to public health. Scientists will conclude that there is no evidence of any risk. Politicians like to treat this as implying that there is no risk. Journalists like to treat this as implying that there is a risk. Statisticians would like to say that they can be pretty sure that the risk is less than x cases per million, but the general public is not good at interpreting this. (cf. rail travel v. car travel.)

Statistical tests are widely (over-)used in scientific journals. Conventionally, the significance probabilities are graded into:

- P > 0.05 (Not Significant)
- P < 0.05, P < 0.01 (Statistically Significant)
- P < 0.001 (Highly Significant)

Note: Statistical Significance is not the same as Practical Significance. If sample sizes are small, a statistical test may not detect an important effect. If sample sizes are large, a statistical test may detect an effect that is too small to be useful.


### Chapter 7. Section Introduction to Hypothesis Testing

Section 7.1 - Introduction to Hypothesis Testing Chapter 7 Objectives: State a null hypothesis and an alternative hypothesis Identify type I and type II errors and interpret the level of significance Determine

### Statistical Significance and Bivariate Tests

Statistical Significance and Bivariate Tests BUS 735: Business Decision Making and Research 1 1.1 Goals Goals Specific goals: Re-familiarize ourselves with basic statistics ideas: sampling distributions,

### Summary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4)

Summary of Formulas and Concepts Descriptive Statistics (Ch. 1-4) Definitions Population: The complete set of numerical information on a particular quantity in which an investigator is interested. We assume

### Confidence level. Most common choices are 90%, 95%, or 99%. (α = 10%), (α = 5%), (α = 1%)

Confidence Interval A confidence interval (or interval estimate) is a range (or an interval) of values used to estimate the true value of a population parameter. A confidence interval is sometimes abbreviated

### 6.4 Normal Distribution

Contents 6.4 Normal Distribution....................... 381 6.4.1 Characteristics of the Normal Distribution....... 381 6.4.2 The Standardized Normal Distribution......... 385 6.4.3 Meaning of Areas under

### 2013 MBA Jump Start Program. Statistics Module Part 3

2013 MBA Jump Start Program Module 1: Statistics Thomas Gilbert Part 3 Statistics Module Part 3 Hypothesis Testing (Inference) Regressions 2 1 Making an Investment Decision A researcher in your firm just

### CHI-SQUARE: TESTING FOR GOODNESS OF FIT

CHI-SQUARE: TESTING FOR GOODNESS OF FIT In the previous chapter we discussed procedures for fitting a hypothesized function to a set of experimental data points. Such procedures involve minimizing a quantity

### Simple Linear Regression in SPSS STAT 314

Simple Linear Regression in SPSS STAT 314 1. Ten Corvettes between 1 and 6 years old were randomly selected from last year s sales records in Virginia Beach, Virginia. The following data were obtained,

### 1 Sufficient statistics

1 Sufficient statistics A statistic is a function T = rx 1, X 2,, X n of the random sample X 1, X 2,, X n. Examples are X n = 1 n s 2 = = X i, 1 n 1 the sample mean X i X n 2, the sample variance T 1 =

### HYPOTHESIS TESTING (ONE SAMPLE) - CHAPTER 7 1. used confidence intervals to answer questions such as...

HYPOTHESIS TESTING (ONE SAMPLE) - CHAPTER 7 1 PREVIOUSLY used confidence intervals to answer questions such as... You know that 0.25% of women have red/green color blindness. You conduct a study of men

### Chapter Additional: Standard Deviation and Chi- Square

Chapter Additional: Standard Deviation and Chi- Square Chapter Outline: 6.4 Confidence Intervals for the Standard Deviation 7.5 Hypothesis testing for Standard Deviation Section 6.4 Objectives Interpret

### Factors affecting online sales

Factors affecting online sales Table of contents Summary... 1 Research questions... 1 The dataset... 2 Descriptive statistics: The exploratory stage... 3 Confidence intervals... 4 Hypothesis tests... 4

### One-Way Analysis of Variance

One-Way Analysis of Variance Note: Much of the math here is tedious but straightforward. We ll skim over it in class but you should be sure to ask questions if you don t understand it. I. Overview A. We

### Statistical estimation using confidence intervals

0894PP_ch06 15/3/02 11:02 am Page 135 6 Statistical estimation using confidence intervals In Chapter 2, the concept of the central nature and variability of data and the methods by which these two phenomena

### Chapter 7 Part 2. Hypothesis testing Power

Chapter 7 Part 2 Hypothesis testing Power November 6, 2008 All of the normal curves in this handout are sampling distributions Goal: To understand the process of hypothesis testing and the relationship

### Statistics 112 Regression Cheatsheet Section 1B - Ryan Rosario

Statistics 112 Regression Cheatsheet Section 1B - Ryan Rosario I have found that the best way to practice regression is by brute force That is, given nothing but a dataset and your mind, compute everything

### AP Statistics 2001 Solutions and Scoring Guidelines

AP Statistics 2001 Solutions and Scoring Guidelines The materials included in these files are intended for non-commercial use by AP teachers for course and exam preparation; permission for any other use

### Statistics and research

Statistics and research Usaneya Perngparn Chitlada Areesantichai Drug Dependence Research Center (WHOCC for Research and Training in Drug Dependence) College of Public Health Sciences Chulolongkorn University,

### 9.1 (a) The standard deviation of the four sample differences is given as.68. The standard error is SE (ȳ1 - ȳ 2 ) = SE d - = s d n d

CHAPTER 9 Comparison of Paired Samples 9.1 (a) The standard deviation of the four sample differences is given as.68. The standard error is SE (ȳ1 - ȳ 2 ) = SE d - = s d n d =.68 4 =.34. (b) H 0 : The mean

### Chapter 10. Key Ideas Correlation, Correlation Coefficient (r),

Chapter 0 Key Ideas Correlation, Correlation Coefficient (r), Section 0-: Overview We have already explored the basics of describing single variable data sets. However, when two quantitative variables

Section 5.4 The Quadratic Formula 481 5.4 The Quadratic Formula Consider the general quadratic function f(x) = ax + bx + c. In the previous section, we learned that we can find the zeros of this function

### CURVE FITTING LEAST SQUARES APPROXIMATION

CURVE FITTING LEAST SQUARES APPROXIMATION Data analysis and curve fitting: Imagine that we are studying a physical system involving two quantities: x and y Also suppose that we expect a linear relationship

### 1. How different is the t distribution from the normal?

Statistics 101 106 Lecture 7 (20 October 98) c David Pollard Page 1 Read M&M 7.1 and 7.2, ignoring starred parts. Reread M&M 3.2. The effects of estimated variances on normal approximations. t-distributions.

### Normal distribution. ) 2 /2σ. 2π σ

Normal distribution The normal distribution is the most widely known and used of all distributions. Because the normal distribution approximates many natural phenomena so well, it has developed into a