Comparing Two Groups


Bret Larget
Department of Statistics
University of Wisconsin - Madison

Statistics 371, Fall 2003
October 17, 2003

Chapter 7 describes two ways to compare two populations on the basis of independent samples: a confidence interval for the difference in population means and a hypothesis test. The basic structure of the confidence interval is the same as in the previous chapter: an estimate plus or minus a multiple of a standard error. Hypothesis testing will introduce several new concepts.

Setting

Model two populations as buckets of numbered balls. The population means are µ1 and µ2, respectively, and the population standard deviations are σ1 and σ2, respectively. We are interested in estimating µ1 − µ2 and in testing the hypothesis that µ1 = µ2.

Two Independent Samples

Population 1: mean µ1, sd σ1; sample y1(1), ..., yn1(1) with sample mean ȳ1 and sample standard deviation s1.
Population 2: mean µ2, sd σ2; sample y1(2), ..., yn2(2) with sample mean ȳ2 and sample standard deviation s2.

Standard Error of ȳ1 − ȳ2

The standard error of the difference in two sample means is an empirical measure of how far the difference in sample means will typically be from the difference in the respective population means.

SE(ȳ1 − ȳ2) = sqrt(s1²/n1 + s2²/n2)

An alternative formula is

SE(ȳ1 − ȳ2) = sqrt((SE(ȳ1))² + (SE(ȳ2))²)

This formula reminds us of how to find the length of the hypotenuse of a right triangle. (Variances add, but standard deviations don't.)
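The two equivalent forms of the standard error can be checked numerically. This is an illustrative Python sketch (the course itself uses R); the summary values 4.8, 8, 4.7, 7 are the Fast Plants data that appear later in these notes.

```python
import math

def se_diff(s1, n1, s2, n2):
    """Standard error of ybar1 - ybar2: sqrt(s1^2/n1 + s2^2/n2)."""
    return math.sqrt(s1**2 / n1 + s2**2 / n2)

def se_mean(s, n):
    """Standard error of a single sample mean: s / sqrt(n)."""
    return s / math.sqrt(n)

s1, n1 = 4.8, 8   # first sample: sd and size
s2, n2 = 4.7, 7   # second sample: sd and size

direct = se_diff(s1, n1, s2, n2)
# "Variances add, but standard deviations don't": the hypotenuse form
hypotenuse = math.sqrt(se_mean(s1, n1)**2 + se_mean(s2, n2)**2)

print(round(direct, 2))                      # about 2.46
print(math.isclose(direct, hypotenuse))      # the two formulas agree
```
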

Sampling Distributions

The sampling distribution of the difference in sample means has these characteristics.

Mean: µ1 − µ2
SD: sqrt(σ1²/n1 + σ2²/n2)
Shape: Exactly normal if both populations are normal; approximately normal if the populations are not normal but both sample sizes are sufficiently large.

Theory for Confidence Interval

The recipe for constructing a confidence interval for a single population mean is based on facts about the sampling distribution of the statistic

T = (Ȳ − µ) / SE(Ȳ).

Similarly, the theory for confidence intervals for µ1 − µ2 is based on the sampling distribution of the statistic

T = ((Ȳ1 − Ȳ2) − (µ1 − µ2)) / SE(Ȳ1 − Ȳ2)

where we standardize by subtracting the mean and dividing by the standard deviation of the sampling distribution.

If both populations are normal and if we know the population standard deviations, then

Pr( −1.96 ≤ ((Ȳ1 − Ȳ2) − (µ1 − µ2)) / sqrt(σ1²/n1 + σ2²/n2) ≤ 1.96 ) = 0.95

where we can choose a value of z other than 1.96 for different confidence levels. This statement is true because the expression in the middle has a standard normal distribution. But in practice, we don't know the population standard deviations. If we substitute in sample estimates instead, we get this:

Pr( −t ≤ ((Ȳ1 − Ȳ2) − (µ1 − µ2)) / sqrt(s1²/n1 + s2²/n2) ≤ t ) = 0.95

We need to choose different end points t to account for the additional randomness in the denominator.

Pooled Standard Error

If we wish to assume that the two population standard deviations are equal, σ1 = σ2, then it makes sense to use data from both samples to estimate the common population standard deviation. We estimate the common population variance with a weighted average of the sample variances, weighted by their degrees of freedom.

s²pooled = ((n1 − 1)s1² + (n2 − 1)s2²) / (n1 + n2 − 2)

The pooled standard error is then as below.

SEpooled = s_pooled × sqrt(1/n1 + 1/n2)
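The pooled estimate is a degrees-of-freedom-weighted average of the two sample variances. Here is a Python sketch (not course code; the course uses R) that computes the pooled standard error:

```python
import math

def pooled_se(s1, n1, s2, n2):
    """Pooled SE, assuming sigma1 == sigma2: weight the sample variances
    by their degrees of freedom, then scale by sqrt(1/n1 + 1/n2)."""
    s2_pooled = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return math.sqrt(s2_pooled) * math.sqrt(1 / n1 + 1 / n2)

# With the Fast Plants summary data used later in the notes, the pooled
# and unpooled standard errors are nearly identical because the two
# sample standard deviations are so close (4.8 vs 4.7).
print(round(pooled_se(4.8, 8, 4.7, 7), 2))   # about 2.46
```

When s1 = s2 and n1 = n2, the pooled SE equals the unpooled SE exactly; they diverge as the sample variances or sizes become unequal.
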

Confidence Interval for µ1 − µ2

The confidence interval for a difference in population means has the same structure as that for a single population mean:

(Estimate) ± (t multiplier) × (SE)

The only difference is that for this more complicated setting, we have more complicated formulas for the standard error and the degrees of freedom. Here is the df formula.

df = (SE1² + SE2²)² / ( SE1⁴/(n1 − 1) + SE2⁴/(n2 − 1) )

where SEi = si/sqrt(ni) for i = 1, 2. As a check, the value is often close to n1 + n2 − 2. (This will be exact if s1 = s2 and if n1 = n2.) The value from the messy formula will always be between the smaller of n1 − 1 and n2 − 1 and n1 + n2 − 2.

Example

A calculator or R can compute the margin of error.

> se = sqrt(1.34^2 + [...]^2)
> tmult = qt(0.975, 190)
> me = round(tmult * se, 1)
> me
[1] 3.7

We are 95% confident that the mean reduction in systolic blood pressure due to the biofeedback treatment in a population of individuals similar to those in this study would be between 6.1 and 13.5 mm more than the mean reduction in the same population undergoing the control treatment.

Theory for Confidence Interval

It turns out that the sampling distribution of the statistic above is approximately a t distribution where the degrees of freedom should be estimated from the data as well. Algebraic manipulation leads to the following expression.

Pr( (Ȳ1 − Ȳ2) − t × sqrt(s1²/n1 + s2²/n2) ≤ µ1 − µ2 ≤ (Ȳ1 − Ȳ2) + t × sqrt(s1²/n1 + s2²/n2) ) = 0.95

We use a t multiplier so that the area between −t and t under a t distribution with the estimated degrees of freedom will be 0.95.

Example

Exercise 7.12. In this example, subjects with high blood pressure are randomly allocated to two treatments. The biofeedback group receives relaxation training aided by biofeedback and meditation over eight weeks. The control group does not. Reduction in systolic blood pressure is tabulated here.
         Biofeedback   Control
n            ...          ...
ȳ            ...          ...
SE           ...          ...

For 190 degrees of freedom (which come from both the simple and the messy formulas), the table says to use the multiplier for 140 degrees of freedom (190 is rounded down to the nearest table row), whereas with R you can compute the multiplier for 190 df exactly.
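The "messy" df formula is easy to package as a function; the notes later define an R version called getdf. Here is a Python sketch of the same computation, illustrated with the Fast Plants summary data (sd 4.8 with n = 8, sd 4.7 with n = 7) that appears later in these notes.

```python
def welch_df(s1, n1, s2, n2):
    """Welch-Satterthwaite degrees of freedom (the 'messy formula')."""
    se1_sq = s1**2 / n1
    se2_sq = s2**2 / n2
    return (se1_sq + se2_sq)**2 / (
        se1_sq**2 / (n1 - 1) + se2_sq**2 / (n2 - 1))

df = welch_df(4.8, 8, 4.7, 7)
print(round(df, 1))   # 12.8

# The value always lies between min(n1 - 1, n2 - 1) and n1 + n2 - 2.
print(min(8, 7) - 1 <= df <= 8 + 7 - 2)   # True
```

With equal sample standard deviations and equal sample sizes, the formula returns exactly n1 + n2 − 2, matching the check described above.
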

Example Assuming Equal Variances

For the same data, were we to assume that the population variances were equal, the degrees of freedom, the standard error, and the confidence interval would all be slightly different.

> t.test(height ~ color, var.equal = T)

        Two Sample t-test

data:  height by color
t = ..., df = 40, p-value = ...
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 ...
sample estimates:
mean in group green   mean in group red
        ...                  ...

Logic of Hypothesis Tests

All of the hypothesis tests we will see this semester fall into this general framework.

1. State a null hypothesis and an alternative hypothesis.
2. Gather data and compute a test statistic.
3. Consider the sampling distribution of the test statistic assuming that the null hypothesis is true.
4. Compute a p-value, a measure of how consistent the data are with the null hypothesis in consideration of a specific alternative hypothesis.

Example Using R

Exercise 7.21. This exercise examines the growth of bean plants under red and green light. A 95% confidence interval is part of the output below.

> ex7.21 = read.table("lights.txt", header = T)
> str(ex7.21)
'data.frame':   42 obs. of  2 variables:
 $ height: num  ...
 $ color : Factor w/ 2 levels "green","red": ...
> attach(ex7.21)
> t.test(height ~ color)

        Welch Two Sample t-test

data:  height by color
t = ..., df = ..., p-value = ...
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
 ...
sample estimates:
mean in group green   mean in group red
        ...                  ...

Hypothesis Tests

Hypothesis tests are an alternative approach to statistical inference. Unlike confidence intervals, where the goal is estimation along with an assessment of the likely precision of the estimate, the goal of hypothesis testing is to ascertain whether or not data are consistent with what we might expect to see assuming that a hypothesis is true.
The logic of hypothesis testing is a probabilistic form of proof by contradiction. In logic, if we can show that a proposition H leads to a contradiction, then we have proved H false and have proved (not H) to be true. In hypothesis testing, if observed data are highly unlikely under an assumed hypothesis H, then there is strong (but not definitive) evidence that the hypothesis is false.

Wisconsin Fast Plants Example

In an experiment, seven Wisconsin Fast Plants (Brassica campestris) were grown with a treatment of Ancymidol (ancy) and eight control plants were given ordinary water. The null hypothesis is that the treatment has no effect on plant growth (as measured by the height of the plant after 14 days of growth). The alternative hypothesis is that the treatment has an effect which would result in different mean growth amounts.

A summary of the sample data is as follows. The eight control plants had a mean growth of 15.9 cm and standard deviation 4.8 cm. The seven ancy plants had a mean growth of 11.0 cm and standard deviation 4.7 cm.

The question is: is it reasonable to think that the observed difference in sample means of 4.9 cm is due to chance variation alone, or is there evidence that some of the difference is due to the ancy treatment?

Example: Calculate a Test Statistic

In the setting of a difference between two independent sample means, our test statistic is

t = ((ȳ1 − ȳ2) − (µ1 − µ2)) / sqrt(s1²/n1 + s2²/n2)

(Your book adds a subscript, t_s, to remind you that this is computed from the sample.) For the data, we find this.

> se = sqrt(4.8^2/8 + 4.7^2/7)
> se
[1] 2.456769
> ts = (15.9 - 11.0)/se
> ts
[1] 1.994489

The standard error tells us that we would expect the observed difference in sample means to typically differ from the difference in population means by about 2.5 cm.
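The same computation can be cross-checked outside R. This Python sketch (an illustration, not the course's code) reproduces the standard error and test statistic from the summary data:

```python
import math

# Fast Plants summary data from the notes
ybar1, s1, n1 = 15.9, 4.8, 8   # control group
ybar2, s2, n2 = 11.0, 4.7, 7   # Ancymidol (ancy) group

# Under H0: mu1 - mu2 = 0, the statistic is the observed difference
# in sample means divided by its standard error.
se = math.sqrt(s1**2 / n1 + s2**2 / n2)
ts = (ybar1 - ybar2) / se

print(round(se, 2))   # about 2.46
print(round(ts, 2))   # about 1.99
```
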
The null and alternative hypotheses are expressed as

H0: µ1 = µ2
HA: µ1 ≠ µ2

We state statistical hypotheses as statements about population parameters.

Example: Find the Sampling Distribution

The sampling distribution of the test statistic is a t distribution with degrees of freedom calculated by the messy formula. This useful R code computes it. If you type this in and save your workspace at the end of a session, you can use it again in the future.

> getdf = function(s1, n1, s2, n2) {
+     se1 = s1/sqrt(n1)
+     se2 = s2/sqrt(n2)
+     return((se1^2 + se2^2)^2/(se1^4/(n1 - 1) + se2^4/(n2 - 1)))
+ }
> getdf(4.8, 8, 4.7, 7)
[1] 12.80635

Example: Interpreting a P-Value

The smaller the p-value, the more inconsistent the data are with the null hypothesis, and the stronger the evidence against the null hypothesis in favor of the alternative. Traditionally, people have measured statistical significance by comparing a p-value with arbitrary significance levels such as α = 0.05. The phrase "statistically significant at the 5% level" means that the p-value is smaller than 0.05. In reporting results, it is best to report an actual p-value and not simply a statement about whether or not it is statistically significant.

Example: Calculate a Test Statistic

If the population means are equal, their difference is zero. This test statistic tells us that the actual observed difference in sample means is 1.99 standard errors away from zero.

Example: Compute a P-Value

To describe how likely it is to see such a test statistic, we can ask: what is the probability that chance alone would result in a test statistic at least this far from zero? The answer is the area below −1.99 plus the area above 1.99 under a t density curve with 12.8 degrees of freedom.

With the t table, we can only calculate this p-value within a range. If we round down to 12 df, the t statistic 1.99 is bracketed by the table values with upper-tail areas 0.04 and 0.03. Thus, the area to the right of 1.99 is between 0.03 and 0.04. The p-value in this problem is twice as large because we need to include as well the area to the left of −1.99. So, 0.06 < p < 0.08. With R, we can be more precise.
> p = 2 * pt(-ts, getdf(4.8, 8, 4.7, 7))
> p
[1] 0.068

Rejection Regions

Suppose that we were asked to make a decision about a hypothesis based on data. We may decide, for example, to reject the null hypothesis if the p-value is smaller than 0.05 and to not reject the null hypothesis if the p-value is larger than 0.05. This procedure has a significance level of 0.05, which means that if we follow the rule, there is a probability of 0.05 of rejecting a true null hypothesis. (We would need further assumptions to calculate the probability of not rejecting a false null hypothesis.) Rejecting the null hypothesis occurs precisely when the test statistic falls into a rejection region, in this case either the upper or lower 2.5% tail of the sampling distribution.

Comparing α and P-values

In this setting, the significance level α and the p-value are both areas under t curves, but they are not the same thing. The significance level is a prespecified, arbitrary value that does not depend on the data; the p-value depends on the data. If a decision rule is to reject the null hypothesis when the test statistic is in a rejection region, this is equivalent to rejecting the null hypothesis when the p-value is less than the significance level α.

Example: Summarizing the Results

For this example, I might summarize the results as follows. There is slight evidence (p = 0.068, two-sided independent sample t-test) that there is a difference in mean height at 14 days between Wisconsin Fast Plants grown with ordinary water and those grown with Ancymidol.

Generally speaking, a confidence interval is more informative than a p-value because it estimates a difference in the units of the problem, which allows a reader with background knowledge in the subject area to assess both the statistical significance and the practical importance of the observed difference. In contrast, a hypothesis test examines statistical significance alone.
Relationship between t tests and confidence intervals

We would reject the null hypothesis H0: µ1 − µ2 = 0 versus the two-sided alternative at the α = 0.05 level of significance if and only if a 95% confidence interval for µ1 − µ2 does not contain 0. The rejection region corresponds exactly to the test statistics for which the 95% confidence interval does not contain 0. We could make similar statements for a general α and a (1 − α) × 100% confidence interval.

More on P-Values

Another way to think about p-values is to recognize that they depend on the values of the data, and so are random variables. Let P be the p-value from a test. If the null hypothesis is true, then P is a random variable distributed uniformly between 0 and 1. In other words, the probability density of P is a flat rectangle. Notice that this implies that Pr{P < c} = c for any number c between 0 and 1. If the null is true, there is a 5% probability that P is less than 0.05, a 1% probability that P is less than 0.01, and so on. On the other hand, if the alternative hypothesis is true, then the distribution of P will not be uniform and instead will be shifted toward zero.

More P-Value Interpretations

A verbal definition of a p-value is as follows. The p-value of the data is the probability, calculated assuming that the null hypothesis is true, of obtaining a test statistic that deviates from what is expected under the null (in the direction of the alternative hypothesis) at least as much as the actual data does. The p-value is not the probability that the null hypothesis is true. Interpreting the p-value in this way will mislead you!

Type I and Type II Errors

There are two possible decision errors. Rejecting a true null hypothesis is a Type I error. You can interpret α = Pr{rejecting H0 | H0 is true}, so α is the probability of a Type I error. (You cannot make a Type I error when the null hypothesis is false.) Not rejecting a false null hypothesis is a Type II error. It is conventional to use β as the probability of a Type II error, or β = Pr{not rejecting H0 | H0 is false}. If the null hypothesis is false, one of the many possible alternative hypotheses is true, so it is typical to calculate β separately for each possible alternative (in this setting, for each value of µ1 − µ2). Power is the probability of rejecting a false null hypothesis: Power = 1 − β.
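The claim that P is uniform under a true null can be checked with a short simulation. This Python sketch is illustrative only (the course uses R), and it simulates a simple z statistic rather than the two-sample t statistic of the notes:

```python
import math
import random

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

random.seed(1)
reps = 100_000

# Under H0 the z statistic is standard normal, so its two-sided
# p-value P = 2 * (1 - Phi(|Z|)) should be uniform on (0, 1).
pvals = [2 * (1 - phi(abs(random.gauss(0, 1)))) for _ in range(reps)]

# Check that Pr{P < c} is close to c for a few cutoffs.
for c in (0.01, 0.05, 0.5):
    frac = sum(p < c for p in pvals) / reps
    print(c, round(frac, 3))
```

Each printed fraction should be close to its cutoff c, which is exactly the statement Pr{P < c} = c.
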
Simulation

We can explore these statements with a simulation based on the Wisconsin Fast Plants example. The first histogram shows p-values from 100,000 samples where µ1 − µ2 = 0, while the second assumes that µ1 − µ2 = 5. Both simulations use σ1 = σ2 = 4.8, but the calculation of the p-value does not assume equal standard deviations.

[Histograms: the sampling distribution of P under the null is flat (uniform); under the alternative it is shifted toward zero.]

Example for P-value Interpretation

In a medical testing setting, we may want a procedure that indicates when a subject has a disease. We can think of the decision "healthy" as corresponding to a null hypothesis and the decision "ill" as corresponding to the alternative hypothesis. Consider now a situation where 1% of a population has a disease. Suppose that a test has an 80% chance of detecting the disease when a person has it (so the power of the test is 80%) and a 95% chance of correctly saying the person does not have the disease when the person does not (so there is a 5% chance of a false positive, that is, of falsely rejecting the null).

Example (cont.)

Here is a table of the results in a hypothetical population of 100,000 people.

                                    True Situation
                            Healthy        Ill
                            (H0 is true)   (H0 is false)    Total
Test    Negative
Result  (do not reject H0)    94,050            200         94,250
        Positive
        (reject H0)            4,950            800          5,750
        Total                 99,000          1,000        100,000

Notice that of the 5,750 times H0 was rejected (so that the test indicated illness), the person was actually healthy 4950/5750 = 86% of the time! A rule that rejects H0 when the p-value is less than 5% only rejects 5% of the true null hypotheses, but this can be a large proportion of the total number of rejected hypotheses when false null hypotheses occur rarely.

Exercise 7.54

The following data come from an experiment to test the efficacy of a drug to reduce pain in women after childbirth. Possible pain relief scores vary from 0 (no relief) to 56 (complete relief).

Pain Relief Score
Treatment   n    mean   sd
Drug        25    ...   12.05
Placebo     25    ...   13.78

State hypotheses. Let µ1 be the population mean score for the drug and µ2 be the population mean score for the placebo.

H0: µ1 = µ2
HA: µ1 > µ2

One-tailed Tests

Often, we are interested not only in demonstrating that two population means are different, but in demonstrating that the difference is in a particular direction.
Instead of the two-sided alternative HA: µ1 ≠ µ2, we would choose one of two possible one-sided alternatives, HA: µ1 < µ2 or HA: µ1 > µ2. For the alternative hypothesis HA: µ1 < µ2, the p-value is the area to the left of the test statistic. For the alternative hypothesis HA: µ1 > µ2, the p-value is the area to the right of the test statistic. If the test statistic is in the direction of the alternative hypothesis, the p-value from a one-sided test will be half the p-value of a two-sided test.
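Returning to the medical-testing illustration above, the 2×2 table follows from simple arithmetic. This Python sketch recomputes the counts from the stated assumptions (100,000 people, 1% prevalence, 80% power, 5% false-positive rate):

```python
# Hypothetical screening population from the notes
population = 100_000
ill = round(0.01 * population)            # 1,000 have the disease (H0 false)
healthy = population - ill                # 99,000 are healthy (H0 true)

true_positives = round(0.80 * ill)        # power: 80% of the ill test positive
false_negatives = ill - true_positives
false_positives = round(0.05 * healthy)   # alpha: 5% of the healthy test positive
true_negatives = healthy - false_positives

positives = true_positives + false_positives
print(false_positives, true_positives, positives)   # 4950 800 5750
# Fraction of positive results that are actually healthy people:
print(round(false_positives / positives, 2))        # 0.86
```

This reproduces the point of the example: even a test with good operating characteristics yields mostly false alarms when the condition is rare.
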

Exercise 7.54

Calculate a test statistic.

> ts = ([drug mean] - [placebo mean])/sqrt(12.05^2/25 + 13.78^2/25)
> ts

Find the null sampling distribution. The book reports a t distribution with 47.2 degrees of freedom. We can check this.

> degf = getdf(12.05, 25, 13.78, 25)
> degf
[1] 47.16131

Compute a (one-sided) p-value.

> p = 1 - pt(ts, degf)
> p
[1] 0.038

Summarize the results. There is fairly strong evidence that the drug would provide more pain relief than the placebo on average for a population of women similar to those in this study (p = 0.038, one-sided independent sample t-test). Notice that this result is statistically significant at the 5% level because the p-value is less than 0.05. For a two-sided test, the p-value would be twice as large, and not significant at the 5% level.

Permutation Tests

The idea of a permutation test in this setting is quite straightforward. We begin by computing the difference in sample means for the two samples of sizes n1 and n2. Now, imagine taking the group labels, mixing them up (permuting them), and assigning them at random to the observations. We could then again calculate a difference in sample means. Next, imagine doing this process over and over and collecting the permutation sampling distribution of the difference in sample means. If the difference in sample means for the actual grouping of the data is atypical compared to the differences from random groupings, this indicates evidence that the actual grouping is associated with the measured variable. The p-value is the proportion of random relabellings with sample mean differences at least as extreme as that from the original groups.

Validity of t Methods

All of the methods seen so far are formally based on the assumption that the populations are normal.
In practice, they are valid as long as the sampling distribution of the difference in sample means is approximately normal, which occurs when the sample sizes are large enough (justified by the Central Limit Theorem). Specifically, we need the sampling distribution of the test statistic to have an approximate t distribution.

But what if the sample sizes are small and the samples indicate non-normality in the populations? One approach is to transform the data, often by taking logarithms, so that the transformed distribution is approximately normal. The textbook suggests a nonparametric method called the Wilcoxon-Mann-Whitney test that is based on converting the data to ranks. I will show an alternative called a permutation test.

Example

Soil cores were taken from two areas: an area under an opening in the forest canopy (the "gap") and a nearby area under heavy tree growth (the "growth" area). The amount of carbon dioxide given off by each soil core (in µmol CO2 per g soil per hour) was measured.

> growth = c(17, 20, 170, 315, 22, 190, 64)
> gap = c(22, 29, 13, 16, 15, 18, 14, 6)
> boxplot(list(growth = growth, gap = gap))

[Side-by-side boxplots of the growth and gap samples.]

Permutation Tests

With very small samples, it is possible to enumerate all possible ways to divide the n1 + n2 total observations into groups of sizes n1 and n2. An R function can carry out a permutation test.

Example Permutation Test in R

> library(exactRankTests)
> perm.test(growth, gap)

        2-sample Permutation Test

data:  growth and gap
T = 798, p-value = ...
alternative hypothesis: true mu is not equal to 0

There is very strong evidence (p = ..., two sample permutation test) that the soil respiration rates are different in the gap and growth areas.
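With n1 = 7 and n2 = 8 there are only C(15, 7) = 6435 relabellings, so the permutation distribution can be enumerated exactly. The following Python sketch illustrates the same idea as perm.test (it is not the course's R code, and it compares group-mean differences rather than the sum statistic T that perm.test reports):

```python
from itertools import combinations

# Soil respiration data from the notes
growth = [17, 20, 170, 315, 22, 190, 64]
gap = [22, 29, 13, 16, 15, 18, 14, 6]

pooled = growth + gap
n1, N = len(growth), len(growth) + len(gap)
total = sum(pooled)
observed = sum(growth) / n1 - sum(gap) / (N - n1)   # 97.375

# Enumerate every way to choose which 7 observations get the "growth"
# label, and count relabellings at least as extreme (two-sided).
count = 0
n_splits = 0
for idx in combinations(range(N), n1):
    group1 = sum(pooled[i] for i in idx)
    diff = group1 / n1 - (total - group1) / (N - n1)
    if abs(diff) >= abs(observed) - 1e-9:   # tolerance for float ties
        count += 1
    n_splits += 1

p = count / n_splits
print(n_splits)            # 6435 relabellings in all
print(round(observed, 3))  # 97.375
print(round(p, 4))         # small: the actual grouping is atypical
```

The exact p-value is the proportion of relabellings whose mean difference is at least as extreme as the observed 97.375, matching the verbal recipe above.
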


CALCULATIONS & STATISTICS CALCULATIONS & STATISTICS CALCULATION OF SCORES Conversion of 1-5 scale to 0-100 scores When you look at your report, you will notice that the scores are reported on a 0-100 scale, even though respondents

II. DISTRIBUTIONS distribution normal distribution. standard scores Appendix D Basic Measurement And Statistics The following information was developed by Steven Rothke, PhD, Department of Psychology, Rehabilitation Institute of Chicago (RIC) and expanded by Mary F. Schmidt,

1. What is the critical value for this 95% confidence interval? CV = z.025 = invnorm(0.025) = 1.96 1 Final Review 2 Review 2.1 CI 1-propZint Scenario 1 A TV manufacturer claims in its warranty brochure that in the past not more than 10 percent of its TV sets needed any repair during the first two years

STAT 350 Practice Final Exam Solution (Spring 2015) PART 1: Multiple Choice Questions: 1) A study was conducted to compare five different training programs for improving endurance. Forty subjects were randomly divided into five groups of eight subjects

Two Related Samples t Test Two Related Samples t Test In this example 1 students saw five pictures of attractive people and five pictures of unattractive people. For each picture, the students rated the friendliness of the person

2 Sample t-test (unequal sample sizes and unequal variances) Variations of the t-test: Sample tail Sample t-test (unequal sample sizes and unequal variances) Like the last example, below we have ceramic sherd thickness measurements (in cm) of two samples representing

Part 3. Comparing Groups. Chapter 7 Comparing Paired Groups 189. Chapter 8 Comparing Two Independent Groups 217 Part 3 Comparing Groups Chapter 7 Comparing Paired Groups 189 Chapter 8 Comparing Two Independent Groups 217 Chapter 9 Comparing More Than Two Groups 257 188 Elementary Statistics Using SAS Chapter 7 Comparing

Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression Unit 31 A Hypothesis Test about Correlation and Slope in a Simple Linear Regression Objectives: To perform a hypothesis test concerning the slope of a least squares line To recognize that testing for a

General Method: Difference of Means. 3. Calculate df: either Welch-Satterthwaite formula or simpler df = min(n 1, n 2 ) 1. General Method: Difference of Means 1. Calculate x 1, x 2, SE 1, SE 2. 2. Combined SE = SE1 2 + SE2 2. ASSUMES INDEPENDENT SAMPLES. 3. Calculate df: either Welch-Satterthwaite formula or simpler df = min(n

BA 275 Review Problems - Week 6 (10/30/06-11/3/06) CD Lessons: 53, 54, 55, 56 Textbook: pp. 394-398, 404-408, 410-420 BA 275 Review Problems - Week 6 (10/30/06-11/3/06) CD Lessons: 53, 54, 55, 56 Textbook: pp. 394-398, 404-408, 410-420 1. Which of the following will increase the value of the power in a statistical test

NCSS Statistical Software. One-Sample T-Test Chapter 205 Introduction This procedure provides several reports for making inference about a population mean based on a single sample. These reports include confidence intervals of the mean or median,

Introduction to Hypothesis Testing I. Terms, Concepts. Introduction to Hypothesis Testing A. In general, we do not know the true value of population parameters - they must be estimated. However, we do have hypotheses about what the true

CONTENTS OF DAY 2. II. Why Random Sampling is Important 9 A myth, an urban legend, and the real reason NOTES FOR SUMMER STATISTICS INSTITUTE COURSE 1 2 CONTENTS OF DAY 2 I. More Precise Definition of Simple Random Sample 3 Connection with independent random variables 3 Problems with small populations 8 II. Why Random Sampling is Important 9 A myth,

Simple Regression Theory II 2010 Samuel L. Baker SIMPLE REGRESSION THEORY II 1 Simple Regression Theory II 2010 Samuel L. Baker Assessing how good the regression equation is likely to be Assignment 1A gets into drawing inferences about how close the

Independent samples t-test. Dr. Tom Pierce Radford University Independent samples t-test Dr. Tom Pierce Radford University The logic behind drawing causal conclusions from experiments The sampling distribution of the difference between means The standard error of

Section 7.1. Introduction to Hypothesis Testing. Schrodinger s cat quantum mechanics thought experiment (1935) Section 7.1 Introduction to Hypothesis Testing Schrodinger s cat quantum mechanics thought experiment (1935) Statistical Hypotheses A statistical hypothesis is a claim about a population. Null hypothesis

Chapter 2. Hypothesis testing in one population Chapter 2. Hypothesis testing in one population Contents Introduction, the null and alternative hypotheses Hypothesis testing process Type I and Type II errors, power Test statistic, level of significance

Means, standard deviations and. and standard errors CHAPTER 4 Means, standard deviations and standard errors 4.1 Introduction Change of units 4.2 Mean, median and mode Coefficient of variation 4.3 Measures of variation 4.4 Calculating the mean and standard

Tutorial 5: Hypothesis Testing Tutorial 5: Hypothesis Testing Rob Nicholls nicholls@mrc-lmb.cam.ac.uk MRC LMB Statistics Course 2014 Contents 1 Introduction................................ 1 2 Testing distributional assumptions....................

Practice problems for Homework 12 - confidence intervals and hypothesis testing. Open the Homework Assignment 12 and solve the problems. Practice problems for Homework 1 - confidence intervals and hypothesis testing. Read sections 10..3 and 10.3 of the text. Solve the practice problems below. Open the Homework Assignment 1 and solve the

How To Compare Birds To Other Birds STT 430/630/ES 760 Lecture Notes: Chapter 7: Two-Sample Inference 1 February 27, 2009 Chapter 7: Two Sample Inference Chapter 6 introduced hypothesis testing in the one-sample setting: one sample is obtained

Calculating P-Values. Parkland College. Isela Guerra Parkland College. Recommended Citation Parkland College A with Honors Projects Honors Program 2014 Calculating P-Values Isela Guerra Parkland College Recommended Citation Guerra, Isela, "Calculating P-Values" (2014). A with Honors Projects.

p ˆ (sample mean and sample Chapter 6: Confidence Intervals and Hypothesis Testing When analyzing data, we can t just accept the sample mean or sample proportion as the official mean or proportion. When we estimate the statistics

Psychology 60 Fall 2013 Practice Exam Actual Exam: Next Monday. Good luck! Psychology 60 Fall 2013 Practice Exam Actual Exam: Next Monday. Good luck! Name: 1. The basic idea behind hypothesis testing: A. is important only if you want to compare two populations. B. depends on

Chapter 3 RANDOM VARIATE GENERATION Chapter 3 RANDOM VARIATE GENERATION In order to do a Monte Carlo simulation either by hand or by computer, techniques must be developed for generating values of random variables having known distributions.

Normal distribution. ) 2 /2σ. 2π σ Normal distribution The normal distribution is the most widely known and used of all distributions. Because the normal distribution approximates many natural phenomena so well, it has developed into a

individualdifferences 1 Simple ANalysis Of Variance (ANOVA) Oftentimes we have more than two groups that we want to compare. The purpose of ANOVA is to allow us to compare group means from several independent samples. In general,

research/scientific includes the following: statistical hypotheses: you have a null and alternative you accept one and reject the other 1 Hypothesis Testing Richard S. Balkin, Ph.D., LPC-S, NCC 2 Overview When we have questions about the effect of a treatment or intervention or wish to compare groups, we use hypothesis testing Parametric

Introduction to. Hypothesis Testing CHAPTER LEARNING OBJECTIVES. 1 Identify the four steps of hypothesis testing. Introduction to Hypothesis Testing CHAPTER 8 LEARNING OBJECTIVES After reading this chapter, you should be able to: 1 Identify the four steps of hypothesis testing. 2 Define null hypothesis, alternative

Variables Control Charts MINITAB ASSISTANT WHITE PAPER This paper explains the research conducted by Minitab statisticians to develop the methods and data checks used in the Assistant in Minitab 17 Statistical Software. Variables

4. Continuous Random Variables, the Pareto and Normal Distributions 4. Continuous Random Variables, the Pareto and Normal Distributions A continuous random variable X can take any value in a given range (e.g. height, weight, age). The distribution of a continuous random

Statistics Review PSY379 Statistics Review PSY379 Basic concepts Measurement scales Populations vs. samples Continuous vs. discrete variable Independent vs. dependent variable Descriptive vs. inferential stats Common analyses

Name: Date: Use the following to answer questions 3-4: Name: Date: 1. Determine whether each of the following statements is true or false. A) The margin of error for a 95% confidence interval for the mean increases as the sample size increases. B) The margin

How To Check For Differences In The One Way Anova MINITAB ASSISTANT WHITE PAPER This paper explains the research conducted by Minitab statisticians to develop the methods and data checks used in the Assistant in Minitab 17 Statistical Software. One-Way

CHAPTER 14 NONPARAMETRIC TESTS CHAPTER 14 NONPARAMETRIC TESTS Everything that we have done up until now in statistics has relied heavily on one major fact: that our data is normally distributed. We have been able to make inferences

Principles of Hypothesis Testing for Public Health Principles of Hypothesis Testing for Public Health Laura Lee Johnson, Ph.D. Statistician National Center for Complementary and Alternative Medicine johnslau@mail.nih.gov Fall 2011 Answers to Questions

Statistics 2014 Scoring Guidelines AP Statistics 2014 Scoring Guidelines College Board, Advanced Placement Program, AP, AP Central, and the acorn logo are registered trademarks of the College Board. AP Central is the official online home

Mind on Statistics. Chapter 12 Mind on Statistics Chapter 12 Sections 12.1 Questions 1 to 6: For each statement, determine if the statement is a typical null hypothesis (H 0 ) or alternative hypothesis (H a ). 1. There is no difference

Summary of Formulas and Concepts. Descriptive Statistics (Ch. 1-4) Summary of Formulas and Concepts Descriptive Statistics (Ch. 1-4) Definitions Population: The complete set of numerical information on a particular quantity in which an investigator is interested. We assume Final Exam Practice Problem Answers The following data set consists of data gathered from 77 popular breakfast cereals. The variables in the data set are as follows: Brand: The brand name of the cereal

SCHOOL OF HEALTH AND HUMAN SCIENCES DON T FORGET TO RECODE YOUR MISSING VALUES SCHOOL OF HEALTH AND HUMAN SCIENCES Using SPSS Topics addressed today: 1. Differences between groups 2. Graphing Use the s4data.sav file for the first part of this session. DON T FORGET TO RECODE YOUR

BA 275 Review Problems - Week 5 (10/23/06-10/27/06) CD Lessons: 48, 49, 50, 51, 52 Textbook: pp. 380-394 BA 275 Review Problems - Week 5 (10/23/06-10/27/06) CD Lessons: 48, 49, 50, 51, 52 Textbook: pp. 380-394 1. Does vigorous exercise affect concentration? In general, the time needed for people to complete

DATA INTERPRETATION AND STATISTICS PholC60 September 001 DATA INTERPRETATION AND STATISTICS Books A easy and systematic introductory text is Essentials of Medical Statistics by Betty Kirkwood, published by Blackwell at about 14. DESCRIPTIVE

Section 13, Part 1 ANOVA. Analysis Of Variance Section 13, Part 1 ANOVA Analysis Of Variance Course Overview So far in this course we ve covered: Descriptive statistics Summary statistics Tables and Graphs Probability Probability Rules Probability

UNDERSTANDING THE DEPENDENT-SAMPLES t TEST UNDERSTANDING THE DEPENDENT-SAMPLES t TEST A dependent-samples t test (a.k.a. matched or paired-samples, matched-pairs, samples, or subjects, simple repeated-measures or within-groups, or correlated groups)

NONPARAMETRIC STATISTICS 1. depend on assumptions about the underlying distribution of the data (or on the Central Limit Theorem) NONPARAMETRIC STATISTICS 1 PREVIOUSLY parametric statistics in estimation and hypothesis testing... construction of confidence intervals computing of p-values classical significance testing depend on assumptions

Dongfeng Li. Autumn 2010 Autumn 2010 Chapter Contents Some statistics background; ; Comparing means and proportions; variance. Students should master the basic concepts, descriptive statistics measures and graphs, basic hypothesis

Tests for Two Proportions Chapter 200 Tests for Two Proportions Introduction This module computes power and sample size for hypothesis tests of the difference, ratio, or odds ratio of two independent proportions. The test statistics

Odds ratio, Odds ratio test for independence, chi-squared statistic. Odds ratio, Odds ratio test for independence, chi-squared statistic. Announcements: Assignment 5 is live on webpage. Due Wed Aug 1 at 4:30pm. (9 days, 1 hour, 58.5 minutes ) Final exam is Aug 9. Review

5/31/2013. Chapter 8 Hypothesis Testing. Hypothesis Testing. Hypothesis Testing. Outline. Objectives. Objectives C H 8A P T E R Outline 8 1 Steps in Traditional Method 8 2 z Test for a Mean 8 3 t Test for a Mean 8 4 z Test for a Proportion 8 6 Confidence Intervals and Copyright 2013 The McGraw Hill Companies, Inc.

Chapter 7 Section 1 Homework Set A Chapter 7 Section 1 Homework Set A 7.15 Finding the critical value t *. What critical value t * from Table D (use software, go to the web and type t distribution applet) should be used to calculate the

Association Between Variables Contents 11 Association Between Variables 767 11.1 Introduction............................ 767 11.1.1 Measure of Association................. 768 11.1.2 Chapter Summary.................... 769 11.2 Chi

Statistics I for QBIC. Contents and Objectives. Chapters 1 7. Revised: August 2013 Statistics I for QBIC Text Book: Biostatistics, 10 th edition, by Daniel & Cross Contents and Objectives Chapters 1 7 Revised: August 2013 Chapter 1: Nature of Statistics (sections 1.1-1.6) Objectives Lecture 5: Non-Parametric Tests (I) KimHuat LIM lim@stats.ox.ac.uk http://www.stats.ox.ac.uk/~lim/teaching.html Slide 1 5.1 Outline (i) Overview of Distribution-Free Tests (ii) Median Test for Two Independent Chapter 7 - Practice Problems 1 SHORT ANSWER. Write the word or phrase that best completes each statement or answers the question. Provide an appropriate response. 1) Define a point estimate. What is the