Fixed-Effect Versus Random-Effects Models

CHAPTER 13

Introduction to Meta-Analysis. Michael Borenstein, L. V. Hedges, J. P. T. Higgins and H. R. Rothstein. 2009 John Wiley & Sons, Ltd. ISBN: 978-0-470-05724-7

Introduction
Definition of a summary effect
Estimating the summary effect
Extreme effect size in a large study or a small study
Confidence interval
The null hypothesis
Which model should we use?
Model should not be based on the test for heterogeneity
Concluding remarks

INTRODUCTION

In Chapter 11 and Chapter 12 we introduced the fixed-effect and random-effects models. Here, we highlight the conceptual and practical differences between them.

Consider the forest plots in Figures 13.1 and 13.2. They include the same six studies, but the first uses a fixed-effect analysis and the second a random-effects analysis. These plots provide a context for the discussion that follows.

DEFINITION OF A SUMMARY EFFECT

Both plots show a summary effect on the bottom line, but the meaning of this summary effect is different in the two models. In the fixed-effect analysis we assume that the true effect size is the same in all studies, and the summary effect is our estimate of this common effect size. In the random-effects analysis we assume that the true effect size varies from one study to the next, and that the studies in our analysis represent a random sample of effect sizes that could

have been observed. The summary effect is our estimate of the mean of these effects.

[Figure 13.1 Fixed-effect model forest plot showing relative weights. Standardized mean difference (g) and 95% confidence interval for Carroll, Grant, Peck, Donat, Stewart, and Young, with relative weights 12%, 13%, 8%, 39%, 10%, and 18% (summary 100%).]

[Figure 13.2 Random-effects model forest plot showing relative weights. Standardized mean difference (g) with 95% confidence and prediction intervals for the same six studies, with relative weights 16%, 16%, 13%, 23%, 14%, and 18% (summary 100%).]

ESTIMATING THE SUMMARY EFFECT

Under the fixed-effect model we assume that the true effect size for all studies is identical, and the only reason the effect size varies between studies is sampling error (error in estimating the effect size). Therefore, when assigning

weights to the different studies we can largely ignore the information in the smaller studies, since we have better information about the same effect size in the larger studies. By contrast, under the random-effects model the goal is not to estimate one true effect, but to estimate the mean of a distribution of effects. Since each study provides information about a different effect size, we want to be sure that all these effect sizes are represented in the summary estimate. This means that we cannot discount a small study by giving it a very small weight (the way we would in a fixed-effect analysis). The estimate provided by that study may be imprecise, but it is information about an effect that no other study has estimated. By the same logic we cannot give too much weight to a very large study (the way we might in a fixed-effect analysis). Our goal is to estimate the mean effect in a range of studies, and we do not want that overall estimate to be overly influenced by any one of them.

In these graphs, the weight assigned to each study is reflected in the size of the box (specifically, the area) for that study. Under the fixed-effect model there is a wide range of weights (as reflected in the size of the boxes), whereas under the random-effects model the weights fall in a relatively narrow range. For example, compare the weight assigned to the largest study (Donat) with that assigned to the smallest study (Peck) under the two models. Under the fixed-effect model Donat is given about five times as much weight as Peck. Under the random-effects model Donat is given only 1.8 times as much weight as Peck.

EXTREME EFFECT SIZE IN A LARGE STUDY OR A SMALL STUDY

How will the selection of a model influence the overall effect size? In this example Donat is the largest study, and also happens to have the highest effect size.
Under the fixed-effect model Donat was assigned a large share (39%) of the total weight and pulled the mean effect upward. By contrast, under the random-effects model Donat was assigned a relatively modest share of the weight (23%), and therefore had less pull on the mean. Similarly, Carroll is one of the smaller studies and happens to have the smallest effect size. Under the fixed-effect model Carroll was assigned a relatively small proportion of the total weight (12%), and had little influence on the summary effect. By contrast, under the random-effects model Carroll carried a somewhat higher proportion of the total weight (16%) and was able to pull the weighted mean toward the left.

The operating premise, as illustrated in these examples, is that whenever τ² is nonzero, the relative weights assigned under random effects will be more balanced than those assigned under fixed effects. As we move from fixed effect to random effects, extreme studies will lose influence if they are large, and will gain influence if they are small.
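The weighting scheme described above can be sketched numerically. The effect sizes and variances below are illustrative values (not the data behind Figures 13.1 and 13.2); the point is only that adding a between-studies variance τ² to every study's variance compresses the relative weights and reduces the pull of the extreme large study.

```python
# Illustrative effect sizes and within-study variances for five studies
# (hypothetical numbers, NOT the studies shown in Figures 13.1 and 13.2).
# Study 3 is the largest (smallest variance) and has the most extreme effect.
effects   = [0.10, 0.30, 0.60, 0.25, 0.40]
variances = [0.050, 0.035, 0.010, 0.050, 0.030]
tau2 = 0.02  # assumed between-studies variance

def summary_effect(effects, variances, tau2=0.0):
    """Inverse-variance weighted mean; tau2 = 0 gives the fixed-effect model."""
    weights = [1.0 / (v + tau2) for v in variances]
    total = sum(weights)
    mean = sum(w * y for w, y in zip(weights, effects)) / total
    relative = [w / total for w in weights]
    return mean, relative

m_fixed, rel_fixed = summary_effect(effects, variances)
m_random, rel_random = summary_effect(effects, variances, tau2)

# Under random effects the relative weights are more balanced, so the
# extreme large study pulls the summary upward less than under fixed effect.
spread_fixed = max(rel_fixed) / min(rel_fixed)     # about 5 here
spread_random = max(rel_random) / min(rel_random)  # about 2.3 here
```

With these made-up numbers the fixed-effect summary sits closer to the large study's extreme effect than the random-effects summary does, mirroring the Donat example in the text.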

CONFIDENCE INTERVAL

Under the fixed-effect model the only source of uncertainty is the within-study (sampling or estimation) error. Under the random-effects model there is this same source of uncertainty plus an additional source (between-studies variance). It follows that the variance, standard error, and confidence interval for the summary effect will always be larger (or wider) under the random-effects model than under the fixed-effect model (unless T² is zero, in which case the two models are the same). In this example, the standard error is accordingly smaller for the fixed-effect model than for the random-effects model.

[Figure 13.3 Very large studies under fixed-effect model: effect size and 95% confidence interval for Studies A, B, C, and D, and the summary.]

[Figure 13.4 Very large studies under random-effects model: effect size and 95% confidence interval for Studies A, B, C, and D, and the summary.]

Consider what would happen if we had five studies, and each study had an infinitely large sample size. Under either model the confidence interval for the effect size in each study would have a width approaching zero, since we know the effect size in that study with perfect precision. Under the fixed-effect model the summary effect would also have a confidence interval with a width of zero, since we know the common effect precisely (Figure 13.3). By contrast, under the random-effects model the width of the confidence interval would not approach zero (Figure 13.4). While we know the effect in each study precisely, these effects have been sampled from a universe of possible effect sizes, and provide only an estimate of the mean effect. Just as the error within a study will approach zero only as the sample size approaches infinity, so too the error of these studies as an estimate of the mean effect will approach zero only as the number of studies approaches infinity.

More generally, it is instructive to consider what factors influence the standard error of the summary effect under the two models. The following formulas are based on a meta-analysis of means from k one-group studies, but the conceptual argument applies to all meta-analyses. The within-study variance of each mean depends on the standard deviation (denoted σ) of participants' scores and the sample size of each study (n). For simplicity we assume that all of the studies have the same sample size and the same standard deviation (see Box 13.1 for details). Under the fixed-effect model the standard error of the summary effect is given by

SE_M = \sqrt{\frac{\sigma^2}{k n}}.   (13.1)

It follows that with a large enough sample size the standard error will approach zero, and this is true whether the sample size is concentrated on one or two studies, or dispersed across any number of studies.
Under the random-effects model the standard error of the summary effect is given by

SE_M = \sqrt{\frac{\sigma^2}{k n} + \frac{\tau^2}{k}}.   (13.2)

The first term is identical to that for the fixed-effect model and, again, with a large enough sample size this term will approach zero. By contrast, the second term (which reflects the between-studies variance) will only approach zero as the number of studies approaches infinity. These formulas do not apply exactly in practice, but the conceptual argument does. Namely, increasing the sample size within studies is not sufficient to reduce the standard error beyond a certain point (where that point is determined by τ² and k). If there is only a small number of studies, then the standard error could still be substantial even if the total n is in the tens of thousands or higher.
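This behavior can be checked by evaluating formulas (13.1) and (13.2) directly for growing per-study sample sizes. The parameter values below are arbitrary, chosen only to make the floor visible.

```python
import math

def se_fixed(sigma2, n, k):
    # Equation (13.1): the standard error shrinks with both k and n.
    return math.sqrt(sigma2 / (k * n))

def se_random(sigma2, n, k, tau2):
    # Equation (13.2): the tau^2 / k term does not depend on n at all.
    return math.sqrt(sigma2 / (k * n) + tau2 / k)

sigma2, k, tau2 = 100.0, 5, 4.0  # arbitrary illustrative values
for n in (10, 1_000, 1_000_000):
    print(n,
          round(se_fixed(sigma2, n, k), 4),
          round(se_random(sigma2, n, k, tau2), 4))

# As n grows, se_fixed approaches zero, while se_random approaches the
# floor sqrt(tau2 / k) no matter how large n becomes.
floor = math.sqrt(tau2 / k)
```

Only increasing k, the number of studies, lowers that floor, which is the conceptual argument of this section.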

BOX 13.1 FACTORS THAT INFLUENCE THE STANDARD ERROR OF THE SUMMARY EFFECT

To illustrate the concepts with some simple formulas, let us consider a meta-analysis of studies with the very simplest design, such that each study comprises a single sample of n observations with standard deviation σ. We combine estimates of the mean in a meta-analysis. The variance of each estimate is

V_{Y_i} = \frac{\sigma^2}{n},

so the (inverse-variance) weight in a fixed-effect meta-analysis is

W_i = \frac{1}{\sigma^2 / n} = \frac{n}{\sigma^2},

and the variance of the summary effect under the fixed-effect model is

V_M = \frac{1}{\sum_{i=1}^{k} W_i} = \frac{1}{k n / \sigma^2} = \frac{\sigma^2}{k n}.

Therefore under the fixed-effect model the (true) standard error of the summary mean is given by

SE_M = \sqrt{\frac{\sigma^2}{k n}}.

Under the random-effects model the weight awarded to each study is

W_i = \frac{1}{\sigma^2 / n + \tau^2},

and the (true) standard error of the summary mean turns out to be

SE_M = \sqrt{\frac{\sigma^2}{k n} + \frac{\tau^2}{k}}.
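As a sanity check on Box 13.1, a small simulation (with assumed values of σ, τ, n, and k) can confirm that the random-effects formula reproduces the sampling variability of the summary mean for this simplest design.

```python
import random
import statistics

random.seed(1)
sigma, tau, n, k = 10.0, 2.0, 50, 5  # assumed values for this sketch
reps = 20_000

summary_means = []
for _ in range(reps):
    # Each study's true mean is drawn from N(0, tau^2); its observed mean
    # then adds sampling error with standard deviation sigma / sqrt(n).
    observed = [random.gauss(random.gauss(0.0, tau), sigma / n**0.5)
                for _ in range(k)]
    # With equal n and sigma all weights are equal, so the summary effect
    # is just the mean of the k observed study means.
    summary_means.append(sum(observed) / k)

empirical_se = statistics.stdev(summary_means)
theoretical_se = (sigma**2 / (k * n) + tau**2 / k) ** 0.5  # Box 13.1 formula
```

The empirical standard deviation of the simulated summary means should land close to the theoretical value sqrt(σ²/(kn) + τ²/k).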

THE NULL HYPOTHESIS

Often, after computing a summary effect, researchers perform a test of the null hypothesis. Under the fixed-effect model the null hypothesis being tested is that there is zero effect in every study. Under the random-effects model the null hypothesis being tested is that the mean effect is zero. Although some may treat these hypotheses as interchangeable, they are in fact different, and it is imperative to choose the test that is appropriate to the inference a researcher wishes to make.

WHICH MODEL SHOULD WE USE?

The selection of a computational model should be based on our expectation about whether or not the studies share a common effect size and on our goals in performing the analysis.

Fixed effect

It makes sense to use the fixed-effect model if two conditions are met. First, we believe that all the studies included in the analysis are functionally identical. Second, our goal is to compute the common effect size for the identified population, and not to generalize to other populations. For example, suppose that a pharmaceutical company will use a thousand patients to compare a drug versus placebo. Because the staff can work with only 100 patients at a time, the company will run a series of ten trials with 100 patients in each. The studies are identical in the sense that any variables which can have an impact on the outcome are the same across the ten studies. Specifically, the studies draw patients from a common pool, using the same researchers, dose, measure, and so on (we assume that there is no concern about practice effects for the researchers, nor about the different starting times of the various cohorts). All the studies are expected to share a common effect and so the first condition is met.
The goal of the analysis is to see if the drug works in the population from which the patients were drawn (and not to extrapolate to other populations), and so the second condition is met as well. In this example the fixed-effect model is a plausible fit for the data and meets the goal of the researchers. It should be clear, however, that this situation is relatively rare. The vast majority of cases will more closely resemble those discussed immediately below.

Random effects

By contrast, when the researcher is accumulating data from a series of studies that had been performed by researchers operating independently, it would be unlikely that all the studies were functionally equivalent. Typically, the subjects or interventions in these studies would have differed in ways that would have impacted on

the results, and therefore we should not assume a common effect size. In these cases the random-effects model is more easily justified than the fixed-effect model. Additionally, the goal of this analysis is usually to generalize to a range of scenarios. Therefore, if one did make the argument that all the studies used an identical, narrowly defined population, then it would not be possible to extrapolate from this population to others, and the utility of the analysis would be severely limited.

A caveat

There is one caveat to the above. If the number of studies is very small, then the estimate of the between-studies variance (τ²) will have poor precision. While the random-effects model is still the appropriate model, we lack the information needed to apply it correctly. In this case the reviewer may choose among several options, each of them problematic.

One option is to report the separate effects and not report a summary effect. The hope is that the reader will understand that we cannot draw conclusions about the effect size and its confidence interval. The problem is that some readers will revert to vote counting (see Chapter 28) and possibly reach an erroneous conclusion.

Another option is to perform a fixed-effect analysis. This approach would yield a descriptive analysis of the included studies, but would not allow us to make inferences about a wider population. The problem with this approach is that (a) we do want to make inferences about a wider population, and (b) readers will make these inferences even if they are not warranted.

A third option is to take a Bayesian approach, where the estimate of τ² is based on data from outside of the current set of studies. This is probably the best option, but the problem is that relatively few researchers have expertise in Bayesian meta-analysis. Additionally, some researchers have a philosophical objection to this approach.
For a more general discussion of this issue see 'When Does It Make Sense to Perform a Meta-Analysis?' in Chapter 40.

MODEL SHOULD NOT BE BASED ON THE TEST FOR HETEROGENEITY

In the next chapter we will introduce a test of the null hypothesis that the between-studies variance is zero. This test is based on the amount of between-studies variance observed, relative to the amount we would expect if the studies actually shared a common effect size.

Some have adopted the practice of starting with a fixed-effect model and then switching to a random-effects model if the test of homogeneity is statistically significant. This practice should be strongly discouraged, because the decision to use the random-effects model should be based on our understanding of whether or not all studies share a common effect size, and not on the outcome of a statistical test (especially since the test for heterogeneity often suffers from low power).

If the study effect sizes are seen as having been sampled from a distribution of effect sizes, then the random-effects model, which reflects this idea, is the logical one to use. If the between-studies variance is substantial (and statistically significant) then the fixed-effect model is inappropriate. However, even if the between-studies variance does not meet the criterion for statistical significance (which may be due simply to low power) we should still take account of this variance when assigning weights. If T² turns out to be zero, then the random-effects analysis reduces to the fixed-effect analysis, and so there is no cost to using this model. On the other hand, if one has elected to use the fixed-effect model a priori but the test of homogeneity is statistically significant, then it would be important to revisit the assumptions that led to the selection of a fixed-effect model.

CONCLUDING REMARKS

Our discussion of the differences between the fixed-effect model and the random-effects model has focused largely on the computation of a summary effect and the confidence intervals for the summary effect. We did not address the implications of the dispersion itself. Under the fixed-effect model we assume that all dispersion in observed effects is due to sampling error, but under the random-effects model we allow that some of that dispersion reflects real differences in effect size across studies. In the chapters that follow we discuss methods to quantify that dispersion and to consider its substantive implications.

Although throughout this book we define a fixed-effect meta-analysis as assuming that every study has a common true effect size, some have argued that the fixed-effect method is valid without making this assumption. The point estimate of the effect in a fixed-effect meta-analysis is simply a weighted average and does not strictly require the assumption that all studies estimate the same thing.
For simplicity and clarity we adopt a definition of a fixed-effect meta-analysis that does assume homogeneity of effect.

SUMMARY POINTS

- A fixed-effect meta-analysis estimates a single effect that is assumed to be common to every study, while a random-effects meta-analysis estimates the mean of a distribution of effects.
- Study weights are more balanced under the random-effects model than under the fixed-effect model. Large studies are assigned less relative weight, and small studies are assigned more relative weight, as compared with the fixed-effect model.
- The standard error of the summary effect and (it follows) the confidence intervals for the summary effect are wider under the random-effects model than under the fixed-effect model.

- The selection of a model must be based solely on the question of which model fits the distribution of effect sizes, and takes account of the relevant source(s) of error. When studies are gathered from the published literature, the random-effects model is generally a more plausible match.
- The strategy of starting with a fixed-effect model and then moving to a random-effects model if the test for heterogeneity is significant is a mistake, and should be strongly discouraged.


N Mean Std. Deviation Std. Error of Mean

DEPARTMENT OF POLITICAL SCIENCE AND INTERNATIONAL RELATIONS Posc/Uapp 815 SAMPLE QUESTIONS FOR THE FINAL EXAMINATION I. READING: A. Read Agresti and Finalay, Chapters 6, 7, and 8 carefully. 1. Ignore the

Module 5 Hypotheses Tests: Comparing Two Groups

Module 5 Hypotheses Tests: Comparing Two Groups Objective: In medical research, we often compare the outcomes between two groups of patients, namely exposed and unexposed groups. At the completion of this

SHORT ANSWER. Write the word or phrase that best completes each statement or answers the question.

Practice for Chapter 9 and 10 The acutal exam differs. SHORT ANSWER. Write the word or phrase that best completes each statement or answers the question. Find the number of successes x suggested by the

Sample size estimation is an important concern

Sample size and power calculations made simple Evie McCrum-Gardner Background: Sample size estimation is an important concern for researchers as guidelines must be adhered to for ethics committees, grant

Hypothesis Testing. April 21, 2009

Hypothesis Testing April 21, 2009 Your Claim is Just a Hypothesis I ve never made a mistake. Once I thought I did, but I was wrong. Your Claim is Just a Hypothesis Confidence intervals quantify how sure

AP Statistics 2001 Solutions and Scoring Guidelines

AP Statistics 2001 Solutions and Scoring Guidelines The materials included in these files are intended for non-commercial use by AP teachers for course and exam preparation; permission for any other use

Two-Variable Regression: Interval Estimation and Hypothesis Testing

Two-Variable Regression: Interval Estimation and Hypothesis Testing Jamie Monogan University of Georgia Intermediate Political Methodology Jamie Monogan (UGA) Confidence Intervals & Hypothesis Testing

Statistical Inference

Statistical Inference Idea: Estimate parameters of the population distribution using data. How: Use the sampling distribution of sample statistics and methods based on what would happen if we used this

Introduction. Hypothesis Testing. Hypothesis Testing. Significance Testing

Introduction Hypothesis Testing Mark Lunt Arthritis Research UK Centre for Ecellence in Epidemiology University of Manchester 13/10/2015 We saw last week that we can never know the population parameters

Statistical Analysis I

CTSI BERD Research Methods Seminar Series Statistical Analysis I Lan Kong, PhD Associate Professor Department of Public Health Sciences December 22, 2014 Biostatistics, Epidemiology, Research Design(BERD)

12: Analysis of Variance. Introduction

1: Analysis of Variance Introduction EDA Hypothesis Test Introduction In Chapter 8 and again in Chapter 11 we compared means from two independent groups. In this chapter we extend the procedure to consider

6. Duality between confidence intervals and statistical tests

6. Duality between confidence intervals and statistical tests Suppose we carry out the following test at a significance level of 100α%. H 0 :µ = µ 0 H A :µ µ 0 Then we reject H 0 if and only if µ 0 does

Does Sample Size Still Matter?

Does Sample Size Still Matter? David Bakken and Megan Bond, KJT Group Introduction The survey has been an important tool in academic, governmental, and commercial research since the 1930 s. Because in

Math 62 Statistics Sample Exam Questions

Math 62 Statistics Sample Exam Questions 1. (10) Explain the difference between the distribution of a population and the sampling distribution of a statistic, such as the mean, of a sample randomly selected

Planning sample size for randomized evaluations

TRANSLATING RESEARCH INTO ACTION Planning sample size for randomized evaluations Simone Schaner Dartmouth College povertyactionlab.org 1 Course Overview Why evaluate? What is evaluation? Outcomes, indicators

Thomas Stratmann Mercatus Center Scholar Professor of Economics, George Mason University

Thomas Stratmann Mercatus Center Scholar Professor of Economics, George Mason University to Representative Scott Garrett Chairman of the Capital Markets and Government Sponsored Enterprises Subcommittee

Business Statistics. Lecture 8: More Hypothesis Testing

Business Statistics Lecture 8: More Hypothesis Testing 1 Goals for this Lecture Review of t-tests Additional hypothesis tests Two-sample tests Paired tests 2 The Basic Idea of Hypothesis Testing Start

General Method: Difference of Means. 3. Calculate df: either Welch-Satterthwaite formula or simpler df = min(n 1, n 2 ) 1.

General Method: Difference of Means 1. Calculate x 1, x 2, SE 1, SE 2. 2. Combined SE = SE1 2 + SE2 2. ASSUMES INDEPENDENT SAMPLES. 3. Calculate df: either Welch-Satterthwaite formula or simpler df = min(n

The Basics of a Hypothesis Test

Overview The Basics of a Test Dr Tom Ilvento Department of Food and Resource Economics Alternative way to make inferences from a sample to the Population is via a Test A hypothesis test is based upon A

An Introduction to Statistics Course (ECOE 1302) Spring Semester 2011 Chapter 10- TWO-SAMPLE TESTS

The Islamic University of Gaza Faculty of Commerce Department of Economics and Political Sciences An Introduction to Statistics Course (ECOE 130) Spring Semester 011 Chapter 10- TWO-SAMPLE TESTS Practice

CHAPTER 13 SIMPLE LINEAR REGRESSION. Opening Example. Simple Regression. Linear Regression

Opening Example CHAPTER 13 SIMPLE LINEAR REGREION SIMPLE LINEAR REGREION! Simple Regression! Linear Regression Simple Regression Definition A regression model is a mathematical equation that descries the

2. Hypotheses reflect past experience with similar questions (educated propositions about cause). 3. Multiple hypotheses should be proposed whenever p

Large Sample tests of hypothesis Main points in this chapter 1. Standard method to test research questions. 2. Discussion - risks involved when decision based on the test is incorrect. 3. Detailed discussion

3.4 Statistical inference for 2 populations based on two samples

3.4 Statistical inference for 2 populations based on two samples Tests for a difference between two population means The first sample will be denoted as X 1, X 2,..., X m. The second sample will be denoted

Confidence Interval: pˆ = E = Indicated decision: < p <

Hypothesis (Significance) Tests About a Proportion Example 1 The standard treatment for a disease works in 0.675 of all patients. A new treatment is proposed. Is it better? (The scientists who created

LAB 4 ASSIGNMENT CONFIDENCE INTERVALS AND HYPOTHESIS TESTING. Using Data to Make Decisions

LAB 4 ASSIGNMENT CONFIDENCE INTERVALS AND HYPOTHESIS TESTING This lab assignment will give you the opportunity to explore the concept of a confidence interval and hypothesis testing in the context of a

11. Logic of Hypothesis Testing

11. Logic of Hypothesis Testing A. Introduction B. Significance Testing C. Type I and Type II Errors D. One- and Two-Tailed Tests E. Interpreting Significant Results F. Interpreting Non-Significant Results

The California Critical Thinking Skills Test. David K. Knox Director of Institutional Assessment

The California Critical Thinking Skills Test David K. Knox Director of Institutional Assessment What is the California Critical Thinking Skills Test? The California Critical Thinking Skills Test (CCTST)

Hypothesis testing for µ:

University of California, Los Angeles Department of Statistics Statistics 13 Elements of a hypothesis test: Hypothesis testing Instructor: Nicolas Christou 1. Null hypothesis, H 0 (always =). 2. Alternative

OLS is not only unbiased it is also the most precise (efficient) unbiased estimation technique - ie the estimator has the smallest variance

Lecture 5: Hypothesis Testing What we know now: OLS is not only unbiased it is also the most precise (efficient) unbiased estimation technique - ie the estimator has the smallest variance (if the Gauss-Markov

Online 12 - Sections 9.1 and 9.2-Doug Ensley

Student: Date: Instructor: Doug Ensley Course: MAT117 01 Applied Statistics - Ensley Assignment: Online 12 - Sections 9.1 and 9.2 1. Does a P-value of 0.001 give strong evidence or not especially strong

PUBLIC HEALTH OPTOMETRY ECONOMICS. Kevin D. Frick, PhD

Chapter Overview PUBLIC HEALTH OPTOMETRY ECONOMICS Kevin D. Frick, PhD This chapter on public health optometry economics describes the positive and normative uses of economic science. The terms positive

Chapter 7 Section 7.1: Inference for the Mean of a Population

Chapter 7 Section 7.1: Inference for the Mean of a Population Now let s look at a similar situation Take an SRS of size n Normal Population : N(, ). Both and are unknown parameters. Unlike what we used

Introduction to. Hypothesis Testing CHAPTER LEARNING OBJECTIVES. 1 Identify the four steps of hypothesis testing.

Introduction to Hypothesis Testing CHAPTER 8 LEARNING OBJECTIVES After reading this chapter, you should be able to: 1 Identify the four steps of hypothesis testing. 2 Define null hypothesis, alternative

The Sampling Distribution of the Mean Confidence Intervals for a Proportion

Math 130 Jeff Stratton Name The Sampling Distribution of the Mean Confidence Intervals for a Proportion Goal: To gain experience with the sampling distribution of the mean, and with confidence intervals

Unit 26 Estimation with Confidence Intervals

Unit 26 Estimation with Confidence Intervals Objectives: To see how confidence intervals are used to estimate a population proportion, a population mean, a difference in population proportions, or a difference

How to Conduct a Hypothesis Test

How to Conduct a Hypothesis Test The idea of hypothesis testing is relatively straightforward. In various studies we observe certain events. We must ask, is the event due to chance alone, or is there some

1. What is the critical value for this 95% confidence interval? CV = z.025 = invnorm(0.025) = 1.96

1 Final Review 2 Review 2.1 CI 1-propZint Scenario 1 A TV manufacturer claims in its warranty brochure that in the past not more than 10 percent of its TV sets needed any repair during the first two years

7 Hypothesis testing - one sample tests

7 Hypothesis testing - one sample tests 7.1 Introduction Definition 7.1 A hypothesis is a statement about a population parameter. Example A hypothesis might be that the mean age of students taking MAS113X

T O P I C 1 1 Introduction to statistics Preview Introduction In previous topics we have looked at ways of gathering data for research purposes and ways of organising and presenting it. In topics 11 and

ELEMENTARY STATISTICS IN CRIMINAL JUSTICE RESEARCH: THE ESSENTIALS

ELEMENTARY STATISTICS IN CRIMINAL JUSTICE RESEARCH: THE ESSENTIALS 2005 James A. Fox Jack Levin ISBN 0-205-42053-2 (Please use above number to order your exam copy.) Visit www.ablongman.com/replocator

Hypothesis tests, confidence intervals, and bootstrapping

Hypothesis tests, confidence intervals, and bootstrapping Business Statistics 41000 Fall 2015 1 Topics 1. Hypothesis tests Testing a mean: H0 : µ = µ 0 Testing a proportion: H0 : p = p 0 Testing a difference
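The weighting logic described in this chapter can be sketched in code. Under the fixed-effect model each study is weighted by the inverse of its within-study variance, 1/v_i; under the random-effects model the between-study variance tau^2 is added, giving weights 1/(v_i + tau^2). The sketch below uses the DerSimonian-Laird method-of-moments estimator for tau^2; the effect sizes and variances are illustrative values, not the data behind Figures 13.1 and 13.2.

```python
import math

def fixed_effect(effects, variances):
    """Fixed-effect summary: weight each study by 1/v_i."""
    w = [1.0 / v for v in variances]
    m = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return m, se

def dersimonian_laird_tau2(effects, variances):
    """Method-of-moments estimate of the between-study variance tau^2."""
    w = [1.0 / v for v in variances]
    m, _ = fixed_effect(effects, variances)
    q = sum(wi * (yi - m) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    return max(0.0, (q - df) / c)  # tau^2 is truncated at zero

def random_effects(effects, variances):
    """Random-effects summary: weight each study by 1/(v_i + tau^2)."""
    tau2 = dersimonian_laird_tau2(effects, variances)
    w = [1.0 / (v + tau2) for v in variances]
    m = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1.0 / sum(w))
    return m, se

# Illustrative standardized mean differences (g) and their variances
g = [0.10, 0.30, 0.35, 0.65, 0.45, 0.15]
v = [0.04, 0.03, 0.05, 0.01, 0.04, 0.02]

fe, fe_se = fixed_effect(g, v)
re, re_se = random_effects(g, v)
```

Because the random-effects weights all include the same tau^2 term, they are more similar to one another than the fixed-effect weights: large studies dominate less (compare the relative weights in Figures 13.1 and 13.2), and the summary effect's standard error is at least as large as under the fixed-effect model.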