Multiple Testing

Joseph P. Romano, Azeem M. Shaikh, and Michael Wolf


Abstract

Multiple testing refers to any instance that involves the simultaneous testing of more than one hypothesis. If decisions about the individual hypotheses are based on the unadjusted marginal p-values, then there is typically a large probability that some of the true null hypotheses will be rejected. Unfortunately, such a course of action is still common. In this article, we describe the problem of multiple testing more formally and discuss methods which account for the multiplicity issue. In particular, recent developments based on resampling result in an improved ability to reject false hypotheses compared to classical methods such as Bonferroni.

KEY WORDS: Multiple Testing, Familywise Error Rate, Resampling

Multiple testing refers to any instance that involves the simultaneous testing of several hypotheses. This scenario is quite common in much of empirical research in economics. Some examples include: (i) one fits a multiple regression model and wishes to decide which coefficients are different from zero; (ii) one compares several forecasting strategies to a benchmark and wishes to decide which strategies outperform the benchmark; (iii) one evaluates a program with respect to multiple outcomes and wishes to decide for which outcomes the program yields significant effects.

If one does not take the multiplicity of tests into account, then the probability that some of the true null hypotheses are rejected by chance alone may be unduly large. Take the case of S = 100 hypotheses being tested at the same time, all of them true, with the size and level of each test exactly equal to α. For α = 0.05, one expects five true hypotheses to be rejected. Further, if all tests are mutually independent, then the probability that at least one true null hypothesis will be rejected is 1 − 0.95^100 ≈ 0.994.

Of course, there is no problem if one focuses on a particular hypothesis, and only this one, a priori. The decision can still be based on the corresponding marginal p-value. The problem only arises if one searches the list of p-values for significant results a posteriori. Unfortunately, the latter case is much more common.

Notation

Suppose data X is generated from some unknown probability distribution P. In anticipation of asymptotic results, we may write X = X^(n), where n typically refers to the sample size. A model assumes that P belongs to a certain family of probability distributions, though we make no rigid requirements for this family; it may be a parametric, semiparametric, or nonparametric model. Consider the problem of simultaneously testing a null hypothesis H_s against the alternative hypothesis H′_s, for s = 1, ..., S.
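The arithmetic in this example can be checked in a few lines (the numbers are just those from the text):

```python
# With S independent tests of true nulls, each at level alpha:
# the expected number of false rejections is S * alpha, and the
# probability of at least one false rejection is 1 - (1 - alpha)^S.
S = 100
alpha = 0.05

expected_false = S * alpha               # 5.0
prob_at_least_one = 1 - (1 - alpha) ** S

print(expected_false)                    # 5.0
print(round(prob_at_least_one, 3))       # 0.994
```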
A multiple testing procedure (MTP) is a rule which makes some decision about each H_s. The term false discovery refers to the rejection of a true null hypothesis. Also, let I(P) denote the set of true null hypotheses, that is, s ∈ I(P) if and only if H_s is true. We also assume that a test of the individual hypothesis H_s is based on a test statistic T_{n,s}, with large values indicating evidence against H_s. A marginal p-value for testing H_s is denoted by p̂_{n,s}.

Familywise Error Rate

Accounting for the multiplicity of individual tests can be achieved by controlling an appropriate error rate. The traditional or classical familywise error rate (FWE) is the probability of one or more false discoveries:

    FWE_P = P{reject at least one hypothesis H_s with s ∈ I(P)}.

Control of the FWE means that, for a given significance level α,

    FWE_P ≤ α for any P.    (1)

Control of the FWE allows one to be 1 − α confident that there are no false discoveries among the rejected hypotheses. Note that control of the FWE is equated with finite-sample control: (1) is required to hold for any given sample size n. However, such a requirement can often only be achieved under strict parametric assumptions or for special randomization set-ups. Instead, we then settle for asymptotic control of the FWE:

    lim sup_{n→∞} FWE_P ≤ α for any P.    (2)

Methods Based on Marginal p-values

MTPs falling in this category are derived from the marginal or individual p-values. They do not attempt to incorporate any information about the dependence structure between these p-values. There are two advantages to such methods. First, we might only have access to the list of p-values from a past study, but not to the underlying complete data set. Second, such methods can be implemented very quickly. On the other hand, as discussed later, such methods are generally suboptimal in terms of power.

To show that such methods control the desired error rate, we need a condition on the p-values corresponding to the true null hypotheses:

    H_s true, that is, s ∈ I(P)  ⟹  P{p̂_{n,s} ≤ u} ≤ u for any u ∈ (0, 1).    (3)

Condition (3) merely asserts that, when testing H_s alone, the test that rejects H_s when p̂_{n,s} ≤ u has level u; that is, p̂_{n,s} is a proper p-value.

The classical method to control the FWE is the Bonferroni method, which rejects H_s if and only if p̂_{n,s} ≤ α/S. More generally, the weighted Bonferroni method rejects H_s if p̂_{n,s} ≤ w_s · α, where the constants w_s, satisfying w_s ≥ 0 and Σ_s w_s = 1, reflect the importance of the individual hypotheses; the choice w_s = 1/S for all s recovers the ordinary Bonferroni method.

An improvement is obtained by the method of Holm (1979). The marginal p-values are ordered from smallest to largest, p̂_{n,(1)} ≤ p̂_{n,(2)} ≤ ... ≤ p̂_{n,(S)}, with their corresponding null hypotheses labeled accordingly: H_(1), H_(2), ..., H_(S). Then, H_(s) is rejected if and only if p̂_{n,(j)} ≤ α/(S − j + 1) for all j = 1, ..., s. In other words, the method starts by testing the most significant hypothesis, comparing its p-value to α/S, just as the Bonferroni method does. If this hypothesis is rejected, the method moves on to the second most significant hypothesis, comparing its p-value to α/(S − 1), and so on, until the procedure comes to a stop. Necessarily,

all hypotheses rejected by Bonferroni will also be rejected by Holm, but potentially a few more will be rejected, too. So, trivially, the Holm method is more powerful, though it still controls the FWE under (3). If it is known that the p-values are suitably positively dependent, then further improvements can be obtained with the use of Simes' identity; see Sarkar (1998).

So far, we have assumed finite-sample validity of the null p-values, as expressed by (3). However, p-values are often derived from asymptotic approximations or resampling methods, which only guarantee asymptotic validity instead:

    H_s true, that is, s ∈ I(P)  ⟹  lim sup_{n→∞} P{p̂_{n,s} ≤ u} ≤ u for any u ∈ (0, 1).    (4)

Under this more realistic condition, the MTPs presented in this section only provide asymptotic control of the FWE in the sense of (2).

Single-step versus Stepwise Methods

In single-step MTPs, individual test statistics are compared to their critical values simultaneously, and after this simultaneous joint comparison, the method stops. Often there is only one common critical value, but this need not be the case. More generally, the critical value for the s-th test statistic may depend on s; an example is the weighted Bonferroni method discussed above.

Often single-step methods can be improved in terms of power via stepwise methods, while still maintaining control of the desired error rate. Stepdown methods start with a single-step method but then continue by possibly rejecting further hypotheses in subsequent steps. This is achieved by decreasing the critical values for the remaining hypotheses depending on the hypotheses already rejected in previous steps. As soon as no further hypotheses are rejected, the method stops. The Holm (1979) method discussed above is a stepdown method. Stepdown methods therefore improve upon single-step methods by possibly rejecting less significant hypotheses in subsequent steps.
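A minimal sketch of the Bonferroni and Holm rules described above (the function names and the example p-values are illustrative only):

```python
# Bonferroni and Holm (1979) rules from the text.
# Both return the indices of the rejected hypotheses.

def bonferroni_reject(pvals, alpha=0.05):
    """Reject H_s iff p_s <= alpha / S."""
    S = len(pvals)
    return [s for s, p in enumerate(pvals) if p <= alpha / S]

def holm_reject(pvals, alpha=0.05):
    """Stepdown: compare the j-th smallest p-value to alpha / (S - j + 1)."""
    S = len(pvals)
    order = sorted(range(S), key=lambda s: pvals[s])  # most significant first
    rejected = []
    for j, s in enumerate(order):                     # j is 0-based here,
        if pvals[s] <= alpha / (S - j):               # so this is alpha/(S - j + 1) in 1-based terms
            rejected.append(s)
        else:
            break                                     # stop at the first non-rejection
    return rejected

pvals = [0.001, 0.015, 0.020, 0.300]
print(bonferroni_reject(pvals))  # [0]
print(holm_reject(pvals))        # [0, 1, 2]
```

As the example shows, Holm rejects everything Bonferroni rejects, and possibly more.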
In contrast, there also exist stepup methods, which step up from the least significant hypotheses to examine the more significant hypotheses in subsequent steps; see Romano and Shaikh (2006). More general methods to construct MTPs which control the FWE can be obtained by the closure method; see Hochberg and Tamhane (1987).

Resampling Methods Accounting for Dependence

Methods based on p-values often achieve (asymptotic) control of the FWE by assuming (i) a worst-case dependence structure or (ii) a convenient dependence structure (such as mutual independence). This has two potential disadvantages. In case of (i), the method can be quite suboptimal in terms of power if the true dependence structure is far away from the worst-case scenario. In case of (ii), if the convenient dependence structure does not hold, even asymptotic control may not result.

As an example of case (i), consider the Bonferroni method. If the p-values were perfectly dependent, the cut-off value could be changed from α/S to α. While perfect dependence is rare, the example serves to make a point: in the realistic scenario of strong cross-dependence, the cut-off value could be changed to something much larger than α/S while still maintaining control of the FWE. Hence, it is desirable to account for the underlying dependence structure. Of course, this dependence structure is unknown and must be (implicitly) estimated from the available data. Consistent estimation, in general, requires that the sample size grow to infinity. Therefore, in this subsection, we settle for asymptotic control of the FWE. In addition, we specialize to making simultaneous inference on the elements of a parameter vector θ = (θ_1, ..., θ_S)^T. Assume the individual hypotheses are one-sided of the form

    H_s: θ_s ≤ 0  vs.  H′_s: θ_s > 0.    (5)

Modifications for two-sided hypotheses are straightforward. The test statistics are of the form T_{n,s} = θ̂_{n,s}/σ̂_{n,s}. Here, θ̂_{n,s} is an estimator of θ_s computed from X^(n). Further, σ̂_{n,s} is either a standard error for θ̂_{n,s} or simply equal to 1/√n in case such a standard error is not available or is very difficult to obtain.

We start by discussing a single-step method. An idealized method would reject all H_s for which T_{n,s} ≥ d_1, where d_1 is the 1 − α quantile under P of the random variable max_s (θ̂_{n,s} − θ_s)/σ̂_{n,s}. Naturally, the quantile d_1 depends not only on the marginal distributions of the centered statistics (θ̂_{n,s} − θ_s)/σ̂_{n,s} but, crucially, also on their dependence structure. Since P is unknown, the idealized critical value d_1 is not available. But it can be estimated consistently under weak regularity conditions as follows. Take d̂_1 as the 1 − α quantile under P̂_n of max_s (θ̂*_{n,s} − θ̂_{n,s})/σ̂*_{n,s}. Here, P̂_n is an unrestricted estimate of P.
Further, θ̂*_{n,s} and σ̂*_{n,s} are the estimator of θ_s and its standard error (or simply 1/√n), respectively, computed from the bootstrap sample X^(n)*, where X^(n)* ∼ P̂_n. In other words, we use the bootstrap to estimate d_1. The particular choice of P̂_n depends on the situation. In particular, if the data are collected over time, a suitable time series bootstrap needs to be employed; see Davison and Hinkley (1997) and Lahiri (2003).

We have thus described a single-step MTP. However, a stepdown improvement is possible. In any given step j, we simply discard the hypotheses that have been rejected so far and apply the single-step MTP to the remaining universe of non-rejected hypotheses. The resulting critical value d̂_j necessarily satisfies d̂_j ≤ d̂_{j−1}, so that new rejections may result; otherwise, the method stops.

This bootstrap stepdown MTP provides asymptotic control of the FWE under remarkably weak regularity conditions. Mainly, it is assumed that √n(θ̂_n − θ) converges in distribution to a continuous (multivariate) limit distribution and that the bootstrap consistently estimates this limit distribution. In addition, if standard errors are employed for σ̂_{n,s}, as opposed to simply using 1/√n, then it is assumed that they converge to the same non-zero limiting values in probability, both in the real world and in the bootstrap world. Under even weaker regularity conditions, a subsampling approach could be used instead; see Romano and Wolf (2005). Furthermore, when a randomization setup applies, randomization methods can be used as an alternative; again, see Romano and Wolf (2005).

Related methods are developed in White (2000) and Hansen (2005). However, both works are restricted to single-step methods. In addition, White (2000) does not consider studentized test statistics. Stepwise bootstrap methods to control the FWE were already proposed in Westfall and Young (1993). An important difference in their approach is that they bootstrap under the joint null; that is, they use a restricted estimate of P under which the constraints of all null hypotheses hold jointly. This approach requires the so-called subset pivotality condition and is less generally valid than the approaches discussed so far, which are based on an unrestricted estimate of P; for instance, see Example 4.1 of Romano and Wolf (2005).

Generalized Error Rates

So far, attention has been restricted to the FWE. Of course, this criterion is very strict; not even a single true hypothesis is allowed to be rejected. When S is very large, the corresponding MTP might result in low power, where we loosely define power as the ability to reject false null hypotheses.

Let F denote the number of false rejections and let R denote the total number of rejections. The false discovery proportion (FDP) is defined as

    FDP = (F/R) · 1{R > 0},

where 1{·} denotes the indicator function. Instead of the FWE, we may consider the probability of the FDP exceeding a small, pre-specified proportion: P{FDP > γ} for some γ ∈ [0, 1). The special choice γ = 0 reduces to the traditional FWE. Another alternative to the FWE is the false discovery rate (FDR), defined to be the expected value of the FDP:

    FDR_P = E_P[FDP].

By allowing for a small (expected) fraction of false discoveries, we can generally gain a lot of power compared with FWE control, especially when S is large.
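To make the resampling construction above concrete, here is a minimal sketch of the single-step bootstrap method for the one-sided hypotheses (5), with each θ_s taken to be a population mean. The i.i.d. bootstrap and all names here are illustrative assumptions, not the article's prescription; for time series data a block bootstrap would replace the resampling step.

```python
# Single-step bootstrap max-t method (sketch), for H_s: theta_s <= 0
# with theta_s a population mean and an i.i.d. bootstrap (assumption).
import numpy as np

rng = np.random.default_rng(0)

def single_step_bootstrap_reject(X, alpha=0.05, B=1000):
    """X: (n, S) data matrix. Returns a boolean array: which H_s are rejected."""
    n, S = X.shape
    theta_hat = X.mean(axis=0)                        # estimators theta-hat_{n,s}
    sigma_hat = X.std(axis=0, ddof=1) / np.sqrt(n)    # standard errors sigma-hat_{n,s}
    t_stats = theta_hat / sigma_hat                   # T_{n,s}

    # Bootstrap the 1 - alpha quantile of
    # max_s (theta*_{n,s} - theta-hat_{n,s}) / sigma*_{n,s}
    max_stats = np.empty(B)
    for b in range(B):
        Xb = X[rng.integers(0, n, size=n)]            # draw X^(n)* from P-hat_n
        theta_star = Xb.mean(axis=0)
        sigma_star = Xb.std(axis=0, ddof=1) / np.sqrt(n)
        max_stats[b] = np.max((theta_star - theta_hat) / sigma_star)

    d_hat = np.quantile(max_stats, 1 - alpha)         # estimated critical value d-hat_1
    return t_stats >= d_hat
```

A stepdown refinement would re-apply this routine to the hypotheses not yet rejected, yielding critical values d̂_j ≤ d̂_{j−1} as described above.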
For a discussion of MTPs that provide (asymptotic) control of the FDP and the FDR, the reader is referred to Romano et al. (2008a, 2008b) and the references therein.
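The FDP definition above can be illustrated with a toy computation, assuming the set of true nulls I(P) is known, as it would be in a simulation study:

```python
# Toy illustration of the false discovery proportion (FDP),
# assuming the set of true null hypotheses I(P) is known.

def fdp(rejected, true_nulls):
    """FDP = (F / R) * 1{R > 0}: share of rejections that are false discoveries."""
    R = len(rejected)
    F = len(rejected & true_nulls)   # false discoveries among the rejections
    return F / R if R > 0 else 0.0

# 4 rejections, one of which (index 3) is a true null: FDP = 1/4
print(fdp({0, 1, 2, 3}, {3, 4, 5}))  # 0.25
```

The FDR is then simply the expectation of this quantity over repeated samples.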

References

Davison, A. C. and Hinkley, D. V. (1997). Bootstrap Methods and their Application. Cambridge University Press, Cambridge.

Hansen, P. R. (2005). A test for superior predictive ability. Journal of Business and Economic Statistics, 23:365–380.

Hochberg, Y. and Tamhane, A. (1987). Multiple Comparison Procedures. Wiley, New York.

Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scandinavian Journal of Statistics, 6:65–70.

Lahiri, S. N. (2003). Resampling Methods for Dependent Data. Springer, New York.

Romano, J. P. and Shaikh, A. M. (2006). Stepup procedures for control of generalizations of the familywise error rate. Annals of Statistics, 34(4):1850–1873.

Romano, J. P., Shaikh, A. M., and Wolf, M. (2008a). Control of the false discovery rate under dependence using the bootstrap and subsampling (with discussion). Test, 17(3):417–442.

Romano, J. P., Shaikh, A. M., and Wolf, M. (2008b). Formalized data snooping based on generalized error rates. Econometric Theory, 24(2):404–447.

Romano, J. P. and Wolf, M. (2005). Exact and approximate stepdown methods for multiple hypothesis testing. Journal of the American Statistical Association, 100(469):94–108.

Sarkar, S. K. (1998). Some probability inequalities for ordered MTP₂ random variables: a proof of Simes' conjecture. Annals of Statistics, 26:494–504.

Westfall, P. H. and Young, S. S. (1993). Resampling-Based Multiple Testing: Examples and Methods for p-Value Adjustment. John Wiley, New York.

White, H. L. (2000). A reality check for data snooping. Econometrica, 68(5):1097–1126.