Raimund Alt (Author): Multiple Hypotheses Testing in the Linear Regression Model with Applications to Economics and Finance

Raimund Alt (Author): Multiple Hypotheses Testing in the Linear Regression Model with Applications to Economics and Finance
https://cuvillier.de/de/shop/publications/2694
Copyright: Cuvillier Verlag, owner Annette Jentzsch-Cuvillier, Nonnenstieg 8, 37075 Göttingen, Germany
Telephone: +49 (0)551 54724-0, E-Mail: info@cuvillier.de, Website: https://cuvillier.de

Chapter 1

Introduction

Multiple hypotheses testing is a branch of statistics which is concerned with specific issues arising when two or more null hypotheses are tested simultaneously. While multiple test procedures are applied in a number of fields, it seems that they are mostly used in biometrics and medical statistics. Incidentally, one of the earliest contributions to the subject is due to the eminent biometrician Fisher (1935[27]), who proposed the application of two now classical test procedures, the Bonferroni procedure and the LSD procedure, in the context of pairwise multiple comparisons of means.

One of the most important issues in multiple hypotheses testing is the adequate control of type I error probabilities. As is well known, a type I error is committed whenever a true null hypothesis is rejected. Controlling type I error probabilities in a multiple test situation is a nontrivial problem, because the occurrence of multiple type I errors has to be taken into account. To illustrate this so-called multiplicity effect, let us consider the case of testing two null hypotheses H01 and H02 against the alternative hypotheses H11 and H12, respectively. We assume that for each null hypothesis there is an exact level α test, i.e. each test has the property that the probability of falsely rejecting its respective null hypothesis is equal to some prespecified level α.

Provided that all combinations of the two null hypotheses are logically possible, the type I error probabilities for this multiple test situation are as follows:

Table 1.1  Type I error probabilities when testing H01 and H02

    State of the world           Type I error probability
    (1) H01 true,  H02 true      P(H01 is rejected or H02 is rejected)
    (2) H01 true,  H02 false     α
    (3) H01 false, H02 true      α
    (4) H01 false, H02 false     0

While the type I error probabilities in the states (2) and (3) are bounded by α, which is a direct consequence of the way the tests were defined, this does not hold for state (1). If both null hypotheses are true, it is not guaranteed that the probability of committing any type I error is bounded by α. On the contrary, the type I error probability for state (1) satisfies the inequality

    α ≤ P(H01 is rejected or H02 is rejected) ≤ 2α.

In general we are not able to calculate the exact probability, unless the dependence structure of the tests is explicitly known.
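To see the multiplicity effect numerically, the following Monte Carlo sketch (an illustration added to this excerpt, not part of the original text) simulates two two-sided z-tests at level α = 0.05 with both null hypotheses true and estimates the probability of committing at least one type I error for a few assumed correlations between the two test statistics.

```python
# Minimal simulation sketch (illustration only): two two-sided z-tests, each an
# exact level-alpha test, with both null hypotheses true. The probability of
# committing any type I error depends on the dependence between the test
# statistics, but always lies between alpha and 2*alpha.
import numpy as np

alpha = 0.05
z_crit = 1.959964                 # two-sided normal critical value for alpha = 0.05
n_sim = 1_000_000
rng = np.random.default_rng(0)

z1 = rng.standard_normal(n_sim)   # test statistic for H01 (H01 true)
e = rng.standard_normal(n_sim)

for rho in (0.0, 0.5, 1.0):       # assumed correlation between the two statistics
    z2 = rho * z1 + np.sqrt(1.0 - rho**2) * e   # test statistic for H02 (H02 true)
    any_type_i_error = (np.abs(z1) > z_crit) | (np.abs(z2) > z_crit)
    print(f"rho = {rho:.1f}: P(any type I error) ~ {any_type_i_error.mean():.4f}")

# rho = 0.0 gives roughly 0.0975 = 1 - 0.95**2 (independent tests),
# rho = 1.0 gives 0.0500, the lower bound alpha (the two tests always agree),
# and intermediate dependence gives values in between.
```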

The situation becomes somewhat dramatic if we consider the testing of three null hypotheses H01, H02 and H03 against the alternative hypotheses H11, H12 and H13, respectively. Again we assume that for each null hypothesis there is an exact level α test. Provided that all combinations of the three null hypotheses are logically possible, we get the following type I error probabilities:

Table 1.2  Type I error probabilities when testing H01, H02 and H03

    State of the world                      Type I error probability
    (1) H01 true,  H02 true,  H03 true      P(H01 is rejected or H02 is rejected or H03 is rejected)
    (2) H01 true,  H02 true,  H03 false     P(H01 is rejected or H02 is rejected)
    (3) H01 true,  H02 false, H03 true      P(H01 is rejected or H03 is rejected)
    (4) H01 false, H02 true,  H03 true      P(H02 is rejected or H03 is rejected)
    (5) H01 true,  H02 false, H03 false     α
    (6) H01 false, H02 true,  H03 false     α
    (7) H01 false, H02 false, H03 true      α
    (8) H01 false, H02 false, H03 false     0

It turns out that in four out of eight states the probability of committing any type I error is not bounded by α. If we consider the general case of testing n null hypotheses H01, ..., H0n, implying 2^n possible states, the type I error probability is under control only in n states, namely those in which exactly one null hypothesis is true. In all other states, ignoring the state where all null hypotheses are false, the type I error probability may be substantially larger than the given level α.

The reason for this multiplicity effect is simple. The textbook-like approach described above does not take into account the specific nature of multiple test situations. As a consequence, the inflated occurrence of type I errors results in spurious significance in the employed tests.

To get an intuition of how large the type I error probabilities may be, let us consider the general case of testing n null hypotheses under the additional assumption that the corresponding tests are independent, given that all null hypotheses are true. In this case we are able to derive a simple formula for the so-called global type I error probability:

    P(at least one of the true null hypotheses is rejected)
      = 1 - P(none of the true null hypotheses is rejected)
      = 1 - (1 - α) × ... × (1 - α)    (n factors, by independence)
      = 1 - (1 - α)^n.
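As a quick illustrative check (a short Python sketch under the same independence assumption, added here rather than taken from the book), the formula can be evaluated directly and one value cross-checked by simulation:

```python
# Sketch (illustration only): evaluate the global type I error probability
# 1 - (1 - alpha)^n for n independent exact level-alpha tests, and cross-check
# one entry by simulation. Under a true null hypothesis the p-value of such a
# test is uniform on [0, 1], and the test rejects whenever p <= alpha.
import numpy as np

alpha = 0.05
for n in range(2, 11):
    print(f"n = {n:2d}: 1 - (1 - alpha)^n = {1 - (1 - alpha) ** n:.4f}")

rng = np.random.default_rng(1)
p = rng.uniform(size=(1_000_000, 10))      # 10 independent p-values per replication
print("simulated global error, n = 10:", ((p <= alpha).any(axis=1)).mean())   # about 0.4013
```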

Table 1.3 lists some values of the formula 1 - (1 - α)^n for α = 0.05.

Table 1.3  Global type I error probability

     n     1 - (1 - α)^n
     2     0.0975
     3     0.1426
     4     0.1855
     5     0.2262
     6     0.2649
     7     0.3017
     8     0.3366
     9     0.3698
    10     0.4013

It turns out that the global type I error probability, i.e. the probability of committing any type I error given that all null hypotheses are true, can become quite large, even for small values of n. For example, the case n = 2 yields a global type I error probability of 0.0975, which comes rather close to the upper bound 2α = 0.10.

Compared with their established position in biometrics and medical statistics, multiple test procedures are only of marginal importance in fields like economics or finance. This is somewhat surprising, since the relevant literature abounds with examples where the testing of several null hypotheses is far more the rule than the exception. But it is a matter of fact that empirical researchers working in these fields usually do not adjust for multiplicity when two or more null hypotheses are tested within a given model. Nevertheless, there are a number of references emphasizing the importance of multiple test procedures for applied research. Some of them are given in the following chapters; two of them are mentioned here.

The first reference is Epstein (1987[25]), who made an interesting remark in his book A History of Econometrics when he referred to the early work of the Cowles Commission: "The problem of multiple hypotheses was a primary objection by mathematical statisticians in the 1940s to the Cowles Commission methodology. Marschak [...] interpreted this problem very narrowly as the proper adjustment to the size of significance tests when the same data are used repeatedly for discriminating among hypotheses." Epstein added: "This problem [multiple hypotheses, R.A.] tended to be neglected by the next generation of practitioners but urgently needs more careful attention for future modeling efforts."

The second reference is Savin (1984[80]), who wrote a survey article entitled "Multiple Hypothesis Testing" for one of the volumes of the internationally renowned series Handbooks in Economics. The article presented an overview of the properties of the Bonferroni and the Scheffé procedure within the linear regression model. Unfortunately, the author seemed not to be aware of the path-breaking developments that had taken place in the field of multiple hypotheses testing during the 1970s. In particular, he did not mention the influential papers by Marcus, Peritz and Gabriel (1976[66]) and Holm (1979[43]).

Marcus, Peritz and Gabriel (1976[66]) introduced the concept of closed test procedures, the application of which results in the construction of multi-step or stepwise procedures. These procedures have become very popular in biometrics and medical statistics, due to the fact that they are in general more powerful than traditional single-step procedures. Holm (1979[43]) suggested a sequentially rejective version of a closed test procedure which is at least as powerful as the Bonferroni procedure (and in general more powerful), while still controlling the type I error probability for each combination of true null hypotheses. As a consequence, Holm's version, also called the Bonferroni-Holm procedure, has more and more replaced the classical Bonferroni procedure. Meanwhile there are several modern textbooks available, e.g. Hochberg and Tamhane (1987[39]) or Hsu (1996[47]), replacing Miller's (1981[69]) classic but outdated monograph. A survey article covering the latest developments in the field of multiple hypotheses testing was recently published by Pigeot (2000[76]). Unfortunately, closed test procedures are virtually unknown in the economics and finance literature, aside from a few exceptions like Neusser (1991[71]), Madlener and Alt (1996[65])
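To make the contrast between the classical Bonferroni procedure and Holm's step-down version concrete, here is a minimal Python sketch (an illustration added to this excerpt, not taken from the book, using hypothetical p-values). Both procedures control the probability of any type I error at level α for every combination of true null hypotheses, and Holm rejects every hypothesis that Bonferroni rejects, possibly more.

```python
# Minimal sketch (illustration only): classical Bonferroni adjustment versus
# Holm's sequentially rejective (step-down) procedure for n null hypotheses.
from typing import List

def bonferroni(p_values: List[float], alpha: float = 0.05) -> List[bool]:
    """Reject H0i whenever p_i <= alpha / n."""
    n = len(p_values)
    return [p <= alpha / n for p in p_values]

def holm(p_values: List[float], alpha: float = 0.05) -> List[bool]:
    """Step-down: compare the k-th smallest p-value with alpha / (n - k + 1),
    k = 1, ..., n, and stop at the first non-rejection."""
    n = len(p_values)
    order = sorted(range(n), key=lambda i: p_values[i])
    reject = [False] * n
    for k, i in enumerate(order):          # k runs from 0 to n - 1
        if p_values[i] <= alpha / (n - k):
            reject[i] = True
        else:
            break
    return reject

# Hypothetical p-values for four null hypotheses tested within one model:
p = [0.010, 0.013, 0.045, 0.300]
print(bonferroni(p))   # [True, False, False, False]
print(holm(p))         # [True, True, False, False]
```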