# Bootstrap Methods and Their Application


A. C. Davison and D. V. Hinkley


## Contents

- Preface
- 1 Introduction
- 2 The Basic Bootstraps: Introduction; Parametric Simulation; Nonparametric Simulation; Simple Confidence Intervals; Reducing Error; Statistical Issues; Nonparametric Approximations for Variance and Bias; Subsampling Methods; Bibliographic Notes; Problems; Practicals
- 3 Further Ideas: Introduction; Several Samples; Semiparametric Models; Smooth Estimates of F; Censoring; Missing Data; Finite Population Sampling; Hierarchical Data; Bootstrapping the Bootstrap; Bootstrap Diagnostics; Choice of Estimator from the Data; Bibliographic Notes; Problems; Practicals
- 4 Tests: Introduction; Resampling for Parametric Tests; Nonparametric Permutation Tests; Nonparametric Bootstrap Tests; Adjusted P-values; Estimating Properties of Tests; Bibliographic Notes; Problems; Practicals
- 5 Confidence Intervals: Introduction; Basic Confidence Limit Methods; Percentile Methods; Theoretical Comparison of Methods; Inversion of Significance Tests; Double Bootstrap Methods; Empirical Comparison of Bootstrap Methods; Multiparameter Methods; Conditional Confidence Regions; Prediction; Bibliographic Notes; Problems; Practicals
- 6 Linear Regression: Introduction; Least Squares Linear Regression; Multiple Linear Regression; Aggregate Prediction Error and Variable Selection; Robust Regression; Bibliographic Notes; Problems; Practicals
- 7 Further Topics in Regression: Introduction; Generalized Linear Models; Survival Data; Other Nonlinear Models; Misclassification Error; Nonparametric Regression; Bibliographic Notes; Problems; Practicals
- 8 Complex Dependence: Introduction; Time Series; Point Processes; Bibliographic Notes; Problems; Practicals
- 9 Improved Calculation: Introduction; Balanced Bootstraps; Control Methods; Importance Resampling; Saddlepoint Approximation; Bibliographic Notes; Problems; Practicals
- 10 Semiparametric Likelihood Inference: Likelihood; Multinomial-Based Likelihoods; Bootstrap Likelihood; Likelihood Based on Confidence Sets; Bayesian Bootstraps; Bibliographic Notes; Problems; Practicals
- 11 Computer Implementation: Introduction; Basic Bootstraps; Further Ideas; Tests; Confidence Intervals; Linear Regression; Further Topics in Regression; Time Series; Improved Simulation; Semiparametric Likelihoods
- Appendix 1: Cumulant Calculations

## Preface

The publication in 1979 of Bradley Efron's first article on bootstrap methods was a major event in Statistics, at once synthesizing some of the earlier resampling ideas and establishing a new framework for simulation-based statistical analysis. The idea of replacing complicated and often inaccurate approximations to biases, variances, and other measures of uncertainty by computer simulations caught the imagination of both theoretical researchers and users of statistical methods. Theoreticians sharpened their pencils and set about establishing mathematical conditions under which the idea could work. Once they had overcome their initial skepticism, applied workers sat down at their terminals and began to amass empirical evidence that the bootstrap often did work better than traditional methods. The early trickle of papers quickly became a torrent, with new additions to the literature appearing every month, and it was hard to see when would be a good moment to try and chart the waters. Then the organizers of COMPSTAT '92 invited us to present a course on the topic, and shortly afterwards we began to write this book.

We decided to try and write a balanced account of resampling methods, to include basic aspects of the theory which underpinned the methods, and to show as many applications as we could in order to illustrate the full potential of the methods, warts and all. We quickly realized that in order for us and others to understand and use the bootstrap, we would need suitable software, and producing it led us further towards a practically oriented treatment. Our view was cemented by two further developments: the appearance of two excellent books, one by Peter Hall on the asymptotic theory and the other on basic methods by Bradley Efron and Robert Tibshirani, and the chance to give further courses that included practicals. Our experience has been that hands-on computing is essential in coming to grips with resampling ideas, so we have included practicals in this book, as well as more theoretical problems. As the book expanded, we realized that a fully comprehensive treatment was

work of A. C. Davison through the award of an Advanced Research Fellowship. We also acknowledge support from the U.S. National Science Foundation. We must also mention the Friday evening sustenance provided at the Eagle and Child, the Lamb and Flag, and the Royal Oak. The projects of many authors have flourished in these amiable establishments. Finally, we thank our families, friends and colleagues for their patience while this project absorbed our time and energy.

A. C. Davison and D. V. Hinkley
Lausanne and Santa Barbara, October 1996


## 1 Introduction

The explicit recognition of uncertainty is central to the statistical sciences. Notions such as prior information, probability models, likelihood, standard errors and confidence limits are all intended to formalize uncertainty and thereby make allowance for it. In simple situations, the uncertainty of an estimate may be gauged by analytical calculation based on an assumed probability model for the available data. But in more complicated problems this approach can be tedious and difficult, and its results are potentially misleading if inappropriate assumptions or simplifications have been made.

For illustration, consider Table 1.1, which is taken from a larger tabulation (Table 7.4) of the numbers of AIDS reports in England and Wales from mid-1983 to the end of 1992. Reports are cross-classified by diagnosis period and length of reporting delay, in three-month intervals. A blank in the table corresponds to an unknown (as-yet unreported) entry. The problem is to predict the states of the epidemic in 1991 and 1992, which depend heavily on the values missing at the bottom right of the table.

The data support the assumption that the reporting delay does not depend on the diagnosis period. In this case a simple model is that the number of reports in row $j$ and column $k$ of the table has a Poisson distribution with mean $\mu_{jk} = \exp(\alpha_j + \beta_k)$. If all the cells of the table are regarded as independent, then the total number of unreported diagnoses in period $j$ has a Poisson distribution with mean

$$\sum_k \mu_{jk} = \exp(\alpha_j) \sum_k \exp(\beta_k),$$

where the sum is over columns with blanks in row $j$. The eventual total of as-yet unreported diagnoses from period $j$ can be estimated by replacing $\alpha_j$ and $\beta_k$ by estimates derived from the incomplete table, and thence we obtain the predicted total for period $j$. Such predictions are shown by the solid line

in Figure 1.1, together with the observed total reports to the end of 1992.

Table 1.1: Numbers of AIDS reports in England and Wales to the end of 1992 (De Angelis and Gilks, 1994), extracted from Table 7.4. Reports are cross-classified by diagnosis period (year and quarter) and reporting-delay interval in quarters; a † indicates a reporting delay of less than one month.

How good are these predictions? It would be tedious but possible to put pen to paper and estimate the prediction uncertainty through calculations based on the Poisson model. But in fact the data are much more variable than that model would suggest, and by failing to take this into account we would believe that the predictions are more accurate than they really are. Furthermore, a better approach would be to use a semiparametric model to smooth out the evident variability of the increase in diagnoses from quarter to quarter; the corresponding prediction is the dotted line in Figure 1.1. Analytical calculations for this model would be very unpleasant, and a more flexible line of attack is needed. While more than one approach is possible, the one that we shall develop, based on computer simulation, is both flexible and straightforward.

### Purpose of the Book

Our central goal is to describe how the computer can be harnessed to obtain reliable standard errors, confidence intervals, and other measures of uncertainty for a wide range of problems. The key idea is to resample from the original data, either directly or via a fitted model, to create replicate data sets,

from which the variability of the quantities of interest can be assessed without long-winded and error-prone analytical calculation.

Figure 1.1: Predicted diagnoses from a parametric model (solid) and a semiparametric model (dots) fitted to the AIDS data, together with the actual totals to the end of 1992 (+); diagnoses are plotted against time.

Because this approach involves repeating the original data analysis procedure with many replicate sets of data, these are sometimes called computer-intensive methods. Another name for them is bootstrap methods, because to use the data to generate more data seems analogous to a trick used by the fictional Baron Munchausen, who when he found himself at the bottom of a lake got out by pulling himself up by his bootstraps. In the simplest nonparametric problems we do literally sample from the data, and a common initial reaction is that this is a fraud. In fact it is not. It turns out that a wide range of statistical problems can be tackled this way, liberating the investigator from the need to over-simplify complex problems. The approach can also be applied in simple problems, to check the adequacy of standard measures of uncertainty, to relax assumptions, and to give quick approximate solutions. An example of this is random sampling to estimate the permutation distribution of a nonparametric test statistic.

It is of course true that in many applications we can be fairly confident in a particular parametric model and the standard analysis based on that model. Even so, it can still be helpful to see what can be inferred without particular parametric model assumptions. This is in the spirit of robustness of validity of the statistical analysis performed. Nonparametric bootstrap analysis allows us to do this.
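The resampling idea just described can be sketched in a few lines of code. This is not the book's software (its practicals use routines written in the S language); it is a minimal illustration in Python, with a made-up sample and the sample mean as the statistic of interest.

```python
import random
import statistics

# A hypothetical data set and a statistic of interest (here the mean).
data = [4.2, 1.1, 7.9, 3.5, 5.0, 2.3, 6.8, 4.4]
t = statistics.mean(data)

rng = random.Random(1)
R = 999  # number of replicate data sets

# Each replicate data set is a resample: n draws from the original
# data, with replacement.  Recomputing the statistic on each replicate
# shows how much it would vary under repeated sampling.
t_star = [statistics.mean([rng.choice(data) for _ in data])
          for _ in range(R)]

boot_se = statistics.stdev(t_star)  # bootstrap standard error of t
print(round(t, 3), round(boot_se, 3))
```

The same loop works for any statistic: only the function applied to each resample changes.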

Despite its scope and usefulness, resampling must be carefully applied. Unless certain basic ideas are understood, it is all too easy to produce a solution to the wrong problem, or a bad solution to the right one. Bootstrap methods are intended to help avoid tedious calculations based on questionable assumptions, and this they do. But they cannot replace clear critical thought about the problem, appropriate design of the investigation and data analysis, and incisive presentation of conclusions.

In this book we describe how resampling methods can be used, and evaluate their performance, in a wide range of contexts. Our focus is on the methods and their practical application rather than on the underlying theory, accounts of which are available elsewhere. This book is intended to be useful to the many investigators who want to know how and when the methods can safely be applied, and how to tell when things have gone wrong. The mathematical level of the book reflects this: we have aimed for a clear account of the key ideas without an overload of technical detail.

### Examples

Bootstrap methods can be applied both when there is a well-defined probability model for data and when there is not. In our initial development of the methods we shall make frequent use of two simple examples, one of each type, to illustrate the main points.

Example 1.1 (Air-conditioning data). Table 1.2 gives $n = 12$ times between failures of air-conditioning equipment, for which we wish to estimate the underlying mean $\mu$ or its reciprocal, the failure rate. A simple model for this problem is that the times are sampled from an exponential distribution.

Table 1.2: Service-hours between failures of the air-conditioning equipment in a Boeing 720 jet aircraft (Proschan, 1963).

The dotted line in the left panel of Figure 1.2 is the cumulative distribution function (CDF)

$$F(y) = \begin{cases} 0, & y \le 0, \\ 1 - \exp(-y/\mu), & y > 0, \end{cases}$$

for the fitted exponential distribution with mean $\mu$ set equal to the sample average, $\bar y = 108.083$. The solid line on the same plot is the nonparametric equivalent, the empirical distribution function (EDF) for the data, which

places equal probabilities $n^{-1} = 0.083$ at each sample value. Comparison of the two curves suggests that the exponential model fits reasonably well. An alternative view of this is shown in the right panel of the figure, which is an exponential Q-Q plot: a plot of the ordered data values $y_{(j)}$ against the standard exponential quantiles

$$F^{-1}\left(\frac{j}{n+1}\right) = -\log\left(1 - \frac{j}{n+1}\right).$$

Figure 1.2: Summary displays for the air-conditioning data. The left panel shows the EDF for the data, $\hat F$ (solid), and the CDF of a fitted exponential distribution (dots). The right panel shows a plot of the ordered failure times against quantiles of the standard exponential distribution, with the fitted exponential model shown as the dotted line.

Although these plots suggest reasonable agreement with the exponential model, the sample is rather too small to have much confidence in this. In the data source the more general gamma model with mean $\mu$ and index $\kappa$ is used; its density is

$$f(y) = \frac{1}{\Gamma(\kappa)} \left(\frac{\kappa}{\mu}\right)^{\kappa} y^{\kappa-1} \exp(-\kappa y/\mu), \qquad y > 0,\ \mu, \kappa > 0. \tag{1.1}$$

For our sample the estimated index is $\hat\kappa = 0.71$, which does not differ significantly ($P = 0.29$) from the value $\kappa = 1$ that corresponds to the exponential model. Our reason for mentioning this will become apparent in Chapter 2.

Basic properties of the estimator $T = \bar Y$ for $\mu$ are easy to obtain theoretically under the exponential model. For example, it is easy to show that $T$ is unbiased and has variance $\mu^2/n$. Approximate confidence intervals for $\mu$ can be calculated using these properties in conjunction with a normal approximation for the distribution of $T$, although this does not work very well: we

can tell this because $\bar Y/\mu$ has an exact gamma distribution, which leads to exact confidence limits. Things are more complicated under the more general gamma model, because the index $\kappa$ is only estimated, and so in a traditional approach we would use approximations, such as a normal approximation for the distribution of $T$, or a chi-squared approximation for the log likelihood ratio statistic. The parametric simulation methods of Section 2.2 can be used alongside these approximations, to diagnose problems with them, or to replace them entirely.

Example 1.2 (City population data). Table 1.3 reports $n = 49$ data pairs, each corresponding to a US city, the pair being the 1920 and 1930 populations of the city, which we denote by $u$ and $x$. The data are plotted in Figure 1.3. Interest here is in the ratio of means, because this would enable us to estimate the total population of the US in 1930 from the 1920 figure. If the cities form a random sample with $(U, X)$ denoting the pair of population values for a randomly selected city, then the total 1930 population is the product of the total 1920 population and the ratio of expectations $\theta = E(X)/E(U)$. This ratio is the parameter of interest.

In this case there is no obvious parametric model for the joint distribution of $(U, X)$, so it is natural to estimate $\theta$ by its empirical analog, $T = \bar X/\bar U$, the ratio of sample averages. We are then concerned with the uncertainty in $T$. If we had a plausible parametric model, for example that the pair $(U, X)$ has a bivariate lognormal distribution, then theoretical calculations like those in Example 1.1 would lead to bias and variance estimates for use in a normal approximation, which in turn would provide approximate confidence intervals for $\theta$. Without such a model we must use nonparametric analysis. It is still possible to estimate the bias and variance of $T$, as we shall see, and this makes normal approximation still feasible, as well as more complex approaches to setting confidence intervals.

Example 1.1 is special in that an exact distribution is available for the statistic of interest and can be used to calculate confidence limits, at least under the exponential model. But for parametric models in general this will not be true. In Section 2.2 we shall show how to use parametric simulation to obtain approximate distributions, either by approximating moments for use in normal approximations, or, when these are inaccurate, directly. In Example 1.2 we make no assumptions about the form of the data distribution. But still, as we shall show in Section 2.3, simulation can be used to obtain properties of $T$, even to approximate its distribution. Much of Chapter 2 is devoted to this.
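The parametric simulation just mentioned can be sketched as follows. This is an illustrative Python sketch, not the book's code; it uses only the quantities quoted in Example 1.1 ($n = 12$, $\bar y = 108.083$), and the seed and number of replicates are arbitrary choices of ours. It checks the simulated variance of $T = \bar Y$ against the theoretical value $\mu^2/n$ under the fitted exponential model.

```python
import random
import statistics

n, ybar = 12, 108.083   # sample size and sample average from Example 1.1

rng = random.Random(42)
R = 5000  # number of simulated data sets

# Under the fitted exponential model, each replicate data set is
# n independent draws from an exponential distribution with mean ybar.
t_star = [statistics.mean([rng.expovariate(1 / ybar) for _ in range(n)])
          for _ in range(R)]

sim_var = statistics.variance(t_star)  # simulation estimate of var(T)
theory_var = ybar ** 2 / n             # exact variance of Ybar under the model
print(round(sim_var, 1), round(theory_var, 1))
```

The empirical distribution of the `t_star` values, not just its variance, is what approximates the distribution of $T$; its quantiles can be compared with a normal approximation to diagnose how well the latter works.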

Table 1.3: Populations, in thousands, of $n = 49$ large United States cities in 1920 ($u$) and in 1930 ($x$) (Cochran, 1977, p. 152).

Figure 1.3: Populations of 49 large United States cities (in thousands) in 1920 and 1930.

### Layout of the Book

Chapter 2 describes the properties of resampling methods for use with single samples from parametric and nonparametric models, discusses practical matters such as the numbers of replicate data sets required, and outlines delta methods for variance approximation based on different forms of jackknife. It also contains a basic discussion of confidence intervals and of the ideas that underlie bootstrap methods.

Chapter 3 outlines how the basic ideas are extended to several samples, semiparametric and smooth models, simple cases where data have hierarchical structure or are sampled from a finite population, and to situations where data are incomplete because censored or missing. It goes on to discuss how the simulation output itself may be used to detect problems, so-called bootstrap diagnostics, and how it may be useful to bootstrap the bootstrap.

In Chapter 4 we review the basic principles of significance testing, and then describe Monte Carlo tests, including those using Markov chain simulation, and parametric bootstrap tests. This is followed by discussion of nonparametric permutation tests, and the more general methods of semi- and nonparametric bootstrap tests. A double bootstrap method is detailed for improved approximation of P-values.

Confidence intervals are the subject of Chapter 5. After outlining basic ideas, we describe how to construct simple confidence intervals based on simulations, and then go on to more complex methods, such as the studentized bootstrap, percentile methods, the double bootstrap and test inversion. The main methods are compared empirically in Section 5.7. Then there are brief accounts of confidence regions for multivariate parameters, and of prediction intervals.

The three subsequent chapters deal with more complex problems. Chapter 6 describes how the basic resampling methods may be applied in linear regression problems, including tests for coefficients, prediction analysis, and variable selection. Chapter 7 deals with more complex regression situations: generalized linear models, other nonlinear models, semi- and nonparametric regression, survival analysis, and classification error. Chapter 8
details methods appropriate for time series, spatial data, and point processes. Chapter 9 describes how variance reduction techniques such as balanced simulation, control variates, and importance sampling can be adapted to yield improved simulations, with the aim of reducing the amount of simulation needed for an answer of given accuracy. It also shows how saddlepoint methods can sometimes be used to avoid simulation entirely. Chapter 10 describes various semiparametric versions of the likelihood function, the ideas underlying which are closely related to resampling methods. It also briefly outlines a Bayesian version of the bootstrap.

Chapters 2-10 contain problems intended to reinforce the reader's understanding of both methods and theory, and in some cases the problems develop topics that could not be included in the text. Some of these demand a knowledge of moments and cumulants, basic facts about which are sketched in the Appendix.

The book also contains practicals that apply resampling routines written in the S language to sets of data. The practicals are intended to reinforce the ideas in each chapter, to supplement the more theoretical problems, and to give examples on which readers can base analyses of their own data. It would be possible to give different sorts of course based on this book. One would be a "theoretical" course based on the problems, and another an "applied" course based on the practicals; we prefer to blend the two. Although a library of routines for use with the statistical package S-Plus is bundled with it, most of the book can be read without reference to particular software packages. Apart from the practicals, the exception to this is Chapter 11, which is a short introduction to the main resampling routines, arranged roughly in the order with which the corresponding ideas appear in earlier chapters. Readers intending to use the bundled routines will find it useful to work through the relevant sections of Chapter 11 before attempting the practicals.

### Notation

Although we believe that our notation is largely standard, there are not enough letters in the English and Greek alphabets for us to be entirely consistent. Greek letters generally denote parameters or other unknowns, while $\alpha$ is used for error rates in connexion with significance tests and confidence sets. English letters $X$, $Y$, $Z$, and so forth are used for random variables, which take values $x$, $y$, $z$. Thus the estimator $T$ has observed value $t$, which may be an estimate of the unknown parameter $\theta$. The letter $V$ is used for a variance estimate, and the letter $p$ for a probability, except for regression models, where $p$ is the number of covariates. Probability, expectation, variance and covariance are denoted $\Pr(\cdot)$, $E(\cdot)$, $\mathrm{var}(\cdot)$ and $\mathrm{cov}(\cdot,\cdot)$, while the joint cumulant of $Y_1$, $Y_2$ and $Y_3$ is denoted $\mathrm{cum}(Y_1, Y_2, Y_3)$. We use $I\{A\}$ to denote the
indicator random variable, which takes the value one if the event $A$ is true and zero otherwise. A related function is the Heaviside function

$$H(u) = \begin{cases} 0, & u < 0, \\ 1, & u \ge 0. \end{cases}$$

We use $\#\{A\}$ to denote the number of elements in the set $A$, and $\#\{A_r\}$ for the number of events $A_r$ that occur in a sequence $A_1, A_2, \ldots$. The data values in a sample of size $n$ are typically denoted by $y_1, \ldots, y_n$, the observed values of the random variables $Y_1, \ldots, Y_n$; their average is $\bar y = n^{-1} \sum y_j$.

We mostly reserve $Z$ for random variables that are standard normal, at least approximately, and use $Q$ for random variables with other (approximately) known distributions. As usual $N(\mu, \sigma^2)$ represents the normal distribution with mean $\mu$ and variance $\sigma^2$, while $z_\alpha$ is often the $\alpha$ quantile of the standard normal distribution. The letter $R$ is reserved for the number of replicate simulations. Simulated copies of a statistic $T$ are denoted $T^*_r$, $r = 1, \ldots, R$, whose ordered values are $T^*_{(1)} \le \cdots \le T^*_{(R)}$. Expectation, variance and probability calculated with respect to the simulation distribution are written $\Pr^{*}(\cdot)$, $E^{*}(\cdot)$ and $\mathrm{var}^{*}(\cdot)$. Where possible we avoid boldface type, and rely on the context to make it plain when we are dealing with vectors or matrices; $a^T$ denotes the matrix transpose of a vector or matrix $a$. We use PDF, CDF, and EDF as shorthand for probability density function, cumulative distribution function, and empirical distribution function. The letters $F$ and $G$ are used for CDFs, and $f$ and $g$ are generally used for the corresponding PDFs. An exception to this is that $f^*_{rj}$ denotes the frequency with which $y_j$ appears in the $r$th resample. The end of each example is marked by one symbol, and the end of each algorithm by another.

## 2 The Basic Bootstraps

### 2.1 Introduction

In this chapter we discuss techniques which are applicable to a single, homogeneous sample of data, denoted by $y_1, \ldots, y_n$. The sample values are thought of as the outcomes of independent and identically distributed random variables $Y_1, \ldots, Y_n$ whose probability density function (PDF) and cumulative distribution function (CDF) we shall denote by $f$ and $F$, respectively. The sample is to be used to make inferences about a population characteristic, generically denoted by $\theta$, using a statistic $T$ whose value in the sample is $t$. We assume for the moment that the choice of $T$ has been made and that it is an estimate for $\theta$, which we take to be a scalar.

Our attention is focussed on questions concerning the probability distribution of $T$. For example, what are its bias, its standard error, or its quantiles? What are likely values under a certain null hypothesis of interest? How do we calculate confidence limits for $\theta$ using $T$?

There are two situations to distinguish, the parametric and the nonparametric. When there is a particular mathematical model, with adjustable constants or parameters $\psi$ that fully determine $f$, such a model is called parametric and statistical methods based on this model are parametric methods. In this case the parameter of interest $\theta$ is a component of or function of $\psi$. When no such mathematical model is used, the statistical analysis is nonparametric, and uses only the fact that the random variables $Y_j$ are independent and identically distributed. Even if there is a plausible parametric model, a nonparametric analysis can still be useful to assess the robustness of conclusions drawn from a parametric analysis.

An important role is played in nonparametric analysis by the empirical distribution, which puts equal probabilities $n^{-1}$ at each sample value $y_j$. The corresponding estimate of $F$ is the empirical distribution function (EDF) $\hat F$.
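The EDF is simple enough to compute directly. The sketch below is an illustration in Python (the book's own routines are in the S language), using a small made-up sample.

```python
def edf(y):
    """Return the EDF of the sample y: Fhat(x) = #{y_j <= x} / n."""
    n = len(y)
    def Fhat(x):
        # Count the sample values at or below x; each carries mass 1/n.
        return sum(yj <= x for yj in y) / n
    return Fhat

Fhat = edf([3.0, 1.0, 4.0, 1.0, 5.0])
print(Fhat(0.5), Fhat(1.0), Fhat(4.0), Fhat(10.0))  # -> 0.0 0.4 0.8 1.0
```

Note how the repeated value 1.0 receives total probability 2/5: the EDF automatically assigns mass proportional to sample frequency at each distinct value.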

The EDF is defined as the sample proportion

$$\hat F(y) = \frac{\#\{y_j \le y\}}{n},$$

or more formally

$$\hat F(y) = \frac{1}{n} \sum_{j=1}^{n} H(y - y_j), \tag{2.1}$$

where $H(u)$ is the unit step function which jumps from 0 to 1 at $u = 0$, and $\#\{A\}$ means the number of times the event $A$ occurs. Notice that the values of the EDF are fixed at $0, 1/n, 2/n, \ldots, 1$, so the EDF is equivalent to its points of increase, the ordered values $y_{(1)} \le \cdots \le y_{(n)}$ of the data. An example of the EDF was shown in the left panel of Figure 1.2.

When there are repeat values in the sample, as would often occur with discrete data, the EDF assigns probabilities proportional to the sample frequencies at each distinct observed value $y$. The formal definition (2.1) still applies. The EDF plays the role of fitted model when no mathematical form is assumed for $F$, analogous to a parametric CDF with parameters replaced by their estimates.

#### 2.1.1 Statistical functions

Many simple statistics can be thought of in terms of properties of the EDF. For example, the sample average $\bar y = n^{-1} \sum y_j$ is the mean of the EDF; see Example 2.1 below. More generally, the statistic of interest $t$ will be a symmetric function of $y_1, \ldots, y_n$, meaning that $t$ is unaffected by reordering the data. This implies that $t$ depends only on the ordered values $y_{(1)} \le \cdots \le y_{(n)}$, or equivalently on the EDF $\hat F$. Often this can be expressed simply as $t = t(\hat F)$, where $t(\cdot)$ is a statistical function, essentially just a mathematical expression of the algorithm for computing $t$ from $\hat F$.

Such a statistical function is of central importance in the nonparametric case because it also defines the parameter of interest through the "algorithm" $\theta = t(F)$. This corresponds to the qualitative idea that $\theta$ is a characteristic of the population described by $F$. Simple examples of such functions are the mean and variance of $Y$, which are respectively defined as

$$t(F) = \int y \, dF(y), \qquad t(F) = \int y^2 \, dF(y) - \left\{ \int y \, dF(y) \right\}^2. \tag{2.2}$$

The same definition of $\theta$ applies in parametric problems, although then $\theta$ is more usually defined explicitly as one of the model parameters. The relationship between the estimate $t$ and $\hat F$ can usually be expressed as $t = t(\hat F)$, corresponding to the relation $\theta = t(F)$ between the characteristic of interest and the underlying distribution. The statistical function $t(\cdot)$ defines


### BNG 202 Biomechanics Lab. Descriptive statistics and probability distributions I

BNG 202 Biomechanics Lab Descriptive statistics and probability distributions I Overview The overall goal of this short course in statistics is to provide an introduction to descriptive and inferential

### LOGNORMAL MODEL FOR STOCK PRICES

LOGNORMAL MODEL FOR STOCK PRICES MICHAEL J. SHARPE MATHEMATICS DEPARTMENT, UCSD 1. INTRODUCTION What follows is a simple but important model that will be the basis for a later study of stock prices as

### 6 Scalar, Stochastic, Discrete Dynamic Systems

47 6 Scalar, Stochastic, Discrete Dynamic Systems Consider modeling a population of sand-hill cranes in year n by the first-order, deterministic recurrence equation y(n + 1) = Ry(n) where R = 1 + r = 1

### BASIC STATISTICAL METHODS FOR GENOMIC DATA ANALYSIS

BASIC STATISTICAL METHODS FOR GENOMIC DATA ANALYSIS SEEMA JAGGI Indian Agricultural Statistics Research Institute Library Avenue, New Delhi-110 012 seema@iasri.res.in Genomics A genome is an organism s

### SUMAN DUVVURU STAT 567 PROJECT REPORT

SUMAN DUVVURU STAT 567 PROJECT REPORT SURVIVAL ANALYSIS OF HEROIN ADDICTS Background and introduction: Current illicit drug use among teens is continuing to increase in many countries around the world.

### 1 Another method of estimation: least squares

1 Another method of estimation: least squares erm: -estim.tex, Dec8, 009: 6 p.m. (draft - typos/writos likely exist) Corrections, comments, suggestions welcome. 1.1 Least squares in general Assume Y i

### Geostatistics Exploratory Analysis

Instituto Superior de Estatística e Gestão de Informação Universidade Nova de Lisboa Master of Science in Geospatial Technologies Geostatistics Exploratory Analysis Carlos Alberto Felgueiras cfelgueiras@isegi.unl.pt

### From the help desk: Bootstrapped standard errors

The Stata Journal (2003) 3, Number 1, pp. 71 80 From the help desk: Bootstrapped standard errors Weihua Guan Stata Corporation Abstract. Bootstrapping is a nonparametric approach for evaluating the distribution

### International College of Economics and Finance Syllabus Probability Theory and Introductory Statistics

International College of Economics and Finance Syllabus Probability Theory and Introductory Statistics Lecturer: Mikhail Zhitlukhin. 1. Course description Probability Theory and Introductory Statistics

### Analysis of Bayesian Dynamic Linear Models

Analysis of Bayesian Dynamic Linear Models Emily M. Casleton December 17, 2010 1 Introduction The main purpose of this project is to explore the Bayesian analysis of Dynamic Linear Models (DLMs). The main

### Nonparametric statistics and model selection

Chapter 5 Nonparametric statistics and model selection In Chapter, we learned about the t-test and its variations. These were designed to compare sample means, and relied heavily on assumptions of normality.

### SYSM 6304: Risk and Decision Analysis Lecture 3 Monte Carlo Simulation

SYSM 6304: Risk and Decision Analysis Lecture 3 Monte Carlo Simulation M. Vidyasagar Cecil & Ida Green Chair The University of Texas at Dallas Email: M.Vidyasagar@utdallas.edu September 19, 2015 Outline

### Permutation Tests for Comparing Two Populations

Permutation Tests for Comparing Two Populations Ferry Butar Butar, Ph.D. Jae-Wan Park Abstract Permutation tests for comparing two populations could be widely used in practice because of flexibility of

### Introduction to General and Generalized Linear Models

Introduction to General and Generalized Linear Models General Linear Models - part I Henrik Madsen Poul Thyregod Informatics and Mathematical Modelling Technical University of Denmark DK-2800 Kgs. Lyngby

### Supplement to Call Centers with Delay Information: Models and Insights

Supplement to Call Centers with Delay Information: Models and Insights Oualid Jouini 1 Zeynep Akşin 2 Yves Dallery 1 1 Laboratoire Genie Industriel, Ecole Centrale Paris, Grande Voie des Vignes, 92290

### Econometric Analysis of Cross Section and Panel Data Second Edition. Jeffrey M. Wooldridge. The MIT Press Cambridge, Massachusetts London, England

Econometric Analysis of Cross Section and Panel Data Second Edition Jeffrey M. Wooldridge The MIT Press Cambridge, Massachusetts London, England Preface Acknowledgments xxi xxix I INTRODUCTION AND BACKGROUND

### Review of Random Variables

Chapter 1 Review of Random Variables Updated: January 16, 2015 This chapter reviews basic probability concepts that are necessary for the modeling and statistical analysis of financial data. 1.1 Random

### The Best of Both Worlds:

The Best of Both Worlds: A Hybrid Approach to Calculating Value at Risk Jacob Boudoukh 1, Matthew Richardson and Robert F. Whitelaw Stern School of Business, NYU The hybrid approach combines the two most

### ROC Graphs: Notes and Practical Considerations for Data Mining Researchers

ROC Graphs: Notes and Practical Considerations for Data Mining Researchers Tom Fawcett 1 1 Intelligent Enterprise Technologies Laboratory, HP Laboratories Palo Alto Alexandre Savio (GIC) ROC Graphs GIC

### Regression Modeling Strategies

Frank E. Harrell, Jr. Regression Modeling Strategies With Applications to Linear Models, Logistic Regression, and Survival Analysis With 141 Figures Springer Contents Preface Typographical Conventions

### A Semi-parametric Approach for Decomposition of Absorption Spectra in the Presence of Unknown Components

A Semi-parametric Approach for Decomposition of Absorption Spectra in the Presence of Unknown Components Payman Sadegh 1,2, Henrik Aalborg Nielsen 1, and Henrik Madsen 1 Abstract Decomposition of absorption

### Nonparametric tests, Bootstrapping

Nonparametric tests, Bootstrapping http://www.isrec.isb-sib.ch/~darlene/embnet/ Hypothesis testing review 2 competing theories regarding a population parameter: NULL hypothesis H ( straw man ) ALTERNATIVEhypothesis

### Probability & Statistics Primer Gregory J. Hakim University of Washington 2 January 2009 v2.0

Probability & Statistics Primer Gregory J. Hakim University of Washington 2 January 2009 v2.0 This primer provides an overview of basic concepts and definitions in probability and statistics. We shall

### Recall this chart that showed how most of our course would be organized:

Chapter 4 One-Way ANOVA Recall this chart that showed how most of our course would be organized: Explanatory Variable(s) Response Variable Methods Categorical Categorical Contingency Tables Categorical

### Chapter 7 Section 7.1: Inference for the Mean of a Population

Chapter 7 Section 7.1: Inference for the Mean of a Population Now let s look at a similar situation Take an SRS of size n Normal Population : N(, ). Both and are unknown parameters. Unlike what we used

### NCSS Statistical Software

Chapter 06 Introduction This procedure provides several reports for the comparison of two distributions, including confidence intervals for the difference in means, two-sample t-tests, the z-test, the

### On Small Sample Properties of Permutation Tests: A Significance Test for Regression Models

On Small Sample Properties of Permutation Tests: A Significance Test for Regression Models Hisashi Tanizaki Graduate School of Economics Kobe University (tanizaki@kobe-u.ac.p) ABSTRACT In this paper we

### business statistics using Excel OXFORD UNIVERSITY PRESS Glyn Davis & Branko Pecar

business statistics using Excel Glyn Davis & Branko Pecar OXFORD UNIVERSITY PRESS Detailed contents Introduction to Microsoft Excel 2003 Overview Learning Objectives 1.1 Introduction to Microsoft Excel

### A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution

A Primer on Mathematical Statistics and Univariate Distributions; The Normal Distribution; The GLM with the Normal Distribution PSYC 943 (930): Fundamentals of Multivariate Modeling Lecture 4: September

### Tutorial 5: Hypothesis Testing

Tutorial 5: Hypothesis Testing Rob Nicholls nicholls@mrc-lmb.cam.ac.uk MRC LMB Statistics Course 2014 Contents 1 Introduction................................ 1 2 Testing distributional assumptions....................

### Inferential Statistics

Inferential Statistics Sampling and the normal distribution Z-scores Confidence levels and intervals Hypothesis testing Commonly used statistical methods Inferential Statistics Descriptive statistics are

### Service courses for graduate students in degree programs other than the MS or PhD programs in Biostatistics.

Course Catalog In order to be assured that all prerequisites are met, students must acquire a permission number from the education coordinator prior to enrolling in any Biostatistics course. Courses are

### Handling attrition and non-response in longitudinal data

Longitudinal and Life Course Studies 2009 Volume 1 Issue 1 Pp 63-72 Handling attrition and non-response in longitudinal data Harvey Goldstein University of Bristol Correspondence. Professor H. Goldstein

### The Big 50 Revision Guidelines for S1

The Big 50 Revision Guidelines for S1 If you can understand all of these you ll do very well 1. Know what is meant by a statistical model and the Modelling cycle of continuous refinement 2. Understand

Statistics Graduate Courses STAT 7002--Topics in Statistics-Biological/Physical/Mathematics (cr.arr.).organized study of selected topics. Subjects and earnable credit may vary from semester to semester.

### INDIRECT INFERENCE (prepared for: The New Palgrave Dictionary of Economics, Second Edition)

INDIRECT INFERENCE (prepared for: The New Palgrave Dictionary of Economics, Second Edition) Abstract Indirect inference is a simulation-based method for estimating the parameters of economic models. Its

### NCSS Statistical Software

Chapter 06 Introduction This procedure provides several reports for the comparison of two distributions, including confidence intervals for the difference in means, two-sample t-tests, the z-test, the

### Hedging Illiquid FX Options: An Empirical Analysis of Alternative Hedging Strategies

Hedging Illiquid FX Options: An Empirical Analysis of Alternative Hedging Strategies Drazen Pesjak Supervised by A.A. Tsvetkov 1, D. Posthuma 2 and S.A. Borovkova 3 MSc. Thesis Finance HONOURS TRACK Quantitative

### Learning Objective. Simple Hypothesis Testing

Exploring Confidence Intervals and Simple Hypothesis Testing 35 A confidence interval describes the amount of uncertainty associated with a sample estimate of a population parameter. 95% confidence level

### A teaching experience through the development of hypertexts and object oriented software.

A teaching experience through the development of hypertexts and object oriented software. Marcello Chiodi Istituto di Statistica. Facoltà di Economia-Palermo-Italy 1. Introduction (1) This paper is concerned

### Poisson Models for Count Data

Chapter 4 Poisson Models for Count Data In this chapter we study log-linear models for count data under the assumption of a Poisson error structure. These models have many applications, not only to the

### Business Statistics. Successful completion of Introductory and/or Intermediate Algebra courses is recommended before taking Business Statistics.

Business Course Text Bowerman, Bruce L., Richard T. O'Connell, J. B. Orris, and Dawn C. Porter. Essentials of Business, 2nd edition, McGraw-Hill/Irwin, 2008, ISBN: 978-0-07-331988-9. Required Computing

### Data Analysis Tools. Tools for Summarizing Data

Data Analysis Tools This section of the notes is meant to introduce you to many of the tools that are provided by Excel under the Tools/Data Analysis menu item. If your computer does not have that tool

### 15.1 Least Squares as a Maximum Likelihood Estimator

15.1 Least Squares as a Maximum Likelihood Estimator 651 should provide (i) parameters, (ii) error estimates on the parameters, and (iii) a statistical measure of goodness-of-fit. When the third item suggests

### CA200 Quantitative Analysis for Business Decisions. File name: CA200_Section_04A_StatisticsIntroduction

CA200 Quantitative Analysis for Business Decisions File name: CA200_Section_04A_StatisticsIntroduction Table of Contents 4. Introduction to Statistics... 1 4.1 Overview... 3 4.2 Discrete or continuous

### Introduction to Logistic Regression

OpenStax-CNX module: m42090 1 Introduction to Logistic Regression Dan Calderon This work is produced by OpenStax-CNX and licensed under the Creative Commons Attribution License 3.0 Abstract Gives introduction

### Revenue Management with Correlated Demand Forecasting

Revenue Management with Correlated Demand Forecasting Catalina Stefanescu Victor DeMiguel Kristin Fridgeirsdottir Stefanos Zenios 1 Introduction Many airlines are struggling to survive in today's economy.

### R 2 -type Curves for Dynamic Predictions from Joint Longitudinal-Survival Models

Faculty of Health Sciences R 2 -type Curves for Dynamic Predictions from Joint Longitudinal-Survival Models Inference & application to prediction of kidney graft failure Paul Blanche joint work with M-C.

### http://www.jstor.org This content downloaded on Tue, 19 Feb 2013 17:28:43 PM All use subject to JSTOR Terms and Conditions

A Significance Test for Time Series Analysis Author(s): W. Allen Wallis and Geoffrey H. Moore Reviewed work(s): Source: Journal of the American Statistical Association, Vol. 36, No. 215 (Sep., 1941), pp.

### PREDICTIVE DISTRIBUTIONS OF OUTSTANDING LIABILITIES IN GENERAL INSURANCE

PREDICTIVE DISTRIBUTIONS OF OUTSTANDING LIABILITIES IN GENERAL INSURANCE BY P.D. ENGLAND AND R.J. VERRALL ABSTRACT This paper extends the methods introduced in England & Verrall (00), and shows how predictive

### Chapter 3 RANDOM VARIATE GENERATION

Chapter 3 RANDOM VARIATE GENERATION In order to do a Monte Carlo simulation either by hand or by computer, techniques must be developed for generating values of random variables having known distributions.

### Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures

Testing Group Differences using T-tests, ANOVA, and Nonparametric Measures Jamie DeCoster Department of Psychology University of Alabama 348 Gordon Palmer Hall Box 870348 Tuscaloosa, AL 35487-0348 Phone:

### WEEK #22: PDFs and CDFs, Measures of Center and Spread

WEEK #22: PDFs and CDFs, Measures of Center and Spread Goals: Explore the effect of independent events in probability calculations. Present a number of ways to represent probability distributions. Textbook

### Probability and Statistics

CHAPTER 2: RANDOM VARIABLES AND ASSOCIATED FUNCTIONS 2b - 0 Probability and Statistics Kristel Van Steen, PhD 2 Montefiore Institute - Systems and Modeling GIGA - Bioinformatics ULg kristel.vansteen@ulg.ac.be

### Lecture 2: Descriptive Statistics and Exploratory Data Analysis

Lecture 2: Descriptive Statistics and Exploratory Data Analysis Further Thoughts on Experimental Design 16 Individuals (8 each from two populations) with replicates Pop 1 Pop 2 Randomly sample 4 individuals

### A Coefficient of Variation for Skewed and Heavy-Tailed Insurance Losses. Michael R. Powers[ 1 ] Temple University and Tsinghua University

A Coefficient of Variation for Skewed and Heavy-Tailed Insurance Losses Michael R. Powers[ ] Temple University and Tsinghua University Thomas Y. Powers Yale University [June 2009] Abstract We propose a

### The Binomial Distribution

The Binomial Distribution James H. Steiger November 10, 00 1 Topics for this Module 1. The Binomial Process. The Binomial Random Variable. The Binomial Distribution (a) Computing the Binomial pdf (b) Computing

### Dongfeng Li. Autumn 2010

Autumn 2010 Chapter Contents Some statistics background; ; Comparing means and proportions; variance. Students should master the basic concepts, descriptive statistics measures and graphs, basic hypothesis

### MATH4427 Notebook 2 Spring 2016. 2 MATH4427 Notebook 2 3. 2.1 Definitions and Examples... 3. 2.2 Performance Measures for Estimators...

MATH4427 Notebook 2 Spring 2016 prepared by Professor Jenny Baglivo c Copyright 2009-2016 by Jenny A. Baglivo. All Rights Reserved. Contents 2 MATH4427 Notebook 2 3 2.1 Definitions and Examples...................................

### Normality Testing in Excel

Normality Testing in Excel By Mark Harmon Copyright 2011 Mark Harmon No part of this publication may be reproduced or distributed without the express permission of the author. mark@excelmasterseries.com

### CHI-Squared Test of Independence

CHI-Squared Test of Independence Minhaz Fahim Zibran Department of Computer Science University of Calgary, Alberta, Canada. Email: mfzibran@ucalgary.ca Abstract Chi-square (X 2 ) test is a nonparametric

### SPSS TRAINING SESSION 3 ADVANCED TOPICS (PASW STATISTICS 17.0) Sun Li Centre for Academic Computing lsun@smu.edu.sg

SPSS TRAINING SESSION 3 ADVANCED TOPICS (PASW STATISTICS 17.0) Sun Li Centre for Academic Computing lsun@smu.edu.sg IN SPSS SESSION 2, WE HAVE LEARNT: Elementary Data Analysis Group Comparison & One-way

### Numerical Summarization of Data OPRE 6301

Numerical Summarization of Data OPRE 6301 Motivation... In the previous session, we used graphical techniques to describe data. For example: While this histogram provides useful insight, other interesting

### BayesX - Software for Bayesian Inference in Structured Additive Regression

BayesX - Software for Bayesian Inference in Structured Additive Regression Thomas Kneib Faculty of Mathematics and Economics, University of Ulm Department of Statistics, Ludwig-Maximilians-University Munich

### Data analysis in supersaturated designs

Statistics & Probability Letters 59 (2002) 35 44 Data analysis in supersaturated designs Runze Li a;b;, Dennis K.J. Lin a;b a Department of Statistics, The Pennsylvania State University, University Park,

### Master s Theory Exam Spring 2006

Spring 2006 This exam contains 7 questions. You should attempt them all. Each question is divided into parts to help lead you through the material. You should attempt to complete as much of each problem

### A Software Tool for. Automatically Veried Operations on. Intervals and Probability Distributions. Daniel Berleant and Hang Cheng

A Software Tool for Automatically Veried Operations on Intervals and Probability Distributions Daniel Berleant and Hang Cheng Abstract We describe a software tool for performing automatically veried arithmetic

### Current Standard: Mathematical Concepts and Applications Shape, Space, and Measurement- Primary

Shape, Space, and Measurement- Primary A student shall apply concepts of shape, space, and measurement to solve problems involving two- and three-dimensional shapes by demonstrating an understanding of:

### Advanced Spatial Statistics Fall 2012 NCSU. Fuentes Lecture notes

Advanced Spatial Statistics Fall 2012 NCSU Fuentes Lecture notes Areal unit data 2 Areal Modelling Areal unit data Key Issues Is there spatial pattern? Spatial pattern implies that observations from units

### 6. Distribution and Quantile Functions

Virtual Laboratories > 2. Distributions > 1 2 3 4 5 6 7 8 6. Distribution and Quantile Functions As usual, our starting point is a random experiment with probability measure P on an underlying sample spac

### INTRODUCTORY STATISTICS

INTRODUCTORY STATISTICS FIFTH EDITION Thomas H. Wonnacott University of Western Ontario Ronald J. Wonnacott University of Western Ontario WILEY JOHN WILEY & SONS New York Chichester Brisbane Toronto Singapore

### The Delta Method and Applications

Chapter 5 The Delta Method and Applications 5.1 Linear approximations of functions In the simplest form of the central limit theorem, Theorem 4.18, we consider a sequence X 1, X,... of independent and

### CONTENTS OF DAY 2. II. Why Random Sampling is Important 9 A myth, an urban legend, and the real reason NOTES FOR SUMMER STATISTICS INSTITUTE COURSE

1 2 CONTENTS OF DAY 2 I. More Precise Definition of Simple Random Sample 3 Connection with independent random variables 3 Problems with small populations 8 II. Why Random Sampling is Important 9 A myth,