Estimation of average treatment effects based on propensity scores
LABORatorio R. Revelli, Centre for Employment Studies

Estimation of average treatment effects based on propensity scores

Sascha O. Becker (University of Munich)
Andrea Ichino (EUI)

Collegio "Carlo Alberto", via Real Collegio, Moncalieri (TO). LABOR is an independent research centre within Coripe Piemonte.
Estimation of average treatment effects based on propensity scores

Sascha O. Becker, University of Munich
Andrea Ichino, EUI

Abstract. In this paper, we give a short overview of some propensity score matching estimators suggested in the evaluation literature, and we provide a set of Stata programs, which we illustrate using the National Supported Work (NSW) demonstration, widely known in labor economics.

Keywords: propensity score, matching, average treatment effect, evaluation

1 Introduction

In the evaluation literature, data often do not come from randomized trials but from (non-randomized) observational studies. In seminal work, Rosenbaum and Rubin (1983) proposed propensity score matching as a method to reduce the bias in the estimation of treatment effects with observational data sets. These methods have become increasingly popular in medical trials and in the evaluation of economic policy interventions.

Since in observational studies assignment of subjects to the treatment and control groups is not random, the estimation of the effect of treatment may be biased by the existence of confounding factors. Propensity score matching is a way to correct the estimation of treatment effects that controls for the existence of these confounding factors, based on the idea that the bias is reduced when the comparison of outcomes is performed using treated and control subjects who are as similar as possible. Since matching subjects on an n-dimensional vector of characteristics is typically unfeasible for large n, this method summarizes the pre-treatment characteristics of each subject into a single-index variable (the propensity score), which makes the matching feasible.

In this paper, we give a short overview of some propensity score matching estimators suggested in the evaluation literature, and we provide a set of Stata programs, which we illustrate using the National Supported Work (NSW) demonstration, widely known in labor economics. In using these programs, it should be kept in mind that they only allow the researcher to reduce, not to eliminate, the bias generated by unobservable confounding factors. The extent to which this bias is reduced depends crucially on the richness and quality of the control variables on which the propensity score is computed and the matching performed. To be more precise, the bias is eliminated only if the exposure to treatment can be considered to be purely random among individuals who have the same value of the propensity score.
2 The propensity score

The propensity score is defined by Rosenbaum and Rubin (1983) as the conditional probability of receiving a treatment given pre-treatment characteristics:

\[ p(X) \equiv \Pr\{D = 1 \mid X\} = E\{D \mid X\}, \qquad (1) \]

where $D = \{0, 1\}$ is the indicator of exposure to treatment and $X$ is the multidimensional vector of pre-treatment characteristics. Rosenbaum and Rubin (1983) show that if the exposure to treatment is random within cells defined by $X$, it is also random within cells defined by the values of the one-dimensional variable $p(X)$. As a result, given a population of units denoted by $i$, if the propensity score $p(X_i)$ is known, the average effect of treatment on the treated (ATT) can be estimated as follows:

\[
\begin{aligned}
\tau &\equiv E\{Y_{1i} - Y_{0i} \mid D_i = 1\} \\
&= E\{\, E\{Y_{1i} - Y_{0i} \mid D_i = 1, p(X_i)\} \,\} \\
&= E\{\, E\{Y_{1i} \mid D_i = 1, p(X_i)\} - E\{Y_{0i} \mid D_i = 0, p(X_i)\} \mid D_i = 1 \,\},
\end{aligned} \qquad (2)
\]

where the outer expectation is over the distribution of $(p(X_i) \mid D_i = 1)$, and $Y_{1i}$ and $Y_{0i}$ are the potential outcomes in the two counterfactual situations of (respectively) treatment and no treatment.

Formally, the following two hypotheses are needed to derive (2) given (1).[1]

Lemma 1 (Balancing of pre-treatment variables given the propensity score). If $p(X)$ is the propensity score, then

\[ D \perp X \mid p(X). \qquad (3) \]

Lemma 2 (Unconfoundedness given the propensity score). Suppose that assignment to treatment is unconfounded, i.e.,

\[ Y_1, Y_0 \perp D \mid X. \qquad (4) \]

Then assignment to treatment is unconfounded given the propensity score, i.e.,

\[ Y_1, Y_0 \perp D \mid p(X). \qquad (5) \]

If the Balancing Hypothesis of Lemma 1 is satisfied, observations with the same propensity score must have the same distribution of observable (and unobservable) characteristics independently of treatment status. In other words, for a given propensity score, exposure to treatment is random, and therefore treated and control units should be on average observationally identical. Any standard probability model can be used to estimate the propensity score, for example, $\Pr\{D_i = 1 \mid X_i\} = F(h(X_i))$, where $F(\cdot)$ is the normal or the logistic cumulative distribution function and $h(X_i)$ is a function of covariates with linear and higher-order terms. The choice of which higher-order terms to include is determined solely by the need to obtain an estimate of the propensity score that satisfies the Balancing Hypothesis.

[1] See Rosenbaum and Rubin (1983) or Imbens (2000) for a proof.
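To see this estimation step in isolation from the programs described below, here is a minimal Stata sketch, assuming a treatment indicator D and covariates x1 and x2 already in memory (all variable names here are hypothetical):

. * Minimal sketch: estimate Pr{D=1|X} with a probit, including one squared
. * term, and save the predicted probability as the estimated propensity score.
. generate x1sq = x1^2
. probit D x1 x2 x1sq
. predict double ps, pr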
Inasmuch as the specification of $h(X_i)$ that satisfies the Balancing Hypothesis is more parsimonious than the full set of interactions needed to match cases and controls on the basis of observables, the propensity score reduces the dimensionality problem of matching treated and control units on the basis of the multidimensional vector $X$.[2]

The program pscore.ado estimates the propensity score and tests the Balancing Hypothesis (Lemma 1) according to the following algorithm:[3]

1. Estimate the probit (or logit) model

\[ \Pr\{D_i = 1 \mid X_i\} = \Phi(h(X_i)), \qquad (6) \]

where $\Phi$ denotes the normal (logistic) c.d.f. and $h(X_i)$ is a starting specification that includes all the covariates as linear terms, without interactions or higher-order terms.

2. Split the sample into k equally spaced intervals of the propensity score, where k is determined by the user (the default is 5).

3. Within each interval, test that the average propensity scores of treated and control units do not differ.

4. If the test fails in one interval, split the interval in half and test again.

5. Continue until, in all intervals, the average propensity scores of treated and control units do not differ.

6. Within each interval, test that the means of each characteristic do not differ between treated and control units. This is a necessary condition for the Balancing Hypothesis.[4]

7. If the means of one or more characteristics differ, inform the user that the balancing property is not satisfied and that a less parsimonious specification of $h(X_i)$ is needed.

Steps 2-7 of the algorithm can be restricted to the common support. This restriction implies that the test of the balancing property is performed only on the observations whose propensity score belongs to the intersection of the supports of the propensity score of treated and controls. Imposing the common support condition in the estimation of the propensity score may improve the quality of the matches used to estimate the ATT.[5]

[2] It is important to note that the outcome plays no role in the algorithm for the estimation of the propensity score. This is equivalent, in this context, to what happens in controlled experiments, in which the design of the experiment has to be specified independently of the outcome.

[3] Note that the Unconfoundedness Hypothesis of Lemma 2 cannot be tested.

[4] Note that it is not sufficient, in the sense that balancing may not hold for higher-order moments of the distribution of characteristics. So, to be precise, the program does not test the Balancing Hypothesis but only one of its implications. In future versions of the program we plan to add tests for higher moments of the distribution of characteristics.

[5] See the next section for a further discussion of the common support condition.
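The within-interval tests in steps 3 and 6 are ordinary two-sample comparisons of means. As a rough illustration of what such a test looks like (this is not the internal code of pscore), suppose pscore has been run with pscore(mypscore) and blockid(myblock); balancing inside, say, block 3 could then be checked by hand:

. * Illustrative within-block balancing checks (the block number is hypothetical).
. ttest mypscore if myblock==3, by(T)   // step 3: average score, treated vs. controls
. ttest age if myblock==3, by(T)        // step 6: mean of one covariate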
3 Matching estimators of the ATT based on the propensity score

An estimate of the propensity score is not enough to estimate the ATT of interest using equation (2). The reason is that the probability of observing two units with exactly the same value of the propensity score is in principle zero, since $p(X)$ is a continuous variable. Various methods have been proposed in the literature to overcome this problem, and four of the most widely used are Nearest Neighbor Matching, Radius Matching, Kernel Matching, and Stratification Matching.

Beginning with the latter, the Stratification method consists of dividing the range of variation of the propensity score into intervals such that, within each interval, treated and control units have on average the same propensity score. For practical purposes, the same blocks identified by the algorithm that estimates the propensity score can be used. Then, within each interval in which both treated and control units are present, the difference between the average outcomes of the treated and the controls is computed. The ATT of interest is finally obtained as an average of the ATT of each block, with weights given by the distribution of treated units across blocks.

One of the pitfalls of the Stratification method is that it discards observations in blocks where either treated or control units are absent. This observation suggests an alternative way to match treated and control units, which consists of taking each treated unit and searching for the control unit with the closest propensity score, i.e., the Nearest Neighbor. Although it is not necessary, the method is usually applied with replacement, in the sense that a control unit can be a best match for more than one treated unit. Once each treated unit is matched with a control unit, the difference between the outcome of the treated units and the outcome of the matched control units is computed. The ATT of interest is then obtained by averaging these differences.

While in the case of the Stratification method there may be treated units that are discarded because no control is available in their block, in the case of the Nearest Neighbor method all treated units find a match. However, some of these matches may be fairly poor, because for some treated units the nearest neighbor may have a very different propensity score, and it would nevertheless contribute to the estimation of the treatment effect independently of this difference. The Radius Matching and Kernel Matching methods offer a solution to this problem. With Radius Matching, each treated unit is matched only with the control units whose propensity score falls in a predefined neighborhood of the propensity score of the treated unit. If the dimension of the neighborhood (i.e., the radius) is set to be very small, it is possible that some treated units are not matched because the neighborhood does not contain control units. On the other hand, the smaller the size of the neighborhood, the better the quality of the matches. With Kernel Matching, all treated units are matched with a weighted average of all controls, with weights that are inversely proportional to the distance between the propensity scores of treated and controls.

It is clear from the above considerations that these four methods reach different points on the frontier of the tradeoff between quality and quantity of the matches, and none of them is a priori superior to the others.
Their joint consideration, however, offers a way to assess the robustness of the estimates. It should also be noted that with all these methods the quality of the matches may be improved by imposing the common support restriction. Note, however, that in this way high-quality matches may be lost at the boundaries of the common support and the sample may be considerably reduced, so imposing the common support restriction is not necessarily better (see Lechner (2001a)). All of our programs allow for the common support option, as discussed below.

We now proceed to a more detailed and formal description of these estimators. We start with the joint analysis of Nearest Neighbor Matching and Radius Matching, which can be described in a common framework, moving next to Kernel Matching and Stratification Matching.

Nearest Neighbor Matching (attnd.ado and attnw.ado) and Radius Matching (attr.ado)

Let $T$ be the set of treated units and $C$ the set of control units, and let $Y_i^T$ and $Y_j^C$ be the observed outcomes of the treated and control units, respectively. Denote by $C(i)$ the set of control units matched to the treated unit $i$, which has an estimated value of the propensity score of $p_i$. Nearest Neighbor Matching sets

\[ C(i) = \min_j \; \| p_i - p_j \|, \qquad (7) \]

which is a singleton set unless there are multiple nearest neighbors. In practice, the case of multiple nearest neighbors should be very rare, in particular if the set of characteristics $X$ contains continuous variables; the likelihood of multiple nearest neighbors is further reduced if the propensity score is estimated and saved in double precision.

In Radius Matching,

\[ C(i) = \{\, p_j \mid \| p_i - p_j \| < r \,\}, \qquad (8) \]

i.e., all the control units with estimated propensity scores falling within a radius $r$ from $p_i$ are matched to the treated unit $i$.

In both Nearest Neighbor and Radius Matching, denote the number of controls matched with observation $i \in T$ by $N_i^C$, and define the weights $w_{ij} = \frac{1}{N_i^C}$ if $j \in C(i)$ and $w_{ij} = 0$ otherwise. Then the formula for both types of matching estimators can be written as follows (where $M$ stands for either Nearest Neighbor Matching or Radius Matching and the number of units in the treated group is denoted by $N^T$):

\[
\begin{aligned}
\tau^M &= \frac{1}{N^T} \sum_{i \in T} \left[ Y_i^T - \sum_{j \in C(i)} w_{ij} Y_j^C \right] \\
&= \frac{1}{N^T} \left[ \sum_{i \in T} Y_i^T - \sum_{i \in T} \sum_{j \in C(i)} w_{ij} Y_j^C \right] \\
&= \frac{1}{N^T} \sum_{i \in T} Y_i^T - \frac{1}{N^T} \sum_{j \in C} w_j Y_j^C,
\end{aligned} \qquad (9)
\]

where the weights $w_j$ are defined by $w_j = \sum_i w_{ij}$. To derive the variances of these estimators, the weights are assumed to be fixed and the outcomes are assumed to be independent across units:

\[
\begin{aligned}
\mathrm{Var}(\tau^M) &= \frac{1}{(N^T)^2} \left[ \sum_{i \in T} \mathrm{Var}(Y_i^T) + \sum_{j \in C} (w_j)^2 \, \mathrm{Var}(Y_j^C) \right] \\
&= \frac{1}{(N^T)^2} \left[ N^T \, \mathrm{Var}(Y_i^T) + \sum_{j \in C} (w_j)^2 \, \mathrm{Var}(Y_j^C) \right] \\
&= \frac{1}{N^T} \mathrm{Var}(Y_i^T) + \frac{1}{(N^T)^2} \sum_{j \in C} (w_j)^2 \, \mathrm{Var}(Y_j^C).
\end{aligned} \qquad (10)
\]

In the programs attnd.ado, attnw.ado, and attr.ado, standard errors are obtained analytically using the above formula or by bootstrapping using the bootstrap option.

As far as the difference between attnd.ado and attnw.ado is concerned, it is most easily understood by briefly describing the way nearest neighbors are computationally determined in the two programs. To save on computing time, nearest neighbors are not determined by comparing treated observations to every single control, but by first sorting all records by the estimated propensity score and then searching forward and backward for the closest control unit(s). If for a treated unit forward and backward matches happen to be equally good, there are two computationally feasible options to obtain analytical standard errors while at the same time exploiting the very fast forward and backward search strategy: attnw.ado gives equal weight (hence the letters nw, for nearest neighbor and equal weight) to the groups of forward and backward matches; attnd.ado randomly draws either the forward or backward matches (hence the letters nd, for nearest neighbor and random draw). In practice, the case of multiple nearest neighbors should be very rare, in particular if the set of $X$s contains continuous variables, in which case attnw.ado and attnd.ado give equal results. The likelihood of multiple nearest neighbors is further reduced if the propensity score is estimated and saved in double precision, which is what pscore.ado does by default.
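As a small worked illustration of these weights (the numbers are invented for exposition): under Radius Matching, if treated unit $i$ has three controls inside the radius, each receives $w_{ij} = 1/3$; if a control $j$ is matched to two treated units, one with $N_i^C = 3$ and one with $N_k^C = 1$, it accumulates the total weight $w_j = 1/3 + 1 = 4/3$, and it is this total weight that enters the variance formula (10) squared.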
Kernel Matching method (attk.ado)

The Kernel Matching estimator is given by

\[
\tau^K = \frac{1}{N^T} \sum_{i \in T} \left\{ Y_i^T - \frac{\sum_{j \in C} Y_j^C \, G\!\left(\frac{p_j - p_i}{h_n}\right)}{\sum_{k \in C} G\!\left(\frac{p_k - p_i}{h_n}\right)} \right\}, \qquad (11)
\]

where $G(\cdot)$ is a kernel function and $h_n$ is a bandwidth parameter. Under standard conditions on the bandwidth and kernel,

\[
\frac{\sum_{j \in C} Y_j^C \, G\!\left(\frac{p_j - p_i}{h_n}\right)}{\sum_{k \in C} G\!\left(\frac{p_k - p_i}{h_n}\right)} \qquad (12)
\]

is a consistent estimator of the counterfactual outcome $Y_{0i}$. In the program attk.ado, standard errors are obtained by bootstrapping using the bootstrap option. Users can choose between the default Gaussian kernel and the Epanechnikov kernel.

Stratification method (atts.ado)

This method is based on the same stratification procedure used for estimating the propensity score. Note that, by construction, in each block defined by this procedure the covariates are balanced and the assignment to treatment can be considered random. Hence, letting $q$ index the blocks defined over intervals of the propensity score, within each block the program computes

\[
\tau_q^S = \frac{\sum_{i \in I(q)} Y_i^T}{N_q^T} - \frac{\sum_{j \in I(q)} Y_j^C}{N_q^C}, \qquad (13)
\]

where $I(q)$ is the set of units in block $q$, while $N_q^T$ and $N_q^C$ are the numbers of treated and control units in block $q$. The estimator of the ATT in equation (2) based on the Stratification method is then computed with the following formula:

\[
\tau^S = \sum_{q=1}^{Q} \tau_q^S \, \frac{\sum_{i \in I(q)} D_i}{\sum_i D_i}, \qquad (14)
\]

where the weight for each block is given by the corresponding fraction of treated units and $Q$ is the number of blocks. Assuming independence of outcomes across units, the variance of $\tau^S$ is computed by

\[
\mathrm{Var}(\tau^S) = \frac{1}{N^T} \left[ \mathrm{Var}(Y_i^T) + \sum_{q=1}^{Q} \frac{N_q^T}{N^T} \, \frac{N_q^T}{N_q^C} \, \mathrm{Var}(Y_j^C) \right]. \qquad (15)
\]

In the program atts.ado, standard errors are obtained analytically using the above formula or by bootstrapping using the bootstrap option.
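As a small worked illustration of (14) (the numbers are invented for exposition): with $Q = 2$ blocks containing 60 and 40 of $N^T = 100$ treated units and block-specific estimates $\tau_1^S = 500$ and $\tau_2^S = 1{,}000$, the overall estimate is $\tau^S = 0.6 \times 500 + 0.4 \times 1{,}000 = 700$.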
4 Syntax

pscore and att* are regression-like commands:

pscore treatment varlist [weight] [if exp] [in range], pscore(newvar) [blockid(newvar) detail logit comsup level(#) numblo(#)]

attnd outcome treatment [varlist] [weight] [if exp] [in range] [, pscore(scorevar) logit index comsup detail bootstrap reps(#) noisily dots]

attnw outcome treatment [varlist] [weight] [if exp] [in range] [, pscore(scorevar) logit index comsup detail bootstrap reps(#) noisily dots]

attr outcome treatment [varlist] [weight] [if exp] [in range] [, pscore(scorevar) logit index radius(#) comsup detail bootstrap reps(#) noisily dots]

attk outcome treatment [varlist] [weight] [if exp] [in range] [, pscore(scorevar) logit index epan bwidth(#) comsup detail bootstrap reps(#) noisily dots]

atts outcome treatment [varlist] [if exp] [in range], pscore(scorevar) blockid(blockvar) [comsup detail bootstrap reps(#) noisily dots]

5 Options

Note the following points:

- It is important to clean up your dataset before running this suite of Stata programs, in particular to delete observations with missing values.

- In pscore, the option pscore(newvar) is compulsory.

- pscore and the att* programs are closely related, in that users will typically first run pscore to estimate the propensity score and test whether the balancing property holds, and then proceed to estimating the ATT with one or more of the att* programs. Note, however, that all att* programs are stand-alone programs: if users prefer to estimate the propensity score with their own procedure, they can do so, specifying the name of the estimated propensity score as an input variable in the att* programs.

- If users are confident about the correct specification for the propensity score, that specification can be used directly in the att* programs to estimate the propensity score without first running pscore. This is actually advisable if bootstrapped standard errors are requested, because, when users do not specify their own previously estimated propensity score, the bootstrap encompasses the estimation of the propensity score based on the specification given in varlist. Re-estimating the propensity score at each replication of the bootstrap procedure is recommended to account for the uncertainty associated with the estimation of the propensity score. This is even more important when the comsup option is specified, because in this case the region of common support changes with every bootstrap sample, and bootstrapped standard errors pick up this uncertainty as well. So, typically, users would first identify a specification satisfying the balancing property (using, e.g., pscore) and then provide exactly this specification in varlist and use bootstrapped standard errors; a sketch of this workflow follows this list.

- For atts, in addition to the estimated propensity score, users must provide a variable containing the block identifier for the estimated propensity score. Therefore, in the case of atts it is most convenient to rely on pscore, because it generates both the estimated propensity score variable and the block identifier.
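A minimal sketch of that typical workflow, reusing the DW99 specification from the example in section 6 below:

. * Step 1: find a specification that satisfies the balancing property.
. pscore T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, pscore(mypscore) blockid(myblock) comsup logit
. * Step 2: give exactly this specification to an att* program; with boot, the
. * score is then re-estimated in every bootstrap replication.
. attnd RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, comsup logit boot reps(100)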
5.1 Options for pscore

pscore(newvar) is a compulsory option and asks users to specify the variable name for the estimated propensity score.

blockid(newvar) allows users to specify the variable name for the block number of the estimated propensity score.

detail displays more detailed output documenting the steps performed to obtain the final results.

logit uses a logit model to estimate the propensity score instead of the default probit model.

comsup restricts the analysis of the balancing property to all treated plus those controls in the region of common support. A dummy variable named comsup is added to the dataset to identify the observations in the common support.

level(#) allows users to set the significance level of the tests of the balancing property. Note that this significance level applies to the test of each single variable of the vector X of pre-treatment characteristics, i.e., the balancing property is not rejected only in case it holds for every single X. This is a relatively conservative approach, because of the following argument. Assume that the significance level is set to 0.05, that X consists of 20 variables, and that the tests of the balancing property are mutually independent. Then, with probability $\binom{20}{1}(0.05)(0.95)^{19} = 0.37$, one of the tests rejects the balancing property although it actually holds true (see the short check at the end of this subsection).

numblo(#) allows users to set the number of blocks of equal score range to be used at the beginning of the test of the balancing hypothesis. The default is 5 blocks.
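The 0.37 figure above is easy to verify directly in Stata:

. * Probability that exactly 1 of 20 independent tests rejects at the 5% level.
. display 20 * 0.05 * 0.95^19   // approximately .377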
5.2 Options common to all att* commands

comsup restricts the computation of the ATT to the region of common support.

detail displays more detailed output documenting the steps performed to obtain the final results.

bootstrap bootstraps the standard error of the treatment effect.

reps(#) specifies the number of bootstrap replications to be performed. The default is 50. This option produces an effect only if the bootstrap option is specified.

noisily requests that any output from the bootstrap replications be displayed. This option produces an effect only if the bootstrap option is specified.

dots requests that a dot be placed on the screen at the beginning of each bootstrap replication. This option produces an effect only if the bootstrap option is specified.

5.3 Options for attnd and attnw

pscore(scorevar) specifies the name of the user-provided variable containing the estimated propensity score; if this option is not specified, attnd and attnw estimate the propensity score with the specification provided in varlist using a probit.

logit allows a logit estimation of the propensity score instead of the default probit model when the option pscore(scorevar) is not specified by the user.

index requires the use of the linear index as the propensity score when the option pscore(scorevar) is not specified by the user.

5.4 Options for attr

pscore(scorevar) specifies the name of the user-provided variable containing the estimated propensity score; if this option is not specified, attr estimates the propensity score with the specification provided in varlist using a probit.

logit allows a logit estimation of the propensity score instead of the default probit model when the option pscore(scorevar) is not specified by the user.

index requires the use of the linear index as the propensity score when the option pscore(scorevar) is not specified by the user.

radius(#) specifies the size of the radius.
5.5 Options for attk

pscore(scorevar) specifies the name of the user-provided variable containing the estimated propensity score; if this option is not specified, attk estimates the propensity score with the specification provided in varlist using a probit.

logit allows a logit estimation of the propensity score instead of the default probit model when the option pscore(scorevar) is not specified by the user.

index requires the use of the linear index as the propensity score when the option pscore(scorevar) is not specified by the user.

epan specifies that the Epanechnikov kernel be used rather than the default Gaussian one.

bwidth(#) specifies the bandwidth to be used when choosing the epan option. The default is 0.06. This option produces an effect only if the Epanechnikov kernel is requested.

5.6 Options for atts

pscore(scorevar) is a compulsory option that specifies the name of the user-provided variable containing the estimated propensity score.

blockid(blockvar) is a compulsory option that specifies the name of the user-provided variable containing the block identifier of the estimated propensity score.

6 Example: NSW-PSID data

We use data from Dehejia and Wahba (1999), DW for short, which is based on LaLonde's (1986) seminal study of the comparison between experimental and non-experimental methods for the evaluation of causal effects. The data combine the treated units from a randomized evaluation of the National Supported Work (NSW) demonstration with non-experimental comparison units drawn from survey data. For the purpose of this section, we restrict our analysis to the so-called NSW-PSID-1 subsample, consisting of the male NSW treatment units and the largest of the three PSID subsamples (see DW99 for more detail). We use this dataset for two reasons. First, it is widely used in labor economics (starting with LaLonde, 1986, re-analyzed by Dehejia and Wahba, 1999 and 2002, and by Smith and Todd, 2003) to illustrate the workings of propensity score and matching techniques. Second, the data are publicly available at Rajeev Dehejia's website under the following address: http://www.columbia.edu/~rd247/nswdata.html

We tried to replicate the results produced by Dehejia and Wahba (1999) but, similar to Smith and Todd (2003), have not been able to numerically replicate all of their estimates because of lack of detailed information in some crucial instances (e.g., number of blocks used in stratification, significance levels, exact procedure for testing the balancing property). However, we get qualitatively similar results.

The outcome of interest is RE78 (real earnings in 1978); the treatment T is participation in the NSW treatment group. Control variables are age, education, Black (1 if black, 0 otherwise), Hispanic (1 if Hispanic, 0 otherwise), married (1 if married, 0 otherwise), nodegree (1 if no degree, 0 otherwise), RE75 (earnings in 1975), and RE74 (earnings in 1974). The treatment group contains 185 observations and the control group 2490 observations, so the total number of observations is 2675.
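The estimation below also uses higher-order and interaction terms built from these raw variables (they are defined in the footnote to the output below). A sketch of this preparation step, assuming the raw DW variable names:

. * Construct the squared and interaction terms used in the DW99 specification.
. generate age2 = age^2
. generate educ2 = educ^2
. generate RE742 = RE74^2
. generate RE752 = RE75^2
. generate blacku74 = black * (RE74 == 0)   // black x non-employment in 1974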
6.1 Output from pscore

The output from running pscore using the DW99 specification is as follows:[6][7]

. pscore T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, pscore(mypscore) blockid(myblock) comsup numblo(5) level(0.005) logit;

****************************************************
Algorithm to estimate the propensity score
****************************************************

The treatment is T

(tabulation of T omitted)

Estimation of the propensity score

Iteration 0: log likelihood = ...
(output omitted)
Iteration 9: log likelihood = ...

Logit estimates          Number of obs = 2675

(coefficient table omitted)

note: 22 failures and 0 successes completely determined.

Note: the common support option has been selected
The region of common support is [..., ...]

Description of the estimated propensity score in region of common support

[6] educ2 denotes squared education, RE742 and RE752 denote the square of RE74 and RE75, respectively, and blacku74 is the interaction of black and a dummy for non-employment (i.e., zero earnings) in 1974.

[7] Note that when specifying the detail option, (even) more detailed output is displayed documenting the steps performed to obtain the final results.
(summary statistics of the estimated propensity score omitted)

******************************************************
Step 1: Identification of the optimal number of blocks
Use option detail if you want more detailed output
******************************************************

The final number of blocks is 7
This number of blocks ensures that the mean propensity score
is not different for treated and controls in each block

Following the algorithm described in section 2, blocks for which the average propensity scores of treated and controls differ are split in half. The algorithm continues until, in all blocks, the average propensity score of treated and controls does not differ. In our case, this happens with seven blocks. Thereafter, pscore proceeds to the test of the balancing property for each covariate.

**********************************************************
Step 2: Test of balancing property of the propensity score
Use option detail if you want more detailed output
**********************************************************

The balancing property is satisfied

When the detail option is not specified, the only output produced by pscore is a statement saying whether the balancing property is satisfied (which is the case for the DW data with p = 0.005) or not. In the latter case, the user is informed for which variable(s) in which block(s) the balancing property failed, and a message is issued suggesting to try a different specification of the propensity score. In case the balancing property holds, the final distribution of treated and controls across blocks is tabulated together with the inferior of each block:

This table shows the inferior bound, the number of treated
and the number of controls for each block

(table of blocks omitted)

Note: the common support option has been selected

*******************************************
End of the algorithm to estimate the pscore
*******************************************

Note that we imposed the common support condition in this example using the comsup option. Consequently, block identifiers are missing for control observations outside the common support, and the number of observations in the table is 1342 instead of 2675.
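Since pscore has saved the block identifier in myblock, the joint distribution of blocks and treatment status can also be inspected directly afterwards (a sketch; pscore prints the table above on its own):

. * Cross-tabulate the saved block identifier against treatment status.
. tabulate myblock T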
After running pscore, users can proceed to estimate average treatment effects using one of the att* programs.

6.2 Output from attnd and attnw

The typical output from attnd or attnw[8] is:

. attnd RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, comsup boot reps(100) dots logit;

The program is searching the nearest neighbor of each treated unit.
This operation may take a while.

ATT estimation with Nearest Neighbor Matching method
(random draw version)
Analytical standard errors

n. treat.   n. contr.   ATT   Std. Err.   t
(estimates omitted)

Note: the numbers of treated and controls refer to actual
nearest neighbour matches

Bootstrapping of standard errors

command: attnd RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, pscore() logit comsup
statistic: r(attnd)
(obs=2675)

Bootstrap statistics
(estimates omitted; N = normal, P = percentile, BC = bias-corrected)

ATT estimation with Nearest Neighbor Matching method
(random draw version)
Bootstrapped standard errors

n. treat.   n. contr.   ATT   Std. Err.   t
(estimates omitted)

Note: the numbers of treated and controls refer to actual
nearest neighbour matches

Note that in this example, only 57 different controls have been matched to the 185 treated. These results are very close to the ones obtained by Dehejia and Wahba (1999).

[8] Remember that attnd and attnw will generally give the same results (except for bootstrapped standard errors) unless there are only discrete covariates and multiple nearest neighbors. This is not the case in our example, and therefore, to save space, we report here only the output of attnd.
6.3 Output from attr

For attr with radius r = 0.0001 we obtain:

. attr RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, comsup boot reps(100) dots logit radius(0.0001);

The program is searching for matches of treated units within radius.
This operation may take a while.

ATT estimation with the Radius Matching method
Analytical standard errors

n. treat.   n. contr.   ATT   Std. Err.   t
(estimates omitted)

Note: the numbers of treated and controls refer to actual
matches within radius

Bootstrapping of standard errors

command: attr RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, pscore() logit comsup radius(.0001)
statistic: r(attr)
(obs=2675)

Bootstrap statistics
(estimates omitted; N = normal, P = percentile, BC = bias-corrected)

ATT estimation with the Radius Matching method
Bootstrapped standard errors

n. treat.   n. contr.   ATT   Std. Err.   t
(estimates omitted)

Note: the numbers of treated and controls refer to actual
matches within radius

The large difference with respect to the caliper matching results of Dehejia and Wahba (2002) comes from the fact that caliper matching differs from radius matching: with caliper matching, the nearest control is used as a match if a treated unit has no control units within radius r. While caliper matching uses all treated units, our method uses only those treated units that have control matches within radius r (here, 23 out of 185 treated). This example illustrates the sensitivity of the results to extreme assumptions used in the matching procedure. If the radius is chosen to be very small, many treated units are not matched and the results are no longer representative of the population of treated. For a more detailed discussion of this issue, see Smith and Todd (2003).
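Given this sensitivity, it can be instructive to re-run attr over a grid of radii and compare the estimates and the numbers of matched treated units (a sketch; the radius values are illustrative):

. * Probe the sensitivity of the radius matching estimate to the radius.
. attr RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, comsup logit radius(0.001)
. attr RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, comsup logit radius(0.01)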
6.4 Output from attk

For attk the results are as follows:[9]

. attk RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, comsup boot reps(100) dots logit;

The program is searching for matches of each treated unit.
This operation may take a while.

ATT estimation with the Kernel Matching method

n. treat.   n. contr.   ATT   Std. Err.   t
(estimates omitted)

Note: Analytical standard errors cannot be computed. Use
the bootstrap option to get bootstrapped standard errors.

Bootstrapping of standard errors

command: attk RE78 T age age2 educ educ2 marr black hisp RE74 RE75 RE742 RE752 blacku74, pscore() comsup logit bwidth(.06)
statistic: r(attk)
(obs=2675)

Bootstrap statistics
(estimates omitted; N = normal, P = percentile, BC = bias-corrected)

ATT estimation with the Kernel Matching method
Bootstrapped standard errors

n. treat.   n. contr.   ATT   Std. Err.   t
(estimates omitted)

In kernel matching, all treated units as well as all controls (in the common support, which has been imposed here) are used. The estimate of the ATT is quite close to the one obtained with nearest neighbor matching.

[9] Note that Dehejia and Wahba do not present results for kernel matching.
6.5 Output from atts

Finally, for atts with the blocks obtained in pscore:[10]

. atts RE78 T, pscore(mypscore) blockid(myblock) comsup boot reps(100) dots;

ATT estimation with the Stratification method
Analytical standard errors

n. treat.   n. contr.   ATT   Std. Err.   t
(estimates omitted)

Bootstrapping of standard errors

command: atts RE78 T, pscore(mypscore) blockid(myblock) comsup
statistic: r(atts)
(obs=2675)

Bootstrap statistics
(estimates omitted; N = normal, P = percentile, BC = bias-corrected)

ATT estimation with the Stratification method
Bootstrapped standard errors

n. treat.   n. contr.   ATT   Std. Err.   t
(estimates omitted)

Here, the difference with respect to the DW99 results is slightly bigger than for nearest neighbor matching. This might be explained by a different number of blocks used in stratification, different significance levels, or a different procedure for testing the balancing property (see the general remark at the beginning of section 6). But overall, the results obtained by attnw, attk, and atts are quite close to each other and, taken together, give evidence of a positive ATT associated with the NSW demonstration (when evaluated with non-experimental comparison groups) that is close to the experimental estimate of about $1,700.

[10] Note that there are two special cases concerning the computation of the ATT and its analytical standard error. First, if there is no treated and/or no control unit in one (or more) of the blocks, the ATT is computed on the remaining blocks, which practically amounts to imposing a (block-based) common support condition. Second, if there is exactly one treated and/or one control unit in one (or more) of the blocks, the ATT in that block can still be computed but the standard error cannot. In this case, atts will produce missing values for the standard error. However, bootstrapped standard errors can still be computed.
7 Saved Results

The att* commands save the following in r():

Scalars
  r(nt*)       number of treated used in the computation of att*
  r(nc*)       number of controls used in the computation of att*
  r(att*)      ATT obtained by att*
  r(seatt*)    analytical standard error for att*
  r(tsatt*)    analytical t-statistic for att*
  r(bseatt*)   bootstrapped standard error for att*
  r(btsatt*)   bootstrapped t-statistic for att*
  r(mean1)     mean outcome of matched treated for attk
  r(mean0)     mean outcome of matched controls for attk
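These saved results can be used for post-processing. For example, after running attnd, the ATT and its analytical t-statistic can be retrieved as follows (a sketch):

. * Retrieve saved results from r() after attnd.
. display r(attnd)
. display r(tsattnd)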
8 Acknowledgements

We thank Rajeev Dehejia for providing us with his data. We would like to thank Marco Caliendo, Rajeev Dehejia, Hendrik Juerges, Torsten Persson, Anna Sanz de Galdeano, Barbara Sianesi, and Daniela Vuri for very helpful discussions and for testing our programs on their data. The comments of an anonymous referee helped to considerably improve the paper and the programs.

9 References

Cochran, W. and Rubin, D. B. (1973), "Controlling Bias in Observational Studies," Sankhya 35, 417-446.

Dehejia, R. H. and Wahba, S. (1999), "Causal Effects in Nonexperimental Studies: Reevaluating the Evaluation of Training Programs," Journal of the American Statistical Association 94, 1053-1062.

Dehejia, R. H. and Wahba, S. (2002), "Propensity Score Matching Methods for Non-Experimental Causal Studies," Review of Economics and Statistics 84(1), 151-161.

Heckman, J., Ichimura, H., Smith, J., and Todd, P. (1998), "Characterizing Selection Bias Using Experimental Data," Econometrica 66(5), 1017-1098.

Heckman, J., Ichimura, H., and Todd, P. (1997), "Matching As An Econometric Evaluation Estimator: Evidence from Evaluating a Job Training Program," Review of Economic Studies 64(4), 605-654.

Heckman, J., Ichimura, H., and Todd, P. (1998), "Matching As An Econometric Evaluation Estimator," Review of Economic Studies 65(2), 261-294.

Heckman, J., LaLonde, R., and Smith, J. (1999), "The Economics and Econometrics of Active Labor Market Programs," in Ashenfelter, O. and Card, D. (eds.), The Handbook of Labor Economics, Volume III.

Imbens, G. W. (2000), "The Role of the Propensity Score in Estimating Dose-Response Functions," Biometrika 87(3), 706-710.

Lechner, M. (2001a), "A Note on the Common Support Problem in Applied Evaluation Studies," Discussion Paper, Department of Economics, University of St. Gallen.

Lechner, M. (2001b), "Identification and Estimation of Causal Effects of Multiple Treatments under the Conditional Independence Assumption," in Lechner, M. and Pfeiffer, F. (eds.), Econometric Evaluation of Labour Market Policies, Heidelberg: Physica/Springer.

Jones, M. C., Marron, J. S., and Sheather, S. J. (1996), "A Brief Survey of Bandwidth Selection for Density Estimation," Journal of the American Statistical Association 91(433), 401-407.
Rosenbaum, P. R. and Rubin, D. B. (1983), "The Central Role of the Propensity Score in Observational Studies for Causal Effects," Biometrika 70(1), 41-55.

Rosenbaum, P. R. and Rubin, D. B. (1984), "Reducing Bias in Observational Studies Using Subclassification on the Propensity Score," Journal of the American Statistical Association 79, 516-524.

Rubin, D. B. (1974), "Estimating Causal Effects of Treatments in Randomized and Non-Randomized Studies," Journal of Educational Psychology 66, 688-701.

Smith, J. and Todd, P. (2003), "Does Matching Overcome LaLonde's Critique of Nonexperimental Estimators?," forthcoming, Journal of Econometrics.

Silverman, B. W. (1986), Density Estimation for Statistics and Data Analysis, London: Chapman and Hall.

About the Authors

Sascha O. Becker is assistant professor of Economics at the University of Munich, Germany. He is also affiliated with CESifo and IZA. Andrea Ichino is professor of Economics at the European University Institute, Florence, Italy. He is also affiliated with CESifo, CEPR, and IZA. Please address any correspondence to Sascha O. Becker ([email protected]) or Andrea Ichino ([email protected]).
Logistic Regression Department of Statistics The Pennsylvania State University Email: [email protected] Logistic Regression Preserve linear classification boundaries. By the Bayes rule: Ĝ(x) = arg max
Vector and Matrix Norms
Chapter 1 Vector and Matrix Norms 11 Vector Spaces Let F be a field (such as the real numbers, R, or complex numbers, C) with elements called scalars A Vector Space, V, over the field F is a non-empty
The Stata Journal. Lisa Gilmore Gabe Waggoner
The Stata Journal Editor H. Joseph Newton Department of Statistics Texas A & M University College Station, Texas 77843 979-845-3142; FAX 979-845-3144 [email protected] Associate Editors Christopher
MISSING DATA TECHNIQUES WITH SAS. IDRE Statistical Consulting Group
MISSING DATA TECHNIQUES WITH SAS IDRE Statistical Consulting Group ROAD MAP FOR TODAY To discuss: 1. Commonly used techniques for handling missing data, focusing on multiple imputation 2. Issues that could
Introduction to Fixed Effects Methods
Introduction to Fixed Effects Methods 1 1.1 The Promise of Fixed Effects for Nonexperimental Research... 1 1.2 The Paired-Comparisons t-test as a Fixed Effects Method... 2 1.3 Costs and Benefits of Fixed
Bootstrapping Big Data
Bootstrapping Big Data Ariel Kleiner Ameet Talwalkar Purnamrita Sarkar Michael I. Jordan Computer Science Division University of California, Berkeley {akleiner, ameet, psarkar, jordan}@eecs.berkeley.edu
Empirical Methods in Applied Economics
Empirical Methods in Applied Economics Jörn-Ste en Pischke LSE October 2005 1 Observational Studies and Regression 1.1 Conditional Randomization Again When we discussed experiments, we discussed already
Simple Linear Regression Inference
Simple Linear Regression Inference 1 Inference requirements The Normality assumption of the stochastic term e is needed for inference even if it is not a OLS requirement. Therefore we have: Interpretation
A Coefficient of Variation for Skewed and Heavy-Tailed Insurance Losses. Michael R. Powers[ 1 ] Temple University and Tsinghua University
A Coefficient of Variation for Skewed and Heavy-Tailed Insurance Losses Michael R. Powers[ ] Temple University and Tsinghua University Thomas Y. Powers Yale University [June 2009] Abstract We propose a
WHERE DOES THE 10% CONDITION COME FROM?
1 WHERE DOES THE 10% CONDITION COME FROM? The text has mentioned The 10% Condition (at least) twice so far: p. 407 Bernoulli trials must be independent. If that assumption is violated, it is still okay
Some probability and statistics
Appendix A Some probability and statistics A Probabilities, random variables and their distribution We summarize a few of the basic concepts of random variables, usually denoted by capital letters, X,Y,
Linear Classification. Volker Tresp Summer 2015
Linear Classification Volker Tresp Summer 2015 1 Classification Classification is the central task of pattern recognition Sensors supply information about an object: to which class do the object belong
Descriptive Statistics
Y520 Robert S Michael Goal: Learn to calculate indicators and construct graphs that summarize and describe a large quantity of values. Using the textbook readings and other resources listed on the web
Confidence Intervals for Cp
Chapter 296 Confidence Intervals for Cp Introduction This routine calculates the sample size needed to obtain a specified width of a Cp confidence interval at a stated confidence level. Cp is a process
Multivariate Analysis of Variance. The general purpose of multivariate analysis of variance (MANOVA) is to determine
2 - Manova 4.3.05 25 Multivariate Analysis of Variance What Multivariate Analysis of Variance is The general purpose of multivariate analysis of variance (MANOVA) is to determine whether multiple levels
Standard Deviation Estimator
CSS.com Chapter 905 Standard Deviation Estimator Introduction Even though it is not of primary interest, an estimate of the standard deviation (SD) is needed when calculating the power or sample size of
Using Stata for One Sample Tests
Using Stata for One Sample Tests All of the one sample problems we have discussed so far can be solved in Stata via either (a) statistical calculator functions, where you provide Stata with the necessary
The Impact of Retail Payment Innovations on Cash Usage
1/23 The Impact of Retail Payment Innovations on Cash Usage Ben S.C. Fung a Kim P. Huynh a Leonard Sabetti b a Bank of Canada b George Mason University ECB-MNB Joint Conference Cost and efficiency of retail
Simple linear regression
Simple linear regression Introduction Simple linear regression is a statistical method for obtaining a formula to predict values of one variable from another where there is a causal relationship between
Discussion. Seppo Laaksonen 1. 1. Introduction
Journal of Official Statistics, Vol. 23, No. 4, 2007, pp. 467 475 Discussion Seppo Laaksonen 1 1. Introduction Bjørnstad s article is a welcome contribution to the discussion on multiple imputation (MI)
