Power Calculation Using the Online Variance Almanac (Web VA): A User's Guide




Larry V. Hedges & E.C. Hedberg

This research was supported by the National Science Foundation under Award Nos. 0129365 and 0815295. Additional support was provided to Hedges by the Institute of Education Sciences under Award No. R305D110032. Any opinions, findings, and conclusions or recommendations expressed here are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or the Institute of Education Sciences.

Center for Advancing Research and Communication in Science, Technology, Engineering, and Mathematics (ARC)
http://arc.uchicago.edu
NORC at the University of Chicago
1155 East 60th Street
Chicago, IL 60637

Table of Contents

Introduction
The Web VA
Using the Web VA
Stata program for computing power using Web VA design parameters
Example power analyses
  A design with no covariates (unconditional model)
  A design using pretest as a covariate
  A design using demographic variables as covariates
  A design using both pretest and demographic covariates

Exhibits
Exhibit 1: Surveys providing values for the Web VA
Exhibit 2: Opening page of the Web VA resource
Exhibit 3: Filter selection page of the Web VA resource
Exhibit 4: Sample output from the Web VA
Exhibit 5: Design parameters based on a national probability sample of Grade 12 Mathematics Achievement from All Schools in the National Education Longitudinal Study (NELS) (All Regions, All Urbanicities)
Exhibit 6: Stata output from the RDPOWER program based on parameters for a design with no covariates
Exhibit 7: Stata output from the RDPOWER program based on parameters for a design using pretest as a covariate
Exhibit 8: Stata output from the RDPOWER program based on parameters for a design using demographic variables as covariates
Exhibit 9: Stata output from the RDPOWER program based on parameters for a design using both pretest and demographic covariates

Introduction

In planning any research study, it is crucial that the design be sufficiently sensitive to detect effects of the size that the treatment is expected to produce. Wise design therefore requires statistical power and precision computations to ensure that the expected effects have a high probability of being detected. In designs that involve simple random sampling and assign individuals to treatments, power and precision calculations are straightforward. Statistical power at a given statistical significance level is determined entirely by the expected effect size, the sample size, and (if covariates are used) the number and effectiveness of the covariates in explaining variance in the outcome (measured by the R² value). Choosing a design with a given statistical power thus simply involves finding the sample size that yields that power given the effect size and, if covariates are used, the R² value.

However, not all research designs involve simple random sampling and assignment of individuals to treatments. Education studies frequently sample students within schools, which is not simple random sampling but a form of two-stage cluster sampling. Moreover, experiments and quasi-experiments in education and the social sciences often assign intact groups of individuals (such as schools or classrooms) to treatments. Such studies are called cluster randomized or group randomized studies. In the experimental design literature, they are also called hierarchical experiments or experiments with groups as nested factors within treatments. In other contexts they are called multilevel experiments to emphasize the multiple levels of sampling (first groups, then individuals within groups).

Studies with group assignment arise for many reasons. It may be impractical to assign individuals within the same group to different treatments (e.g., to assign different students in the same classroom to different curricula or teaching methods). It may be politically infeasible to assign individuals within the same group to different treatments (e.g., to assign different students in the same school to different class sizes or resource regimes). Finally, it may be theoretically impossible to assign different treatments to individuals within the same group (e.g., when the treatment is a whole-school reform such as a management reform).

The design of such studies poses challenges that are more complicated than those of studies with simple random sampling and assignment of individuals to treatments. For example, cluster sampling reduces efficiency, so a cluster sample contains less information than a simple random sample of the same size. (This loss of information, which appears as increased estimation error variance, is sometimes called the design effect in survey statistics.) The reduced efficiency of cluster sampling, combined with the assignment of intact groups, means that studies with group assignment are less sensitive than studies with the same total sample size that use simple random sampling and individual assignment. Moreover, because studies with group assignment involve multilevel sampling, statistical power computations are more complex, and conventional software for

computing power is not intended for these designs (and will not provide the appropriate calculations). For example, in multilevel studies with group assignment, statistical power depends not only on total sample size but also on how that sample size is allocated across the levels of the design. A design that assigns 100 groups of 10 students each to treatments will have very different statistical power than a study that assigns 10 groups of 100 students each to treatments, even though both have the same total sample size of 1,000. The structure of the population variance decomposition (e.g., the proportion of total variation that arises between groups) also affects statistical power. This decomposition is often summarized by an intraclass correlation (ICC), which is literally the ratio of between-group variance to total variance in the population.

As in the case of individual assignment, the use of covariates also affects statistical power in designs with group assignment. However, because group assignment with cluster sampling involves two levels of variation (the group level and the individual within-group level), covariates can have effects on variation at either or both levels. (This is explicit when multilevel statistical analyses are used.) The effect of covariates depends on how much variation is explained at each level of the design and is usually summarized by the R² value at each level (i.e., a group-level R² and an individual-level R²).

Thus statistical power calculations in designs with cluster sampling and group assignment to treatments require knowledge not only of the effect size and the sample sizes at each level of the design, but also information about the population variance structure (summarized by the intraclass correlation) and, if covariates are used, about the effectiveness of the covariates in explaining variation (summarized by the R² values at each level of the design).
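The effect of allocation can be made concrete with the survey design effect, DEFF = 1 + (n - 1) * ICC, which gives the factor by which clustering inflates estimation error variance. The short Python sketch below applies it to the two allocations of 1,000 students discussed above; the ICC of 0.20 is an illustrative value chosen for the example, not a Web VA estimate.

```python
def design_effect(n, icc):
    """Variance inflation from clustering with n students per cluster:
    DEFF = 1 + (n - 1) * ICC."""
    return 1.0 + (n - 1.0) * icc

icc = 0.20  # illustrative school-level ICC, not taken from the Web VA
for clusters, n in [(100, 10), (10, 100)]:
    deff = design_effect(n, icc)
    # Effective N: the simple random sample size carrying the same information
    effective_n = clusters * n / deff
    print(f"{clusters} clusters of {n}: DEFF = {deff:.1f}, "
          f"effective N = {effective_n:.0f}")
```

Both allocations sample 1,000 students, but the 100-cluster design retains roughly seven times as much information as the 10-cluster design, which is why power depends on how the total sample is divided across levels.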
It is difficult to know, in the abstract and before a study has been carried out, what values of the design parameters (intraclass correlations and R² values) should be used to plan a study. The purpose of the Online Variance Almanac (Web VA) is to provide users with a resource for principled planning of research designs: empirical evidence on design parameters, drawn from surveys of academic achievement, for planning research studies.

The Web VA

The Web VA allows users to access thousands of intraclass correlations and R² values from different datasets, grades, subjects, and subsamples. These values were obtained from analyses of four national surveys that used probability samples of students in kindergarten through 12th grade in U.S. schools. The surveys included are listed in Exhibit 1.

Exhibit 1: Surveys providing values for the Web VA

- The Early Childhood Longitudinal Program (ECLS) (http://nces.ed.gov/ecls/)
- The Longitudinal Study of American Youth (LSAY) (http://lsay.msu.edu/)
- The National Education Longitudinal Study (NELS) (http://nces.ed.gov/surveys/nels88/)
- Prospects: The Congressionally Mandated Study of Educational Growth and Opportunity (http://www2.ed.gov/pubs/prospects/index.html)

In analyzing the surveys to calculate the values available via the Web VA, we considered the following:

Achievement Domain. For each of these surveys, we computed reference values for reading and mathematics achievement.

Geographic Subgroups. For each of these surveys, we computed reference values for the entire nation and for geographic subgroups defined by region and by degree of urbanicity of the school location, as defined by those surveys.

School Subsample. For each achievement domain and geographic group, we also considered subsamples of schools defined by level of average achievement in the school and level of average SES in the school.

Research Design. We computed design parameters for four different research designs. One design used no covariates. A second design used only a pretest given to the same students one year (or, in the NELS survey, two years) earlier. A third design used only demographic variables as covariates: a composite measure of socioeconomic status, gender, and indicators of Black and Hispanic race/ethnicity. A fourth design included both the pretest and the demographic covariates.
A full discussion of the methodology employed to calculate the ICCs and R² values reported here is contained in Hedges and Hedberg's (2007) Intraclass Correlation Values for Planning Group-Randomized Trials in Education. That 2007 article also reports values for the nation as a whole (all regions, all urbanicities). The Web VA provides access to both the national values and the values for specific subgroups (e.g., regions and urbanicities).

Using the Web VA

The Center for Advancing Research and Communication in Science, Technology, Engineering, and Mathematics (ARC) provides access to the Web VA via https://arc.uchicago.edu/reese/variance-almanac-academic-achievement. Follow these steps to use the Web VA resource to obtain reference values for computing statistical power in group randomized experiments.

1) Click on the "Go to VA" arrow on the web page above, or go directly to the URL https://arcdata.uchicago.edu/. You will be taken to a web page like that shown in Exhibit 2.

Exhibit 2: Opening page of the Web VA resource

2) If you would like to join our mailing list, enter your e-mail address. Providing this information is optional; if you prefer not to provide it, click on "Skip to Search."

3) You will be asked to select filters for your query, as shown in Exhibit 3. After each selection, you will need to click on "Submit." The filters are:

   - Subject
     - Mathematics Achievement
     - Reading Achievement
   - Region [of the U.S.]
     - All Regions
     - Midwest
     - Northeast
     - South
     - West

   - Urbanicity
     - All Urbanicities
     - Rural
     - Suburban
     - Urban
   - Subsample
     - All Schools
     - Low Achieving Schools
     - Low SES Schools
   - Grade (select a grade, from kindergarten through 12th grade, from the dropdown list)
   - Data Source (depending upon the grade selected, one or more of the following data sources will be available to choose from, as shown in the table below)
     - Early Childhood Longitudinal Study (ECLS)
     - Longitudinal Study of American Youth (LSAY) Cohort 1
     - Longitudinal Study of American Youth (LSAY) Cohort 2
     - National Education Longitudinal Study (NELS)
     - Prospects Study of Educational Growth and Opportunity Cohort 1
     - Prospects Study of Educational Growth and Opportunity Cohort 2
     - Prospects Study of Educational Growth and Opportunity Cohort 3

   Grade K LSAY Cohort 1* LSAY Cohort 2* Prospects Cohort 1 Prospects Cohort 2 Prospects Cohort 3 ECLS NELS X 1 X 2 X 3 X X X 4 X 5 X 6 X 7 X X 8 X X X 9 X X 10 X X X 11 X 12 X X

   *Math only

Exhibit 3: Filter selection page of the Web VA resource

4) Once all the filters have been selected, click on "Submit." This will lead you to a new web page that displays a formatted table with the design parameters of interest and a separate table of sample information, as shown in Exhibit 4. This page is formatted for easy printing. The parameter selections entered to generate these tables are included in the titles and footnotes.

Exhibit 4: Sample output from the Web VA

CAUTION: Even though the total sample sizes of these surveys are large, the sample sizes for some of the subgroups are rather small. It is important to be sensitive to these sample sizes, and particularly to the standard errors of the design parameters, which reflect the uncertainty of the estimates.

Stata program for computing power using Web VA design parameters

Hedges and Hedberg developed the program RDPOWER, available for use with Stata 11, to calculate statistical power using output from the Web VA. To install this program in Stata, follow these four steps.

a) Launch Stata 11.
b) Make sure you are connected to the internet with administrative privileges.
c) Type: ssc install rdpower
d) Stata will report when the program is installed.

Following are examples of using RDPOWER to calculate statistical power from Web VA results. With Stata open, you can also type "help rdpower" to learn more about the program's features and to see additional examples.

Example Power Analyses

The Web VA gives the parameters necessary to calculate statistical power for four designs: (1) a design with no covariates (unconditional model); (2) a design using pretest as a covariate; (3) a design using demographic variables as covariates; and (4) a design using both pretest and demographic covariates. Looking again at the sample Web VA results presented in Exhibit 4 (with the design parameters of interest reproduced in Exhibit 5), the following sections give instructions for calculating power with RDPOWER for each design in turn. These results are based on 12th grade math scores from NELS, using all schools from all regions and all urbanicities. For the following examples, we will focus only on the intraclass correlation and R² values reproduced below.

Exhibit 5: Design parameters based on a national probability sample of Grade 12 Mathematics Achievement from All Schools in the National Education Longitudinal Study (NELS).
(All Regions, All Urbanicities)

Covariates Used                                                           | ICC    | SE of ICC | R² (School Level) | R² (Student Level)
--------------------------------------------------------------------------|--------|-----------|-------------------|-------------------
None                                                                      | 0.2386 | 0.0109    | --                | --
Pretest (one school and one student level variable)                       | 0.0376 | 0.0051    | 0.9746            | 0.7985
SES, race, gender (four school and four student level variables)          | 0.0686 | 0.0068    | 0.7817            | 0.1018
Pretest, SES, race, gender (five school and five student level variables) | 0.0341 | 0.0054    | 0.9764            | 0.8010

Note: The R² columns give the proportion of variance accounted for by the covariates at each level.

A design with no covariates (unconditional model)

Suppose that the design being considered uses no covariates. Let us calculate power for a two-level randomized design in which students are sampled within schools and treatment is assigned to schools (that is, treatment is at level 2). The intraclass correlation value for this model (icc2) is the ICC value in the first row of Exhibit 5 (0.2386). To calculate power for detecting an effect size (a standardized mean difference, or d-index) of 0.25, with 8 schools per treatment and 25 students per school (8 * 2 * 25, for a total sample size of 400 students), we type the following into Stata:

rdpower crd2, es(0.25) n(25) m(8) icc2(0.2386)

This command tells Stata to use the RDPOWER program to calculate power for a two-level cluster randomized design (crd2), with an effect size (es) of 0.25, 25 students per cluster (n=25), 8 clusters per treatment (m=8), and an intraclass correlation at level 2 of 0.2386 (icc2=0.2386). Exhibit 6 shows the output from this command. This tells us that we have 0.1465 power with this design.

Exhibit 6: Stata output from the RDPOWER program based on parameters for a design with no covariates
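For readers without Stata, the unconditional calculation can be reproduced from first principles. The sketch below uses the standard noncentral-t power formula for a two-level cluster randomized design (noncentrality es * sqrt(m*n / (2*(1 + (n-1)*icc))), with 2m - 2 degrees of freedom), approximating the noncentral-t tail areas with a normal approximation. RDPOWER's internals are not documented here, so treat this as an independent cross-check rather than the program's own algorithm; the critical value t(0.975, 14) = 2.1448 is taken from a t table.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def nct_sf(t, df, ncp):
    """P(T > t) for a noncentral t variable, via a normal approximation
    (accurate to roughly three decimals for moderate df)."""
    z = (ncp - t * (1.0 - 1.0 / (4.0 * df))) / math.sqrt(1.0 + t * t / (2.0 * df))
    return norm_cdf(z)

def crt2_power(es, n, m, icc, t_crit):
    """Two-sided power for a 2-level cluster randomized design with treatment
    at level 2 and no covariates: m clusters per arm, n students per cluster."""
    ncp = es * math.sqrt(m * n / (2.0 * (1.0 + (n - 1.0) * icc)))
    df = 2 * m - 2
    return nct_sf(t_crit, df, ncp) + nct_sf(t_crit, df, -ncp)

# Parameters from the text: es=0.25, n=25, m=8, icc2=0.2386; df = 14
power = crt2_power(0.25, 25, 8, 0.2386, t_crit=2.1448)
print(round(power, 3))  # close to the 0.1465 reported in Exhibit 6
```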

A design using pretest as a covariate

In many situations pretest data are available for each student. We have determined the R² values associated with entering a group-centered pretest score and the school average pretest score. Important: in this model we have entered 1 level 2 variable (l2vars).

Still using the intraclass correlation value from the unconditional model (in this case 0.2386), we now enter the R² values from the pretest covariate model (the second row of Exhibit 5). In this case, the between-school R² value (the proportional reduction in error at level 2, or pre2) is 0.9746, and the within-school R² value (the proportional reduction in error at level 1, or pre1) is 0.7985. Next we will calculate power for a two-level randomized design in which treatment is at level 2. To calculate power for detecting an effect size of 0.25, with 8 schools per treatment and 25 students per school (for a total sample size of 400 students), type the following into Stata:

rdpower crd2, es(0.25) n(25) m(8) icc2(0.2386) pre2(0.9746) pre1(0.7985) l2vars(1)

This tells Stata to use the RDPOWER program to calculate power for a two-level cluster randomized design (crd2) with an effect size (es) of 0.25, 25 students per cluster (n=25), 8 clusters per treatment (m=8), an intraclass correlation (icc2) of 0.2386, a pre2 value of 0.9746, a pre1 value of 0.7985, and 1 variable at level 2 (l2vars=1). Exhibit 7 shows the output from this command. This tells us that the power for this design is 0.9865.
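The same cross-check extends to covariate-adjusted designs: the covariates shrink the residual variance at each level by (1 - R²), and each level-2 covariate costs one degree of freedom. Again this is a hedged sketch of the standard formula, not RDPOWER's documented internals; the critical value t(0.975, 13) = 2.1604 is taken from a t table.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def nct_sf(t, df, ncp):
    """P(T > t) for a noncentral t variable, via a normal approximation."""
    z = (ncp - t * (1.0 - 1.0 / (4.0 * df))) / math.sqrt(1.0 + t * t / (2.0 * df))
    return norm_cdf(z)

def crt2_power_cov(es, n, m, icc, pre2, pre1, l2vars, t_crit):
    """Power for a 2-level cluster randomized design with covariates.
    pre2/pre1 are the level-2 and level-1 R-squared values; l2vars level-2
    covariates are subtracted from the degrees of freedom (df = 2m - 2 - l2vars)."""
    # Variance of the treatment contrast with covariate-adjusted components
    var = 2.0 * (icc * (1.0 - pre2) + (1.0 - icc) * (1.0 - pre1) / n) / m
    ncp = es / math.sqrt(var)
    df = 2 * m - 2 - l2vars
    return nct_sf(t_crit, df, ncp) + nct_sf(t_crit, df, -ncp)

# Pretest-covariate example: df = 2*8 - 2 - 1 = 13
power = crt2_power_cov(0.25, 25, 8, 0.2386, 0.9746, 0.7985, 1, t_crit=2.1604)
print(round(power, 3))  # close to the 0.9865 reported in Exhibit 7
```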

Exhibit 7: Stata output from the RDPOWER program based on parameters for a design using pretest as a covariate

A design using demographic variables as covariates

In many situations demographic data are available for each student. In our demographic covariates model, we have determined the R² values associated with entering group-centered demographic variables and their school averages. Important: in this model we have entered 4 level 2 variables (l2vars).

Still using the intraclass correlation value from the unconditional model (in this case an icc2 of 0.2386), we now enter the R² values from the demographic covariates model (the third row of Exhibit 5). As shown in Exhibit 5, the between-school (pre2) R² value is 0.7817 and the within-school (pre1) R² value is 0.1018. Next we will calculate power for a two-level cluster randomized design (crd2) in which treatment is at level 2. To calculate power for detecting an effect size (es) of 0.25, with 8 schools per treatment and 25 students per school (for a total sample size of 400 students), type the following into Stata:

rdpower crd2, es(0.25) n(25) m(8) icc2(0.2386) pre2(0.7817) pre1(0.1018) l2vars(4)

This tells Stata to use the RDPOWER program to calculate power for a two-level cluster randomized design (crd2) with an effect size (es) of 0.25, 25 students per cluster (n=25), 8 clusters per treatment (m=8), a level-2 ICC (icc2) of 0.2386, a level-2 proportional reduction in error (pre2) of 0.7817, a level-1 proportional reduction in error (pre1) of 0.1018, and 4 variables at level 2 (l2vars=4). Exhibit 8 shows the output from this command, which tells us that the power for this design is 0.3614.

Exhibit 8: Stata output from the RDPOWER program based on parameters for a design using demographic variables as covariates

A design using both pretest and demographic covariates

In many situations both pretest and demographic data are available for each student. In our pretest and demographic covariates model, we have determined the R² values associated with entering group-centered pretest and demographic variables and their school averages. Important: in this model we have entered 5 level 2 variables.

Still using the ICC value from the unconditional model (in this case 0.2386), we now enter the R² values from the pretest and demographic covariates model (the fourth row of Exhibit 5). In this case, the between-school (pre2) R² value is 0.9764 and the within-school (pre1) R² value is 0.8010.

Next we will calculate power for a two-level cluster randomized design in which treatment is at level 2. To calculate power for detecting an effect size of 0.25, with 8 schools per treatment and 25 students per school (for a total sample size of 400 students), type the following into Stata:

rdpower crd2, es(0.25) n(25) m(8) icc2(0.2386) pre2(0.9764) pre1(0.8010) l2vars(5)

This tells Stata to use the RDPOWER program to calculate power for a two-level cluster randomized design (crd2) with an effect size (es) of 0.25, 25 students per cluster (n=25), 8 clusters per treatment (m=8), a level-2 intraclass correlation (icc2) of 0.2386, a level-2 proportional reduction in error (pre2) of 0.9764, a level-1 proportional reduction in error (pre1) of 0.8010, and 5 variables at level 2 (l2vars=5). Exhibit 9 shows the output from this command. This tells us that the power for this design is 0.9834.

Exhibit 9: Stata output from the RDPOWER program based on parameters for a design using both pretest and demographic covariates
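Pulling the four worked examples together, the sketch below recomputes all four powers with one general routine (the unconditional design is the special case with both R² values and l2vars set to zero). It uses the standard noncentral-t formula with a normal approximation to the tails, so the values land close to, but not exactly on, the RDPOWER outputs; the critical values t(0.975, df) for df = 14, 13, 10, and 9 are taken from a t table. The comparison makes the substantive point of this guide visible: a strong pretest covariate raises power from about 0.15 to nearly 0.99 at the same sample size, while demographics alone help far less.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_crt2(es, n, m, icc, pre2, pre1, l2vars, t_crit):
    """Two-sided power for a 2-level cluster randomized design, treatment at
    level 2, m clusters per arm, n per cluster; pre2/pre1 are level-2/level-1
    R-squared values and l2vars is the number of level-2 covariates."""
    var = 2.0 * (icc * (1.0 - pre2) + (1.0 - icc) * (1.0 - pre1) / n) / m
    ncp = es / math.sqrt(var)
    df = 2 * m - 2 - l2vars

    def tail(t, d):
        # Normal approximation to the noncentral-t survival function
        return norm_cdf((d - t * (1.0 - 1.0 / (4.0 * df)))
                        / math.sqrt(1.0 + t * t / (2.0 * df)))

    return tail(t_crit, ncp) + tail(t_crit, -ncp)

# The four designs from Exhibit 5 (es=0.25, n=25, m=8, icc2=0.2386)
designs = [
    ("none",                    0.0,    0.0,    0, 2.1448),
    ("pretest",                 0.9746, 0.7985, 1, 2.1604),
    ("demographics",            0.7817, 0.1018, 4, 2.2281),
    ("pretest + demographics",  0.9764, 0.8010, 5, 2.2622),
]
results = {}
for name, pre2, pre1, q, tc in designs:
    results[name] = power_crt2(0.25, 25, 8, 0.2386, pre2, pre1, q, tc)
    print(f"{name}: power ~ {results[name]:.3f}")
```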

Sources for more information

For more information on power analysis in designs with cluster sampling and group assignment, see:

Hedges, L. V., & Hedberg, E. C. (2007). Intraclass Correlation Values for Planning Group-Randomized Trials in Education. Educational Evaluation and Policy Analysis, 29, 60-87.

Hedges, L. V., & Rhoads, C. (2009). Statistical Power Analysis in Education Research (NCSER 2010-3006). Washington, DC: National Center for Special Education Research, Institute of Education Sciences, U.S. Department of Education. Available on the IES website at http://ies.ed.gov/ncser/pubs/20103006/pdf/20103006.pdf

Raudenbush, S. W. (1997). Statistical Analysis and Optimal Design for Cluster Randomized Experiments. Psychological Methods, 2, 173-185.

Raudenbush, S. W., & Liu, X. (2000). Statistical Power and Optimal Design for Multisite Randomized Trials. Psychological Methods, 5(3), 199-213.

Schochet, P. (2008). Statistical Power for Random Assignment Evaluations of Education Programs. Journal of Educational and Behavioral Statistics, 33, 62-87.

For more information on the data sources used to estimate these design parameters, see:

Miller, Jon D. Longitudinal Study of American Youth, 1987-1994, and 2007 [Computer file]. ICPSR30263-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2011-04-01. doi:10.3886/ICPSR30263.

U.S. Department of Education. Prospects: The Congressionally Mandated Study of Educational Growth and Opportunity. http://www2.ed.gov/pubs/prospects/index.html

U.S. Department of Education. Institute of Education Sciences. National Center for Education Statistics. Early Childhood Longitudinal Study (United States): Kindergarten Class of 1998-1999, Kindergarten-Eighth Grade Full Sample [Computer file]. ICPSR28023-v1. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 2011-05-27. doi:10.3886/ICPSR28023.

U.S. Department of Education. National Center for Education Statistics. National Education Longitudinal Study: Base Year Through Third Follow-Up, 1988-1994 [Computer file]. ICPSR version. Washington, DC: U.S. Department of Education, National Center for Education Statistics [producer], 1996. Ann Arbor, MI: Inter-university Consortium for Political and Social Research [distributor], 1999. doi:10.3886/ICPSR06961.

For questions regarding these instructions, or comments on the Web VA, please contact ARC at arc-info@norc.org.