1 Changing Distributions: How online college classes alter student and professor performance Eric Bettinger, Lindsay Fox, Susanna Loeb, and Eric Taylor CENTER FOR EDUCATION POLICY ANALYSIS at STANFORD UNIVERSITY cepa.stanford.edu
2 Online Higher Education
As of 2011, more than 3 in 10 US college students take at least one course online. Colleges are increasing their online presence by introducing completely online programs. These trends are separate from the fascination with, and discussion of, MOOCs. A key promise of online education is cost reduction. Little evidence exists on the effectiveness of online education or on how it changes the dynamics of traditional learning.
3 [Figure] (Allen and Seaman, 2013)
5 Still not much use of MOOCs
6 Growth in Online Programs
[Figure: continued growth overall and in many offerings, 2002 onward]
7 Why Might Online Results Differ?
Selection: Students who choose online may differ from those who don't
Format/Technology Effects: Access online at any time, from any place
Content Delivery
Interactions: Monitoring; Peers (Bettinger, Loeb, Taylor 2014); Professors (e.g., Bettinger and Long, 2010; Carrell and West, 2010; Figlio et al., 2013)
8 Small Research Literature
Xu and Jaggars (2011, 2013): negative effects on course completion and grades. Propensity score matching for introductory English and math courses in Virginia community colleges (2011) and a distance IV in Washington community colleges (2013); percentage points on completion, -0.34 on grades.
Hart, Friedman, Hill (2014): negative effects in California community colleges. Uses a series of fixed-effects models, including college-course fixed effects and individual fixed effects (~8.4 percentage points less likely to complete).
Streich (2013): negative effects on passing in community colleges. Student and course fixed effects and an instrument based on the percent of course seats offered online; students are percentage points less likely to pass an online class. The IV estimate falls 8.3 percentage points.
Figlio et al. (2013): online scores are worse for subgroups (males, Hispanics, and lower-achieving students). Random assignment into an economics course at U Florida.
Bowen, Chingos, Lack, Nygren (2014): no detectable difference between hybrid and traditional courses. Random assignment at 6 public institutions.
Streich: maybe positive employment effects. Compares pre- and post-college employment for older students. The benefit falls largely in the years immediately following initial enrollment, when students may still be enrolled. Earnings fall less during enrolled periods for students who enroll in online classes. Large fixed benefit but an insignificant dosage effect in the long run.
9 Particularly Interesting Group: For-Profits
Large and growing: Nearly 2.4 million undergraduate students (full-time equivalent) were enrolled at a for-profit college or university in the academic year; 18% of all associate degrees (Deming, Goldin, and Katz 2012).
Primarily non-traditional students: 30% previously attended college; 70% are starting college for the first time. 25% of prior enrollees are churning and have attended at least two other institutions before coming to the for-profit sector. They look like they are experimenting with for-profits as a destination or adding a course for credit. ~40% of all for-profit college students transfer to another college.
10 For-Profits Controversial
Recruiting practices, use of federal financial aid (GAO 2010). Gainful Employment regulations require institutions to show graduates meet strict income-to-debt ratios and loan repayment rates to maintain eligibility (Federal Register 2010).
Small literature on effects:
Cellini and Chaudhary (2012): students who enroll in associate's degree programs at for-profit colleges get earnings gains between 6 and 8 percent (95% confidence interval from -2.7 to 17.6 percent), not different from public community colleges. Students who complete associate's degrees at for-profits earn ~22 percent more, or 11 percent per year; some evidence that this is higher than the returns for public-sector graduates.
Deming et al. (2014): resume audit study suggests negative impacts.
11 Some Myths/Facts on Online Education (from Doyle, 2009)
Prediction 1: Online students will be very different from students taking courses on campuses. Not really true.
Prediction 2: Most students who enroll online will do so exclusively. No.
Prediction 3: Students will take classes online at distant universities. The median is only 15 miles from the student's home.
Prediction 4: For-profit higher education institutions will consist primarily of online education. At all types of institutions, online enrollment accounts for about 15 percent of the total.
12 What's New: This Study
Compares online with in-person courses (only a small literature, almost all on community colleges)
For-profit institution
Larger scale, with opportunities for identification
Assesses effects on variation in performance as well as means: standardization vs. monitoring
Research Questions
1. How do online courses affect students' performance in terms of course completion, grades, and later enrollment?
2. How do online courses affect the variance of student performance?
3. How do online courses affect the variance in professor effectiveness?
13 DeVry University
One of the largest for-profit institutions: DeVry enrolls over 100,000 undergraduates and 30,000 graduate students, and employs over 5,500 professors; ~5 percent of all for-profit undergraduates. It ranks 7th in enrollment among for-profit colleges. The largest institution, University of Phoenix, enrolls just over 10 percent and the 15th largest enrolls 1 percent; collectively the top 15 enroll 52 percent of students at for-profit colleges.
Began primarily as a technical school in the 1930s, but now most of the University's students major in business management, technology, health, or some combination; 80 percent are seeking a bachelor's degree.
Approximately 100 local campuses around the United States, where about 1/3 of courses take place. The other 2/3 of courses are conducted online. Geographic representativeness helps with identification.
15 Sample
All DeVry University undergraduate course enrollments, May 2009 through November 2013. Professors can only be identified for the May through November 2013 terms.
RQ1 (effects): full sample (n > 281,000 students)
RQ2 (student variance): only students who took both online and in-person courses
RQ3 (professor variance): only professors who taught both
Outcomes
Measure student success using course grades: each professor assigns traditional A-F grades, which we convert to the standard 0-4 point equivalents.
Measure student persistence in the subsequent semester and in the semester one year later.
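The letter-to-points conversion can be sketched as below. The slide states only that A-F grades map to the standard 0-4 scale, so the plus/minus increments shown are the conventional ones and are an assumption here.

```python
# Hedged sketch of the grade conversion described above. The +/- increments
# are the conventional 0-4 scale, assumed rather than taken from the study.
GRADE_POINTS = {
    "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D+": 1.3, "D": 1.0, "D-": 0.7,
    "F": 0.0,
}

def to_grade_points(letter: str) -> float:
    """Map a traditional letter grade to its 0-4 point equivalent."""
    return GRADE_POINTS[letter]
```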
16 Descriptive Statistics
                              All          Online       In-person
                              (1)          (2)          (3)
Observations
  Student-course-session      1,947,952    1,163,       ,389
  Students                    197,         ,198         99,636
  Professors                  5,287        2,588        3,366
  Courses
  Sections                    158,458      60,463       97,995
Student characteristics: Female, Age, Online
17 Descriptive Statistics (continued)
                                                   All        Online     In-person
Student prior academics
  Cumulative GPA (s.d.)                            (0.931)    (0.935)    (0.923)
  Failed any course
  Withdrew from any course
Student course outcomes
  Grade (0-4) (s.d.)                               (1.316)    (1.343)    (1.275)
  A- or higher
  B- or higher
  C- or higher
  Passed
Student persistence outcomes
  Enrolled next semester
  Credits attempted next semester (s.d.)           (4.745)    (4.646)    (4.734)
  Enrolled semester one year later
  Credits attempted semester one year later (s.d.) (5.765)    (5.498)    (5.925)
18 Methods RQ1
Basic model:
Y_icbt = β·Online_icbt + f(GPA_i,<t) + X_i δ + Γ_c + τ_t + ψ_b + ε_icbt
The outcome is a function of taking the course online, prior GPA (f), observable student characteristics X (gender, age), course fixed effects Γ, time fixed effects τ (27 terms), and home campus indicators ψ.
We assign home campus based on the physical distance between the student's home address and the local campus addresses.
Cluster standard errors at the section level.
19 Methods RQ1
But the choice of online vs. in-person is non-random: students who would do better overall may take one or the other, and students who would do better in this particular class with one format may choose it.
We use choice arising from idiosyncratic factors: distance and local in-person offerings.
Instruments:
(a) An indicator variable = 1 if student i's home campus b offered course c on campus in a traditional class setting during term t. Combined with the home campus fixed effects, ψbict, this limits the identifying variation to between-term, within-campus variation in the availability of an in-person option.
(b) The distance in miles (Haversine) between student i's home address and the nearest local campus (all students; students within 30 miles).
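The distance instrument uses the Haversine formula named on the slide; a minimal sketch (coordinate inputs and the Earth-radius constant are illustrative choices, not from the study):

```python
import math

EARTH_RADIUS_MILES = 3958.8  # mean Earth radius in miles

def haversine_miles(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle (Haversine) distance in miles between two points,
    e.g. a student's home address and the nearest local campus."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * math.asin(math.sqrt(a))
```

The restricted sample on the slide would then simply filter on this distance being at most 30 miles.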
20 Results RQ1
A. Instrument = Course available in person at home campus
Dependent variable: Course grade (A = 4 ... F = 0) (cols 1-2); Enrolled Next Semester (cols 3-4); Enrolled One Year Later (cols 5-6)
Local average treatment effect: *** *** *** *** *** ***
(standard errors): (0.011) (0.013) (0.004) (0.005) (0.005) (0.008)
First stage: coefficient on excluded instrument
(standard errors): (0.001) (0.001) (0.001) (0.001) (0.001) (0.001)
Sample mean (st. dev.) for dep. var.: (1.316)
21 Results RQ1
B. Instrument = Distance to home campus (excluding students more than 30 miles away)
Dependent variable: Course grade (A = 4 ... F = 0) (cols 1-2); Enrolled Next Semester (cols 3-4); Enrolled One Year Later (cols 5-6)
Local average treatment effect: *** *** *** *** ***
(standard errors): (0.030) (0.023) (0.007) (0.006) (0.010) (0.010)
First stage: coefficient on excluded instrument
(standard errors): (0.000) (0.000) (0.000) (0.000) (0.000) (0.000)
Sample mean (st. dev.) for dep. var.: (1.304)
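With a single binary instrument, a single endogenous regressor, and no covariates, 2SLS reduces to the Wald estimator cov(z, y)/cov(z, x). The sketch below illustrates that logic on simulated data; every number in the data-generating process is invented for illustration and is not from the study.

```python
import random
from statistics import mean

def wald_iv(y, x, z):
    """IV estimate with one instrument and one endogenous regressor:
    cov(z, y) / cov(z, x), numerically identical to 2SLS in this case."""
    zbar = mean(z)
    cov_zy = sum((zi - zbar) * yi for zi, yi in zip(z, y))
    cov_zx = sum((zi - zbar) * xi for zi, xi in zip(z, x))
    return cov_zy / cov_zx

# Invented data: z = 1 if an in-person section is offered at the home
# campus; lower-ability students (low u) select into online, so naive OLS
# would be biased. True causal effect of online on the grade is -0.4.
random.seed(0)
n = 20000
z = [int(random.random() < 0.5) for _ in range(n)]
u = [random.gauss(0, 1) for _ in range(n)]          # unobserved ability
x = [int(random.random() + 0.2 * ui < (0.35 if zi else 0.85))
     for zi, ui in zip(z, u)]                       # 1 = takes course online
y = [-0.4 * xi + 0.5 * ui + random.gauss(0, 1)
     for xi, ui in zip(x, u)]                       # grade (arbitrary scale)
beta_iv = wald_iv(y, x, z)                          # close to -0.4
```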
22 Robustness and Extensions
1. Availability might be predictable: some campuses offer courses in seemingly consistent patterns. Alternative instruments vary availability; results are robust to the different instruments.
2. Distributional effects: the grade distribution is discrete. Effects appear throughout the distribution and are stronger at the top. (With the distance instruments, effects appear stronger at the bottom.)
23 Robustness: Alternative Availability Instruments
Columns: Grade | At least A- | At least B- | At least C- | Passed | Withdrew
Panel B. IV using availability (current session)
  Online: *** *** *** *** *** 0.024***
  (standard errors): (0.008) (0.003) (0.003) (0.002) (0.002) (0.002)
Panel C. IV using availability (current, next, prior sessions)
  Online: *** *** *** *** *** 0.019***
  (standard errors): (0.011) (0.005) (0.004) (0.003) (0.002) (0.002)
Panel D. IV using availability (current, next 1 or 2, prior 1 or 2 sessions)
  Online: *** *** *** *** *** 0.017***
  (standard errors): (0.013) (0.006) (0.005) (0.004) (0.003) (0.003)
Panel E. IV using availability (current, next 1, 2, or 3, prior 1, 2, or 3 sessions)
  Online: *** *** *** *** *** 0.016***
  (standard errors): (0.014) (0.006) (0.005) (0.004) (0.003) (0.003)
24 Overview of RQ1: Relative Impact of Online Courses on Student Outcomes
Outcomes are unambiguously worse: grades are lower, and future enrollments are less intense and less likely to happen.
Who are the compliers?
                 Mean prior GPA   Proportion female   Mean age
                 (1)              (2)                 (3)
Compliers: will take the course in-person if offered, otherwise will take the course online
Always-takers: always take the course online
Never-takers: always take the course in-person
25 Methods RQ2 (variance of student outcomes)
Again, if students were randomly assigned to take a course online or in-person, the parameter of interest could be estimated by directly comparing the standard deviation of outcomes in online classes with that in in-person classes.
Here we instead use controls, estimating γ with an extension of our basic model for RQ1:
Y_icbt = β·Online_icbt + f(GPA_i,<t) + X_i δ + Γ_c + τ_t + ψ_b + η_i·Online_icbt + ν_i·(1 − Online_icbt) + ε_icbt,
with the assumption (η_i, ν_i) ~ N(0, diag(σ²_online, σ²_in-person)).
η_i and ν_i are student random effects; γ is the online vs. in-person difference in their standard deviations. We use maximum likelihood. The sample includes students who take both.
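The paper fits these student random effects by maximum likelihood. As a hedged illustration of the same idea, the sketch below recovers mode-specific student-effect variances from simulated data with a simple method-of-moments calculation (variance of student means, minus the sampling-noise share). All numbers in the simulation are invented, and the balanced design (K courses per mode per student) is a simplification.

```python
import random
from statistics import mean, pvariance

random.seed(1)
SD_ONLINE, SD_INPERSON, SD_NOISE = 0.6, 0.3, 0.5  # invented "truth"
K = 4  # courses per mode per student (simplification)

def simulate_student():
    """One student: a persistent effect per mode, plus course-level noise."""
    eta = random.gauss(0, SD_ONLINE)    # student effect in online courses
    nu = random.gauss(0, SD_INPERSON)   # student effect in in-person courses
    online = [eta + random.gauss(0, SD_NOISE) for _ in range(K)]
    inperson = [nu + random.gauss(0, SD_NOISE) for _ in range(K)]
    return online, inperson

students = [simulate_student() for _ in range(5000)]

def student_effect_variance(grades_by_student):
    """Method of moments: Var(student means) = Var(effect) + noise/K,
    so subtract the estimated within-student noise share."""
    means = [mean(g) for g in grades_by_student]
    within = mean(pvariance(g) * K / (K - 1) for g in grades_by_student)
    return pvariance(means) - within / K

var_online = student_effect_variance([s[0] for s in students])
var_inperson = student_effect_variance([s[1] for s in students])
gamma = var_online ** 0.5 - var_inperson ** 0.5  # SD gap, analogous to γ
```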
26 Variance in Student Outcomes
                                            Sample st. dev.   Online - in-person   P-value test   Student-course
                                            course grade      diff. (γ)            diff. = 0      observations
                                            (1)               (2)                  (3)            (4)
Instrumental variables estimate                                                                   1,947,952
Random effects estimate                                                                           1,947,952
Random effects estimate (only students
who took courses both online and in-person)                                                       1,023,610
27 Variance in Student Outcomes
Columns: Credits Next Semester | Credits Next Year | Grade
Panel A. IV model (all students)
  S.D. difference online vs. brick; standard error; p-values: [0.000] [0.328] [0.846]; N (student-by-course)
Panel B. Random coefficients model (all students)
  S.D. difference online vs. brick; likelihood ratio test difference = 0 (p-values): [0.000] [0.000] [0.000]; N (student-by-course)
Panel C. Random coefficients model (only students taking both)
  S.D. difference online vs. brick; likelihood ratio test difference = 0 (p-values): [0.000] [0.000] [0.000]; N (student-by-course)
28 Methods RQ3 (variance of professor effects)
Similar approach: replace the student random effects with random effects for professors, μ_jict, and for course sections, θ_sict:
Y_icbt = β·Online_icbt + f(GPA_i,<t) + X_i δ + Γ_c + τ_t + ψ_b + μ_j(online)·Online_icbt + μ_j(in-person)·(1 − Online_icbt) + θ_s(online)·Online_icbt + θ_s(in-person)·(1 − Online_icbt) + ε_icbt,
with the assumption that each random effect is normally distributed with mean zero and a mode-specific variance (σ²_μ,online, σ²_μ,in-person, σ²_θ,online, σ²_θ,in-person).
29 Variance in Professor Effectiveness
A. Professors who teach both in-person and online (cols 1-3); B. All professors (cols 4-6)
Columns in each panel: Course grade (A = 4 ... F = 0) | Enrolled Next Semester | Enrolled One Year Later
Standard deviation of professor effects:
  In-person classes
  Online classes
  In-person = online test p-value: [0.244] [0.255] [0.026] | [0.000] [0.354] [0.000]
Student-course observations
30 Variance in Professor Effectiveness
Columns: Grade: at least A- | at least B- | at least C- | Passed | Withdrew | Credits next semester | Enrolled next semester | Credits next year | Enrolled next year | Grade
Panel A. Professors teaching both in-person and online
  Brick (standard deviation); Online (standard deviation); likelihood ratio test difference = 0 (p-value); N (student-by-course)
Panel B. All professors
  Brick (standard deviation); Online (standard deviation); likelihood ratio test difference = 0 (p-value); N (student-by-course)
31 Heterogeneity in Impacts by Prior Achievement
Dependent variable: Course grade (A = 4 ... F = 0) (cols 1-2); Enrolled Next Semester (cols 3-4); Enrolled One Year Later (cols 5-6)
A. Instrument = Course available in person at home campus
  Online: *** 0.214*** 0.027*** 0.043*** 0.031*** ***
  (standard errors): (0.011) (0.013) (0.004) (0.005) (0.005) (0.008)
  Online × prior grade point avg. (centered at sample mean): ** 0.101***
  (standard errors): (0.032) (0.027) (0.009) (0.008) (0.006) (0.006)
32 Summary of Key Findings
Students perform consistently worse in online courses, as measured by academic performance and retention.
Variance in student performance is greater online.
But this is not driven by greater variance in professor performance; there is actually less variance across professors.
Next: a closer look at professor effects.
33 Density Differences Between Professors: COLL148
[Figure: professor effects on the sum of quiz scores and final, in student standard deviations, by professor]
Mean student score = 0.142; estimated 'value-added' = 0.075
Benchmarks: AFA Calculus 1 = 0.05; AFA Calculus 2 / K-12 math = 0.14
34 Density Differences Between Professors: PSYC110
[Figure: professor effects on the sum of quiz scores and final, in student standard deviations, by professor]
Mean student score = 0.137; estimated 'value-added' = 0.042
Benchmarks: AFA Calculus 1 = 0.05; AFA Calculus 2 / K-12 math = 0.14
35 Potential Implications
Student accountability and responsibility might explain the increased variance. Figlio (2013) found that students in online settings were more likely to procrastinate.
Course design could improve: DeVry embedded student incentives to participate in the course prior to our analysis. Motivated students likely succeed; procrastinating students drive failure.
Decreased professor variance can be both good and bad: great teaching and bad teaching are both stifled, and conformity increases. In other analysis, we can examine what percentage of the time faculty use the exact same language as other faculty, as evidence of conformity.
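The slides do not specify how "exact same language" is measured; one minimal, hypothetical way to score verbatim overlap between two professors' posts is the fraction of shared n-grams, sketched below.

```python
def ngrams(text: str, n: int = 5):
    """Set of n-word phrases in a post (lowercased, whitespace-tokenized)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def shared_language_fraction(post: str, reference: str, n: int = 5) -> float:
    """Fraction of one professor's n-word phrases appearing verbatim in
    another professor's post -- a rough, hypothetical conformity measure."""
    own = ngrams(post, n)
    if not own:
        return 0.0
    return len(own & ngrams(reference, n)) / len(own)
```

A score of 1.0 means every 5-word phrase in the post also appears in the reference post; 0.0 means no verbatim overlap at that phrase length.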
36 Professors' Consistent Behavior: COLL148 Discussion Threads
[Figure: professor behavior as a predictor; x-axis: day of course; y-axis: cumulative fraction logged in on or before]
37 Conformity in Speech, the Good
38 Conformity in Speech, the Bad
39 Context Seems to Matter for Conformity
Some evidence that conformity to bad modeling has negative outcomes:
One post in one course, frequently copied by other faculty, had significant typos (e.g., an acronym was spelled "Accronmy").
Students who were randomly assigned to professors who perpetuated this misspelling were 1% less likely to be enrolled the next semester.
Some professors corrected themselves, and their pre-correction conformity led to a 3.7% decline in enrollment the next semester.
40 Valuation of Failure
Some failure may be good:
Online courses may facilitate new enrollments, and we cannot distinguish whether the overall increase in new enrollments compensates for reduced effectiveness.
Online courses generate some cost savings due to less expensive faculty and decreased overhead. Cost savings may compensate for decreased effectiveness.
Future studies can shed light on this.