An Analysis of the NRC's Assessment of the Doctoral Programs in Public Affairs


An Analysis of the NRC's Assessment of the Doctoral Programs in Public Affairs
Göktuğ Morçöl & Sehee Han, Pennsylvania State University
Prepared for the NASPAA Annual Conference, November 2014, Albuquerque, NM

Background Information
We conducted analyses on the National Research Council's (NRC) Report on PhD Programs in Public Affairs (http://sites.nationalacademies.org/pga/resdoc/). We extracted the data from the NRC spreadsheet (http://www.nap.edu/rdp/).
Background on the NRC study: The NRC studied 5,004 doctoral programs in various fields at 212 universities. The data were gathered in 2005 and 2006, and the report was published in 2010. The Public Affairs category included 54 programs.

Next NRC study on PhD programs?
The NRC may conduct another study in the coming years: "There are some preliminary conversations within the NRC, and with our partners, about whether and how to conduct this study, but there are no firm plans on the table at the moment." (email message from a National Research Council representative)

Three Categories of Variables Used in the NRC Rankings
- Faculty productivity: publishing patterns, research funding, awards for scholarship
- Student characteristics: student support, completion rates
- Diversity of the academic environment: diversity among faculty and students
(Source: Jeremiah P. Ostriker, Paul W. Holland, Charlotte V. Kuh, & James A. Voytuk (Eds.), A Revised Guide to the Methodology of the Data-Based Assessment of Research-Doctorate Programs in the United States (2010), Committee to Assess Research-Doctorate Programs; http://www.nap.edu/catalog/12974.html)

Types of Rankings in the NRC Report
- Survey-based rankings (S rankings)
- Regression-based rankings (R rankings)
- Separate rankings for the three dimensions of program quality: research activity; student support and outcomes; diversity of the academic environment

S and R Rankings in the NRC Report
S RANKINGS: Based on a survey of faculty members at different institutions, who were asked to assign weights (importance) to 21 characteristics that the study committee determined to be factors contributing to program quality. The weights vary by field, based on the faculty survey responses in each field.
R RANKINGS: An index of the 21 program quality variables, based on weights calculated from faculty ratings of a sample of programs in their field. Multiple regression and principal components analyses were used to develop the index scores. (For more details, see the slides on methodology at the end.)
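The regression idea behind the R rankings can be sketched as follows: regress sampled faculty ratings on program characteristics to recover implicit weights, then score and rank every program with those weights. This is a simplified illustration with invented data (5 variables instead of 21, hypothetical preference weights), not the NRC's actual procedure, which also used principal components analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n_programs, n_vars = 40, 5                       # 5 variables instead of 21, for brevity
X = rng.normal(size=(n_programs, n_vars))        # standardized program characteristics
true_w = np.array([0.8, 0.3, 0.1, 0.5, 0.2])     # hidden rater preferences (invented)
ratings = X @ true_w + rng.normal(scale=0.1, size=n_programs)  # noisy faculty ratings

# Recover the implicit weights by least squares, then score and rank programs.
w_hat, *_ = np.linalg.lstsq(X, ratings, rcond=None)
scores = X @ w_hat
r_rank = np.argsort(np.argsort(-scores)) + 1     # rank 1 = highest-scoring program
print("recovered weights:", np.round(w_hat, 2))
```

With enough rated programs and modest noise, the recovered weights come close to the raters' hidden preferences, which is the premise that lets the R index be applied to programs the raters never saw.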

Question: What are the most important factors contributing to the NRC's S and R rankings?
The following tables and charts display the 5th percentile rankings of the programs: their best rankings after the top 5% of the 500 simulations were removed. (For more information about how the NRC calculated the percentile rankings, see the slides on methodology at the end.)

Most Important Factors Contributing to S and R Rankings

S RANKINGS                                     Pearson   Spearman
1. Research Activity (5th percentile)            0.92      0.93
2. Average # of Publications per Faculty        -0.76     -0.78
3. % Faculty with Grants                        -0.74     -0.74
4. Average GRE Scores                           -0.68     -0.70
5. Average Citations per Publication            -0.66     -0.72

R RANKINGS                                     Pearson   Spearman
1. Average GRE Scores                           -0.73     -0.72
2. % International Students                     -0.63     -0.67
3. Research Activity (5th percentile)            0.63      0.62
4. Is student work space provided?              -0.54     -0.55
5. Average # of Publications per Faculty        -0.50     -0.56

(Color coding in the original tables: red = student-related variable; black = faculty-related variable.)
Pearson and Spearman correlations are very similar. The biggest contributors to the S rankings are faculty-related factors; both student-related and faculty-related factors contributed to the R rankings.
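The Pearson and Spearman coefficients in these tables can be computed as in the minimal pure-Python sketch below. The program values are invented for illustration; the actual analysis used the NRC spreadsheet data.

```python
from statistics import mean, pstdev

def pearson(x, y):
    # Pearson r: covariance divided by the product of the standard deviations.
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def ranks(values):
    # Convert values to ranks 1..n (tie handling omitted for simplicity).
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for position, i in enumerate(order, start=1):
        r[i] = float(position)
    return r

def spearman(x, y):
    # Spearman rho is the Pearson correlation of the rank-transformed data.
    return pearson(ranks(x), ranks(y))

# Invented values: 5th-percentile R ranking vs. publications per faculty member.
r_rank = [3, 7, 12, 18, 25, 31, 40, 46]
pubs_per_faculty = [4.1, 3.5, 2.9, 2.6, 2.0, 1.8, 1.2, 0.9]

print(f"Pearson  r   = {pearson(r_rank, pubs_per_faculty):.2f}")
print(f"Spearman rho = {spearman(r_rank, pubs_per_faculty):.2f}")
```

Because a lower rank number is better, a negative correlation here means that programs with more publications per faculty member tend to be ranked higher, which is how the negative entries in the tables above should be read.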

Question: How do the most important faculty-related and student-related factors relate to the R rankings?

Top Two Faculty-Related Factors & R Rankings: 1. Research Activity (Cubic is the best fitting line.) Research activity does not pay off for some highly productive programs.

Top Two Faculty-Related Factors & R Rankings: 2. Faculty Publications (Quadratic is the best fitting line.) Faculty publications do not pay off for some highly productive programs.

Top Two Student-Related Factors & R Rankings: GRE Scores (Linear is the best fitting line.) GRE scores are linearly related to rankings.

Top Two Student-Related Factors & R Rankings: 2. International Students (Cubic is the best fitting line.) Some highly ranked programs have smaller percentages of international students!?
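The curve comparisons on these slides amount to fitting polynomials of increasing degree and seeing which describes the relationship best. A sketch with invented GRE and ranking values (not the NRC data); as always, residual error can only shrink as the degree grows, so in practice a higher-degree fit is preferred only when the improvement is substantial:

```python
import numpy as np

# Invented per-program values: average GRE score and 5th-percentile R ranking.
gre = np.array([150, 152, 154, 156, 158, 160, 162, 164], dtype=float)
rank = np.array([44, 38, 33, 27, 22, 16, 11, 5], dtype=float)

x = gre - gre.mean()  # center the predictor to keep the fits well-conditioned
residual_ss = {}
for degree, label in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
    coeffs = np.polyfit(x, rank, degree)      # least-squares polynomial fit
    pred = np.polyval(coeffs, x)
    residual_ss[label] = float(np.sum((rank - pred) ** 2))
    print(f"{label:>9}: residual SS = {residual_ss[label]:.2f}")
```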

Questions: Are there regional differences in R rankings? Is there a difference between public and private institutions in their R rankings?

Differences among U.S. Regions in R Rankings (sig. of F = .161; the regional differences are not statistically significant)

R Rankings of Public vs. Private Universities (sig. of T = .018; the difference is statistically significant)
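The public-vs-private comparison is a two-sample t test on the R rankings. A sketch with invented rank values, using Welch's t statistic (the unequal-variance variant; the slide does not say which variant was used). Remember that a lower rank number is better, so a positive t here means public programs carry worse (larger) rank numbers.

```python
from math import sqrt
from statistics import mean, variance

def welch_t(a, b):
    # Welch's t statistic for the difference between two group means
    # (does not assume equal group variances).
    return (mean(a) - mean(b)) / sqrt(variance(a) / len(a) + variance(b) / len(b))

# Invented 5th-percentile R rankings (lower number = better ranking).
public_ranks = [30, 35, 28, 40, 33, 38, 26, 31]
private_ranks = [12, 18, 9, 22, 15, 11]

t = welch_t(public_ranks, private_ranks)
print(f"Welch t = {t:.2f}")
```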

Question: Are there similarities between the NRC (doctoral) R rankings and the US News & World Report rankings of master s programs?

NRC Doctoral Rankings and US News Master's Degree Rankings

                                               NRC R Rank         NRC S Rank         US News Rank 2014
                                             Pearson  Spearman  Pearson  Spearman   Pearson  Spearman
US News Average Assessment Score
  in 2007 (n=31)                             -.573**  -.613**   -.447*   -.467**       --       --
US News Rank of Public Affairs
  Master's Programs in 2007 (n=31)            .568**   .613**    .379*    .467**     0.322    0.813
US News Rank of Public Affairs
  Master's Programs in 2014 (n=51)            .787**   .798**    .665**   .670**       --       --

** Correlation is significant at the 0.01 level (2-tailed). * Correlation is significant at the 0.05 level (2-tailed).
NRC and US News rankings are correlated, and the Spearman correlations are higher. US News rankings are consistent over the years.

US News Rankings of Master s Programs (2014) and NRC Rankings of PhD Programs (2005) NRC (2005) and US News (2014) rankings are quite linearly related.

Conclusions
Both faculty productivity and student characteristics matter in the NRC rankings. Faculty productivity contributes more to the survey-based (S) rankings: faculty members seem to be rating their colleagues' productivity when they rate other programs. The NRC report notes: "Research activity is the dimensional measure that most closely tracks the overall measures of program quality, because in all fields, both the survey-based or direct measure based on abstract faculty preferences and the regression-based measure also puts high weight on the measures of research productivity in addition to the measure of program size." (Source: A Revised Guide to the Methodology of the Data-Based Assessment of Research-Doctorate Programs in the United States; http://www.nap.edu/catalog/12974.html)

Conclusions Private universities rank significantly higher than public universities. NRC rankings of doctoral programs are highly correlated with US News rankings of master s programs.

Thank you.

The following slides are about the methodology of the NRC study.

Categories of variables that were weighted by survey participants

Explanation of percentile rankings (S and R rankings)
For every program variable, two random values are generated: one for the data value and one for the weight. The products, summed across the 21 variables, are then used to calculate a rating, which is compared with other programs' ratings to produce a ranking. The uncertainty in program rankings is quantified, in part, by calculating the S ranking and R ranking of a given program 500 times, each time with a different, randomly selected half-sample of respondents. The resulting 500 rankings are numerically ordered, and the lowest and highest five percent are excluded. The 5th and 95th percentile rankings in the ordered list of 500 define the range of rankings shown in the tables.
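The half-sample procedure described above can be sketched as follows: rate one hypothetical program 500 times, each time with a random half-sample of rater weights, then trim the top and bottom 5% of the resulting ratings to obtain a 90% range. The program values and rater weights below are invented for illustration.

```python
import random

random.seed(42)
N_VARS = 21

# One invented program: standardized values on the 21 quality variables.
program = [random.gauss(0, 1) for _ in range(N_VARS)]
# An invented pool of 100 raters, each with a weight for every variable.
raters = [[random.random() for _ in range(N_VARS)] for _ in range(100)]

ratings = []
for _ in range(500):
    half = random.sample(raters, len(raters) // 2)               # random half-sample
    avg_w = [sum(w[j] for w in half) / len(half) for j in range(N_VARS)]
    ratings.append(sum(w * v for w, v in zip(avg_w, program)))   # weighted-sum rating

ratings.sort()
lo, hi = ratings[25], ratings[474]  # trim bottom/top 5% of 500 -> 5th/95th percentiles
print(f"90% rating range: [{lo:.3f}, {hi:.3f}]")
```

Converting each of the 500 ratings to a ranking against the other programs, then trimming in the same way, yields the range of rankings the NRC reports instead of a single point estimate.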

Explanation of percentile rankings (direct quotes from the NRC report)
"Because of the various sources of uncertainty, which are discussed at greater length in Appendix A, each ranking is expressed as a range of values. These ranges were obtained by taking into account the different sources of uncertainty in these ratings (statistical variability from the estimation, program data variability, and variability among raters). The measure of uncertainty is expressed by reporting the end points of a range that includes 90 percent of all the ratings for a program. These are the 5th percentile point and the 95th percentile point. We obtain both the survey-based weights and coefficients from regressions through calculations carried out 500 times, each time with a different randomly chosen set of faculty, to generate a distribution of ratings that reflects their uncertainties. For both the S and the R rankings, we obtain the range of rankings for each program by trimming the bottom five percent and the top five percent of the 500 rankings to obtain the range that includes 90 percent of the program's rankings. This method of calculating ratings and rankings takes into account variability in rater assessment of what contributes to program quality within a field, variability in values of the measures for a particular program, and the range of error in the statistical estimation. It is important to note that these techniques give us a range of rankings for most programs. We do not know the exact ranking for each program, and to try to obtain one by averaging, for example, could be misleading, because we have not imposed any particular distribution on the range of rankings." (Source: A Revised Guide to the Methodology of the Data-Based Assessment of Research-Doctorate Programs in the United States (2010), http://www.nap.edu/catalog/12974.html, pp. 17-18)

Summary of the methods used in calculating the S and R rankings

A more detailed view of methods of calculating R and S rankings

An even more detailed view of the methods of calculating R and S rankings

An example of calculations of R ratings (Source: Revised methodology guide, p. 22)

An example of calculations of R ratings, continued (Source: Revised methodology guide, p. 22)