Performance Management as Evaluation. Prof. Jeffrey Smith, Department of Economics, University of Michigan




Performance Management as Evaluation
Prof. Jeffrey Smith, Department of Economics, University of Michigan
Workshop on Market Mechanisms in Active Labor Market Policy
October 24, 2008, IAB, Nürnberg

Introduction
- Thanks! English / German
- Focus mainly on performance measurement among the trinity of topics for the workshop
- Link performance measurement to program evaluation
- Provide a strong and perhaps provocative critique to stimulate discussion

Defining performance management
- Part of the "reinventing government" movement
- Common to both the Clinton and Bush II administrations
- Providing a bottom line where none exists
- Use of short-term measures to provide feedback and allocate rewards
- James Q. Wilson and Bureaucracy: why is the government performing these tasks?

Example of a performance management system
- US Job Training Partnership Act
- Federal, state, and local system
- Performance measures largely outcome levels
- Regression-adjusted performance targets
- Budgetary rewards

Other examples of performance management systems
- US Workforce Investment Act
- US Small Business Administration loans
- Canada ALMP
- Canada CSLP

Motivations for performance management
1. Align agent actions with principal preferences rather than agent preferences (Ex: serving the hard-to-serve versus those with large impacts)
2. Increase agent effort
3. Quick and dirty evaluation
The literature focuses mainly on the first motivation. Why?

Alternative solutions to principal-agent issues
- Solutions differ for preference alignment and effort
- Close monitoring (for effort) or detailed regulations (for choices); but what about local knowledge, as in Hayek (1945)?
- Hire line workers who share the principal's preferences about whom to serve, how to serve them, and/or effort
- Develop a professional culture among line workers
- These alternative solutions do not address the evaluation issue

Issues from the principal-agent literature
- Rewards should relate to agent control over outcomes (Ex: ALMP and the unemployment rate; CSLP and the fraction with PSE)
- What is measured is rewarded; this matters when there are many dimensions to program goals
- Measures should be closely related to the objects of interest
- It should be easier to improve the measure by engaging in the desired activity than by engaging in strategic manipulation

Quick and dirty evaluation
- Want to monitor in real time for rapid feedback
- Want to monitor in real time for rapid course correction
- Low cost: want to spend money on services, not on evaluation
- Simplicity increases caseworker buy-in
- Simplicity increases public understanding
- Ex: school and hospital report cards in the US

Serious evaluation
- Estimate the discounted stream of impacts relative to the counterfactual of non-participation (or later participation?)
- Experiments
- Econometric evaluation via e.g. matching
- Consider all relevant outcomes (and monetize as required)
- Subtract off costs (including excess burden)
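The arithmetic behind "serious evaluation" can be sketched in a few lines: discount a stream of per-period impacts and subtract program costs, grossed up for the excess burden of taxation. All numbers below are invented for illustration; `discounted_net_impact` is a hypothetical helper, not part of any actual evaluation system.

```python
def discounted_net_impact(impacts, cost, discount_rate, excess_burden=0.0):
    """NPV of a stream of per-period impacts minus (tax-adjusted) costs."""
    pv_impacts = sum(
        impact / (1.0 + discount_rate) ** t
        for t, impact in enumerate(impacts)
    )
    # Gross up costs for the deadweight loss of the taxes that fund them.
    return pv_impacts - cost * (1.0 + excess_burden)

# Ten years of a 500/year earnings impact, a 3,000 up-front cost,
# a 3% discount rate, and a 20% excess burden (all made-up numbers).
npv = discounted_net_impact([500.0] * 10, 3000.0, 0.03, excess_burden=0.2)
print(round(npv, 2))  # → 793.05
```

Note how the verdict flips with the inputs: at a 10% discount rate or with impacts that fade after a few years, the same program has a negative net impact, which is why the long-run horizon and the cost side matter.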

Issues with serious evaluation
- Costly to produce; requires expensive econometricians, etc.
- Long-term impacts take a long time to construct
- Cannot provide rapid feedback
- Non-experimental methods are difficult for policy-makers to assess
- Competing non-experimental estimates with different implications
- Cost measures complex due to fixed costs and other funding streams

Performance measures: short-run outcome levels
- Almost universally used in performance systems
- Ex: employment 13 weeks after termination
- Outcomes may not equal impacts
- Short run != long run
- Ignores costs
- Can be manipulated via cream-skimming or timing of termination
- The same points hold with counts, plus an incentive to serve too many

Before-after differences
- Now used in the US WIA system
- Before-after differences may not equal impacts
- Short run != long run
- Ignores costs
- Can be manipulated via selective enrollment
- Misleading positive values due to Ashenfelter's dip
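The Ashenfelter's dip problem can be illustrated with a minimal simulation, using invented earnings numbers: participants enroll after a transitory bad draw, so even a program with zero true impact shows a large positive before-after difference once the shock mean-reverts.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000

# Permanent earnings plus transitory shocks; people enroll after a bad
# draw, so pre-program earnings are unusually low (Ashenfelter's dip).
permanent = rng.normal(5000, 1000, n)
pre = permanent + rng.normal(-1500, 500, n)   # dip at enrollment
post = permanent + rng.normal(0, 500, n)      # shock mean-reverts

# The program does NOTHING in this simulation (true impact = 0),
# yet the before-after difference is around +1,500.
before_after = (post - pre).mean()
print(round(before_after, 1))
```

The positive number is pure mean reversion, not an impact, which is exactly why a before-after performance measure rewards selective enrollment of people at the bottom of a dip.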

Impacts and performance measures: conceptual issues
- Correlate estimated impacts with observed values of performance measures at the individual level
- Estimated impacts from subgroup variation in experiments (a regression of outcomes on X, D, and interactions between X and D)
- Issue: low variation in impacts among subgroups
- Issue: subgroup impacts often imprecisely measured
- Issue: impacts may vary on other dimensions
- Issue: does not capture effects on effort (performance measurement increases impacts for all groups)
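The interacted regression described above can be sketched on simulated experimental data (all parameters invented): regress the outcome Y on a constant, a subgroup indicator X, random assignment D, and D*X, then form person-specific impact estimates from the D and D*X coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
x = rng.integers(0, 2, n)   # subgroup indicator, e.g. prior employment
d = rng.integers(0, 2, n)   # random assignment to treatment

# True model: baseline 10, subgroup shift 2, impact 1.5 for X=0
# and 1.5 + 1.0 = 2.5 for X=1, plus noise.
y = 10 + 2 * x + 1.5 * d + 1.0 * d * x + rng.normal(0, 4, n)

# OLS of Y on a constant, X, D, and the D*X interaction.
design = np.column_stack([np.ones(n), x, d, d * x])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)

# Person-specific impact estimate: D coefficient plus interaction * X_i.
impacts = coef[2] + coef[3] * x
print(coef[2], coef[3])  # should be near the true 1.5 and 1.0
```

With only a binary X, `impacts` takes just two values, which makes the slide's first issue concrete: subgroup variation delivers very coarse person-specific impacts, and each subgroup coefficient carries sampling noise on top.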

Impacts and performance measures: evidence
- National Job Training Partnership Act Study (Barnow 1999; Heckman, Heinrich and Smith 2002): no consistent relationship between performance measures and experimental impacts
- National Job Corps Study (Burghardt and Schochet 2001): no relationship between center-level impacts and center-level measured performance
- The rest of the literature (see the Barnow and Smith survey) finds no consistent relationship (with one exception)

Sample sizes and the limits of performance management
- Cells defined by site, subgroup, and service type may be quite small
- Even cells defined by only one or two of these may be small
- Some outcomes, such as earnings and wages, have high variances
- These facts imply that disaggregated performance measures may have a large noise component
- But disaggregated measures are the most useful for management
- May be over-selling what these measures can do; see the literature on classroom-level test score averages in the US
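A small simulation (with made-up earnings parameters) shows how much noise small cells inject: give 50 sites identical true performance, measure each from a cell of 25 participants with a high-variance earnings outcome, and the measured "performance" still spreads over thousands of dollars.

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 sites with the SAME true mean quarterly earnings (no real
# differences), each measured from a cell of only 25 participants.
n_sites, cell_size = 50, 25
true_mean, sd = 4000.0, 3000.0   # invented earnings parameters

site_means = rng.normal(true_mean, sd, (n_sites, cell_size)).mean(axis=1)

# Despite identical true performance, measured performance spreads
# widely, so a ranking of sites mostly ranks sampling noise.
spread = site_means.max() - site_means.min()
print(round(spread, 1))
```

The standard error of each cell mean is 3000 / sqrt(25) = 600, so the best- and worst-"performing" sites typically differ by well over 2,000 even though nothing real separates them.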

Serious impacts in the short run?
- MTI = Medium Term Indicators in Canada
- Years of effort (consultant full employment)
- Matching in real time
- Can serious evaluation really be automated?
- Tradeoff between sophistication and automation / clarity
- On-going randomization, with matches ("mit Treffer")?

Can customer satisfaction measures do the job?
- Some measures are like participant self-evaluations
- Widely used in evaluations; given top billing when the econometric estimates are weak
- Included in some performance management systems, e.g. US WIA
- Customers potentially include clients and firms
- Customers have information that evaluators do not
- Good optics and nominal satisfaction illusion

Example questions
- Did the program help you get a job?
- Would you recommend the program to your best friend?
- How satisfied were you with the services you received from the program?

Evidence on customer satisfaction measures in ALMP
- There is almost no evidence linking questions to impacts
- There is no evidence comparing alternative questions
- Are individuals good evaluators? More precisely, can individuals construct the required counterfactual?
- The story of the ineffective study intervention

Econometric evidence: where it comes from
- Key issue: obtaining person-specific impacts
- Smith, Whalley and Wilcox (2006) and related work with Tanya Byker and Sebastian Calonico
- Compare person-specific impacts based on subgroup variation in experimental impacts to self-evaluation measures
- Data from the US JTPA Study, Connecticut Jobs First, and the National Supported Work Demonstration
- A parallel analysis uses person-specific impacts from a quantile treatment effects framework under rank preservation

Econometric evidence: findings
- JTPA: no correlation between impacts and self-evaluations; self-evaluations predicted by outcomes, changes, and sites
- Connecticut Jobs First (work in progress): some evidence of a weak positive correlation
- National Supported Work (work in progress): some evidence of a weak positive correlation
- Summary: existing customer satisfaction questions are not a good proxy for impacts

Alternative customer satisfaction measures
- Existing evidence: bad idea or bad questions?
- Would questions that ask directly about counterfactuals do better?
- Ex: If you had not participated in the program, what is the probability that you would be employed today?
- Recent survey methods work on how to ask about probabilities would help here

What are customer satisfaction measures good for?
- Ask about things the customer directly experiences (Ex: friendly staff, extra visits, waiting time, mistakes)
- Do not ask about things that require a counterfactual (either implicitly or explicitly)
- Use customer satisfaction measures to make relative comparisons and so avoid nominal satisfaction illusion
- Think about regression adjustment for characteristics of the eligible population and the local economic environment?

Concluding thoughts
What we do not know:
- What problem should performance management solve?
- Are there alternatives that would do a better job?
- What are the effects of current performance systems?
- How to stop governments from misleading the public?
What we do know:
- Performance measures are a very poor proxy for impacts (schlecht! bad!)
- Quick and dirty evaluation may be worse than nothing
- Customer satisfaction methods are not (currently) a solution