Methodological Issues in Comparing Hospital Performance: Measures, Risk Adjustment, and Public Reporting
Harlan M. Krumholz, MD
Yale University School of Medicine
July 31, 2015
2015 National Forum on Pay for Performance
Comparison of health care spending in Taiwan and the world's major countries
NHI public satisfaction ratings 1995-2014
Quality of Expat Health in Taiwan
Cheng, 2015
It would be great if Prof. Krumholz could share the US experience in developing hospital report cards regarding:
1. How to select appropriate indicators/performance metrics to better represent the performance of hospitals.
Avedis Donabedian (1919-2000): Published "Evaluating the Quality of Medical Care" in 1966; separates quality measures into 3 categories: Structure, Process, Outcome.
Types of Quality Measures: Structure (assumes good medical care will follow), Process (assesses whether good medical care has been applied), Outcome.
Structure Measures Pros & Cons. Pros: measure important foundational concepts; easy to measure and compare. Cons: relationship to quality is assumed; improving performance may not improve outcomes.
Structure Measures: Define the presence or absence of specific care resources. Include: facilities and equipment (number of beds, EHR capabilities); professional and organizational resources (licensure and certification, Joint Commission accreditation); policies and procedures; registry participation.
Process Measures: Assess whether or not specific care is performed. Include: clinical and preventive care; CMS Core measures.
Process Measures: Examples applicable to hospitals: aspirin administration to heart attack patients; timing of antibiotic initiation for pneumonia.
Process Measures Pros & Cons. Pros: evolve from evidence-based practice guidelines; combine consensus-based information with clear instructions for improving care. Cons: can be burdensome to collect; exclude some patients from measurement; may not correlate well with outcomes.
Outcome Measures: Assess what actually happens to the patient (survival, unintended treatment effects, symptom relief). Include: acute clinical events, health status, patient experience of care.
Outcome Measures: Examples applicable to hospitals: 30-day all-cause mortality and readmission following AMI, heart failure, and pneumonia; H-CAHPS Survey.
Outcome Measures Pros & Cons. Pros: measure what matters most to patients. Cons: must be influenced by care; require adequate risk adjustment to account for differences in case mix; influenced by many factors; more difficult to measure.
What makes a good outcome measure? A well-thought-out measure concept; consideration of potential applications in design; measure specifications that adhere to accepted standards.
A well-thought-out measure concept: Addresses measurement and performance gaps (measurement gap: not currently measured; performance gap: variation in performance not explained by variation in patient case mix). Considers feasibility, usability, reliability, and validity.
Considering potential applications in design: clinical care, quality improvement, accountability.
Measure specifications: Population to be measured (cohort); result to be measured (outcome of interest); outcome attribution (who is responsible); risk adjustment (to be covered in "Why risk adjust?").
Measure Cohort: Reliably identifiable; clinically coherent; sufficient sample size, depending upon the application.
Measure Outcome: Reliably identifiable; standard measurement period.
Outcome Attribution: Ownership of care; concept of shared responsibility.
2. The mechanism for selecting risk factors for each outcome indicator, especially how to build consensus among medical professionals.
The observed outcome reflects hospital care, patient status on presentation, and other effects.
Patient status on presentation defines the expected outcome.
Expected Outcome versus Observed Outcome
Purpose of risk adjustment: To help define the EXPECTED OUTCOME; to level the playing field; to account for patient-level factors that influence the outcome but do not reflect care quality.
Avoid risk factors that: represent complications of care; represent healthcare system attributes; will potentially mask disparities.
3. The statistical models used for risk adjustment, and their strengths and weaknesses.
Approach to risk adjustment: Account for patient case mix; account for clustering of patients at the measurement unit (hospital, group practice, etc.). Avoid risk factors that represent complications of care, represent healthcare system attributes (e.g., discharge disposition), or will potentially mask disparities (e.g., SES or race).
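A minimal sketch of this approach, assuming a random-intercept (hierarchical) logistic regression as the risk model: patient-level covariates enter as fixed effects, and a hospital-specific random intercept accounts for the clustering of patients within hospitals. The covariates (age, prior_mi), the synthetic data, and the statsmodels variational-Bayes fit are illustrative assumptions, not CMS's actual specification.

```python
# Sketch: random-intercept logistic regression for 30-day mortality.
# Patient case mix = fixed effects; hospital = random intercept (clustering).
# All variable names and data below are synthetic and illustrative.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.normal(70, 10, n),                    # patient-level risk factor
    "prior_mi": rng.integers(0, 2, n),               # patient-level risk factor
    "hospital": rng.integers(0, 20, n).astype(str),  # measurement unit (cluster)
})
# Simulate mortality that rises with age and prior MI, plus a hospital effect.
hosp_effect = dict(zip(map(str, range(20)), rng.normal(0, 0.3, 20)))
logit = -8 + 0.08 * df["age"] + 0.5 * df["prior_mi"] + df["hospital"].map(hosp_effect)
df["died_30d"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fixed effects capture case mix; the "hospital" variance component captures
# between-hospital variation not explained by patient factors.
model = BinomialBayesMixedGLM.from_formula(
    "died_30d ~ age + prior_mi",
    {"hospital": "0 + C(hospital)"},
    df,
)
result = model.fit_vb()  # variational Bayes approximation
print(result.summary())
```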
4. The issues regarding data quality, especially when claims data are the major source for measuring performance.
5. The policies/approaches that help medical professionals and consumers understand report cards, especially the results of adjusted outcomes.
Aspects of outcome measures specific to accountability: Higher threshold of scientific reliability and validity; must be fair to measured entities; should assign performance categories conservatively, with a high degree of confidence; must define the comparator.
CMS's hospital AMI 30-day all-cause mortality measure: Reported as a risk-standardized mortality rate (RSMR); measures all-cause mortality after AMI admissions; captures deaths within 30 days of admission; publicly reported on HospitalCompare.gov.
Risk-standardized mortality rate: RSMR = (hospital's predicted deaths / hospital's expected deaths) × national unadjusted mortality rate. Predicted = number of deaths within 30 days predicted on the basis of the hospital's performance with its observed case mix. Expected = number of deaths expected on the basis of the nation's performance with that hospital's case mix.
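A minimal sketch of the RSMR arithmetic described above; the input values below are hypothetical and would, in practice, come from the fitted hierarchical risk model and the national data.

```python
# Sketch: RSMR = (hospital's predicted deaths / hospital's expected deaths)
#                * national unadjusted mortality rate.
def rsmr(predicted_deaths: float, expected_deaths: float, national_rate: float) -> float:
    """Risk-standardized mortality rate for one hospital."""
    return (predicted_deaths / expected_deaths) * national_rate

# Hypothetical example: 24 deaths predicted from this hospital's own performance
# with its case mix, 30 deaths expected if the nation had treated that case mix,
# and a 15% national unadjusted 30-day mortality rate.
print(f"RSMR = {rsmr(24, 30, 0.15):.3f}")  # 0.120, i.e., 12.0% (better than 15%)
```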
Performance categories for public reporting: Hospitals are categorized as "Better," "Worse," or "No Different" than the national rate; a 95% interval estimate (analogous to a 95% confidence interval) is used to define the categories; this is a measure of relative performance.
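A minimal sketch of the categorization rule described above, assuming the 95% interval estimate for each hospital's RSMR is already available; the interval bounds and national rate below are hypothetical.

```python
# Sketch: classify a hospital by comparing its 95% interval estimate for the
# RSMR against the national rate. The whole interval must fall below (above)
# the national rate for a "Better" ("Worse") designation.
def categorize(rsmr_lower: float, rsmr_upper: float, national_rate: float) -> str:
    if rsmr_upper < national_rate:
        return "Better than the national rate"
    if rsmr_lower > national_rate:
        return "Worse than the national rate"
    return "No different than the national rate"

print(categorize(0.10, 0.14, 0.15))  # Better
print(categorize(0.13, 0.18, 0.15))  # No different
print(categorize(0.16, 0.21, 0.15))  # Worse
```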
6. Advice to accelerate/promote public reporting of hospital performance.