Methodological Considerations When Evaluating the Implementation of Large-Scale Electronic Health Record Systems
Amirhossein Takian, M.D., Ph.D.
AcademyHealth ARM 2012, Orlando, USA
EVALUATIONS ARE NEEDED TO TEST THE PROMISE OF HEALTHCARE INFORMATICS
From Medicine's Dickensian Past to Healthcare's Utopian Future: modernising, informating, and integrating through the latest IT policy.
The Old System: inconsistent, error-prone, fragmented, inefficient, doctor-centred, reactive.
The New System: evidence-based, safe, connected, accountable, patient-centred, proactive.
There is a pressing need to put the claims made for healthcare informatics to the test.
EHR EVALUATIONS
The scope of the English NPfIT (National Programme for IT)
The EHR delivery structure in England (Robertson et al. 2010)
Overview of our dataset (Sheikh et al. 2011)
Total no. of site interviews (by WP): 498 (WPs 1-3: 310; WP4: 36; WP5: 60; WP6: 37)
Hours of on-site observations: 590
No. of other site documents: 498
Other data collected: 38 sets of field notes; 130 CLICS surveys; 4,684 outpatient surveys
5 Key Challenges of National EHR Evaluations
1. How to make evaluation of national EHR implementations less of an afterthought?
2. How to ensure the independence of such evaluations?
3. Which methodologies/study designs should we be employing?
4. Should these evaluations be predominantly formative, summative, or both?
5. How to meaningfully synthesise findings across such complex, context-bound interventions/evaluations?
1. Evaluation as afterthought or forethought?
Evaluations of EHRs still tend to be an afterthought, particularly in the context of national, politically driven implementations.
In the context of NPfIT, such evaluations were forced on the government by UK academics.
These evaluations were therefore commissioned in haste, and still only after key decisions had been taken, procurements made, etc.
Can evaluations be moved onto the front foot?
2. The independence of evaluations
Our evaluation was commissioned by NHS CFHEP, a semi-independent body at the University of Birmingham, funded by the DH.
Although independent, we were still dependent on government bodies to recruit hospitals, obtain relevant documentary evidence/information, etc.
This was, however, not always possible: there were concerns about the usefulness of independent evaluations, parallel government evaluations, and commercial confidentiality.
There were also major concerns about yet more unwelcome publicity.
3. Methodological considerations
RCTs are the gold standard for effectiveness, but inherently less suitable for studying safety.
Conventional designs are rooted in a positivist ontology, assuming a single truth that can be uncovered by controlling for confounding variables.
How suitable and feasible are such approaches in the context of a national EHR implementation that will never be repeated?
How practical is it to control for changes in policy, leadership, government, and economic climate?
OUR ADOPTED SOCIO-TECHNICAL VIEW (Cornford et al. 1994)
Three dimensions (Structure, Process, Outcome) are examined from three perspectives: System Functions, Human Perspectives, and The Health Care System.
Structure:
- System Functions (Technology): What has been put in place? What does the intervention look like?
- Human Perspectives (Work Organisation): What new work practices emerge?
- The Health Care System (Role of Medicines): What is the role of the intervention in the wider healthcare system?
Process:
- System Functions (Processing): What is done that is different? What processes emerge and change?
- Human Perspectives (Social Interactions): How do communication patterns and workflow change?
- The Health Care System (Management of Care Delivery): What organisational changes emerge?
Outcome:
- System Functions (Validity of Processing): What has been achieved? Is the service safe and reliable?
- Human Perspectives (Quality of the Service): Is the experience of the service satisfactory?
- The Health Care System (Potential for Change): How might the intervention be used in the future?
4. Formative or summative evaluations?
Most experimental studies are summative, with interim analyses discouraged or penalised.
Our plan at the outset was to undertake both formative and summative evaluation.
Despite requests from participating hospitals, funders discouraged us from providing any formative feedback.
Given that these national experiments will never be repeated, it can be argued that formative evaluation is, if anything, the more important.
5. Synthesising findings in national EHR evaluations
Systematic reviews can be used, but simplistic attempts at meta-analysis are out of the question.
Rather, there is a great need to consider realist synthesis techniques, which aim to understand the relevant context-mechanism-outcome (CMO) configurations.
Such CMO configurations offer insights into the likely transferability/generalisability of findings across settings.
Catalysts are needed for lessons to be shared and learned.
SO, TO CONCLUDE...
National EHR programmes cannot be dissociated from the national social and political context, nor from local contexts.
Implementation does not follow a standard pattern; national implementations are unique!
There should be no standard way to evaluate implementation.
There is a need to balance positivist and interpretive methods.
There is a need for reflexivity and adaptability to avoid reductionism.
A willingness to embrace new ontological, epistemological, and methodological assumptions:
- Evaluation based on an appropriate and meaningful framework?
- A shift from value judgements to exploring and interpreting
- A move from hypothetico-deductive studies to contextualised studies
But the question arises: when does evaluation end?
Questions/comments
Amir Takian, M.D., Ph.D.
School of Health Sciences & Social Care, Brunel University London
Room 112, Mary Seacole Building, Uxbridge, UK
E: amir.takian@brunel.ac.uk