The Effect of EHR on Hospital Quality


Beyond Adoption: Does Meaningful Use of EHR Improve Quality of Care?

Yu-Kai Lin, Mingfeng Lin, Hsinchun Chen
University of Arizona

Abstract

Electronic health record (EHR) systems hold great promise in transforming healthcare. The existing empirical literature has typically focused on adoption, and has found mixed evidence on whether EHR improves care. The federal initiative for meaningful use (MU) of EHR aims to maximize the potential for quality improvement, yet there is little empirical study of the initiative's impact and, more broadly, of the relation between MU and quality of care. Leveraging features of the Medicare EHR Incentive Program for exogenous variation, we examine the impact of MU on healthcare quality. We find evidence that MU significantly improves quality of care. More importantly, this effect is greater in historically disadvantaged hospitals such as small, non-teaching, or rural hospitals. These findings contribute not only to the literature on health IT, but also to the broader literature on IT adoption and the business impacts of IT.

Keywords: Meaningful use, MU, electronic health records, EHR, quality of care

This Version: November 24th

1. Introduction

Researchers, observers, patients, and other stakeholders have long deplored the woeful condition of the U.S. healthcare system (Bentley et al. 2008; Bodenheimer 2005; IOM 2001). While the nation's healthcare expenditure accounts for almost 18% of gross domestic product (GDP), about 34% of national health expenditures (or $910 billion) are considered wasteful (Berwick and Hackbarth 2012). Meanwhile, preventable medical errors cause nearly 100,000 deaths and cost $17.1 billion annually (IOM 2001; Bos et al. 2011). To address these issues, researchers and policy-makers have been advocating the adoption and use of Health Information Technology (HIT), with the hope that it will help transform and modernize healthcare (Agarwal et al. 2010).

An important element of the HIT initiatives is the implementation of Electronic Health Records (EHR). A full-fledged EHR system contains not only patient data but also several interconnected applications that facilitate daily clinical practice, including patient record management, clinical decision support, order entry, safety alerts, and health information exchange, among others. An EHR with a clinical decision support system (CDSS) can implement screening, diagnostic, and treatment recommendations from clinical guidelines so as to enable evidence-based medicine (Eddy 2005). Similarly, the computerized physician order entry (CPOE) functionality in EHR can detect and reduce safety issues involving over-dosing, medication allergies, and adverse drug interactions (Ransbotham and Overby 2010).

However, until recently most U.S. hospitals and office-based practices had been slow to adopt EHR systems. Jha and colleagues (2009) reported that, in a national survey, less than 10 percent of hospitals had an EHR system. Similarly, DesRoches et al. (2008) found a 17 percent adoption rate of EHR in office-based practices. More importantly, studies have found mixed evidence on whether the adoption of EHR improves quality of care (Black et al. 2011; Himmelstein et al. 2010), which further casts doubt on the benefits of adopting EHR. One potential explanation for the mixed effects of EHR adoption is that hospitals may not actually be taking advantage of EHR even if the system has been installed (Devaraj and Kohli 2003).

Through the Health Information Technology for Economic and Clinical Health (HITECH) Act, the federal government has been taking steps to promote meaningful use (MU) of EHR to maximize the potential for quality improvement (Blumenthal 2010). The HITECH Act committed $29 billion over 10 years to incentivize hospitals and clinical professionals to achieve the MU objectives of EHR (Blumenthal 2011). Under this law, the Centers for Medicare & Medicaid Services (CMS) has been the executive agency for the incentive programs since 2011. Through these programs, eligible hospitals and professionals can receive incentive payments from Medicare, Medicaid, or both if they successfully demonstrate MU. In addition, hospitals and professionals will face financial penalties if they fail to meet the MU objectives by 2015; that is, they will not receive the full Medicare reimbursement from CMS. The programs designate multiple stages of MU, where each stage has an incremental scope of MU objectives and measures. With the implementation of these incentive programs, recent surveys show significant growth in EHR adoption and MU (Adler-Milstein et al. 2014). The ultimate goal of this national campaign, however, is to improve the quality of care for patients (Blumenthal and Tavenner 2010; Classen and Bates 2011). So far, there have been no empirical studies on the quality effect of MU. This study seeks to fill this gap in the literature by examining the relation between the MU of EHR technology and changes in hospital care quality.

One major challenge in identifying the effect of EHR or MU on quality of care is endogeneity. The decision to adopt an EHR system is often correlated with a hospital's characteristics, some of which are not observable to researchers. For example, small, non-teaching, and rural hospitals are slow to adopt EHR (DesRoches et al. 2012), and they are also likely to perform worse than their counterparts. Conversely, many better-performing healthcare institutions are also pioneers in EHR adoption. Examples include the Mayo Clinic, which introduced EHR as early as the 1970s, and Kaiser Permanente, which had invested about $4 billion in its EHR system before the EHR Incentive Programs (Snyder 2013). Such

endogeneity in observational data could lead to erroneous inference about the effect of adoption or MU on quality of care. We address this empirical challenge by exploiting some unique features of the Medicare EHR Incentive Program. Specifically, under the guidelines of this program, hospitals that demonstrate and maintain MU are able to receive annual incentive payments for up to four years, starting from the year they attest to meeting the MU criteria. 2 We strategically identify treatment and control hospitals to study whether meaningful use of EHR technology improves quality of care. Through our identification strategy and multiple empirical specifications and robustness tests, we find supporting evidence that MU significantly improves quality.

2. Related Literature

In this section, we review the existing literature that directly informs our analyses. We start with a review and synthesis of existing studies on the effects of Electronic Health Records, and highlight the gap that our study seeks to fill. Since our focus is the meaningful use of EHR, in the second subsection we discuss a related, albeit smaller, literature on MU. We also briefly review some representative work from the broader literature on IT adoption and value that informs our study.

2.1 Literature on the Effects of EHR

Given the potential of EHR to change the routines of healthcare delivery, reduce costs, and minimize errors, there has been a large and growing literature on this topic. Most directly related to our study are the empirical works. In this section, we systematically review published empirical studies on the effect of EHR and compare them with our study. We focus on studies published in and after 2010 to avoid significant overlap with prior review papers (Black et al. 2011; Chaudhry et al. 2006). For each study,

2 The attestation procedure involves filling out a formal form on the CMS EHR Incentive Programs Registration and Attestation website. Hospitals need to report the vendor and product model of their EHR system and enter their measures for each of the required MU criteria. For more information about the registration and attestation procedure, see Guidance/Legislation/EHRIncentivePrograms/downloads/HospAttestationUserGuide.pdf

we summarize the main data sources, data period, data units, main dependent and independent variables, identification strategy, type of analysis, and main findings. The results of our literature search and analysis are shown in Table 1. To facilitate comparison, we list this study in the last row of the table. As can be seen, our study is one of the first to examine the effect of MU on healthcare quality using a more objective set of measurements for MU, a unique and recent dataset, and empirical identification methods that leverage features of the Medicare EHR Incentive Program. We discuss the findings from our literature search and categorization in the remainder of this subsection.

Main data sources; dependent and independent variables. It is apparent in Table 1 that the Healthcare Information and Management Systems Society's Analytics Database (HADB) is a predominant data source for research on HIT or EHR. HADB contains information about the adoption of hundreds of HIT applications, including EHR, CPOE, CDSS, etc., in over four thousand U.S. hospitals. HADB-based studies typically define and identify a set of HIT applications that are pertinent to the research goals. Most of the main independent variables in Table 1 are derived from HADB. Many studies further distinguish stages or capabilities of HIT or EHR implementation based on the adoption records in HADB. For instance, in studying the effect of EHR adoption on quality of care, Jones et al. (2010) determine EHR capability using four HIT applications: clinical data repository, electronic patient record, CDSS, and CPOE. A hospital is said to have adopted advanced EHR if it adopts all four applications, basic EHR if at least one, and no EHR if none. Dranove et al. (2012) also distinguish basic and advanced EHR systems, but use a different set of applications as the criteria. Beyond the distinction between basic and advanced EHR, a number of studies try to mimic the HITECH MU criteria by mapping them to similar applications in HADB (Appari et al. 2013; Hah and Bharadwaj 2012). We address the issues with such mapping in Section 3.1, where we discuss our construction of the MU variable. In addition to the fields provided by HADB, it is often necessary to include other data sources to identify hospital characteristics and performance. These include the American Hospital Association (AHA) Annual Survey Database, CMS Case Reports (CMS-CR), and the CMS Hospital Compare (CMS-HC) database.

For dependent variables, the two most popular among past studies are hospital operational cost and process quality. The former is supplied by the CMS-CR data, while the latter is available from the CMS-HC database. While process quality of care and MU have appeared in a number of prior studies, the variables were typically constructed using data from CMS-HC and HADB, respectively. We, instead, use two new data sources to construct the main dependent and independent variables. For our main dependent variable, process quality of care, we obtain healthcare performance measures from the Joint Commission (JC). For our primary independent variable, MU, we use data from the Medicare EHR Incentive Program. Following prior studies, we also use a number of other data sources to construct control variables. Section 3 provides an in-depth discussion of the datasets and variables in our study.

Data period. It is noteworthy from Table 1 that all the studies, even the most recent ones, are based on data from 2010 or earlier. According to Jha et al. (2009), this was a time when both the rate and the degree of EHR use were low in U.S. hospitals. Specifically, in this period a comprehensive EHR system was used in only 1.5% of U.S. hospitals, and just an additional 7.6% had a basic system. Since the U.S. healthcare system has undergone dramatic policy changes since 2009, there is a significant practical and scientific need for new data and new empirical analyses. To understand the impact of the HITECH Act and the latest progress of MU among U.S. hospitals, we use data from around 2012 for our analyses.

Identification strategy and analysis. As discussed earlier, an important empirical challenge in assessing the impact of EHR is endogeneity; we therefore review how the prior literature addresses this concern. Column 7 of Table 1 summarizes the research designs used in each paper to address endogeneity concerns. We can see that these studies employ various econometric strategies such as fixed effects (Appari et al. 2013; Dranove et al. 2012; Miller and Tucker 2011; Furukawa et al. 2010; McCullough et al. 2010), difference-in-differences (McCullough et al. 2013; Jones et al. 2010), instrumental variables (Furukawa et al. 2010; Miller and Tucker 2011), and propensity adjustments (Dey et al. 2013; Appari et al. 2012; Jones et al. 2010). In addition, the majority of studies in Table 1 employed panel data analysis,

because "cross-sectional datasets will not capture the impact of IT adoption if early adopters differ from other hospitals along other quality-enhancing dimensions" (Miller and Tucker 2011, p. 292). Our analyses are based on a panel dataset. We exploit features of the Medicare EHR Incentive Program as an exogenous variation, and adopt a number of empirical strategies (difference-in-differences and first-difference) to verify and ensure that our findings are robust. We also use propensity score matching (Rosenbaum and Rubin 1983) to alleviate potential bias from treatment selection, i.e., whether or not a hospital demonstrates MU in 2012. Full details of our identification strategy are discussed in Section 4.

Main findings. The last column of Table 1 makes clear the inconclusive findings on the effect of EHR adoption in prior studies: 6 positive, 4 negative, and 4 mixed results. The 4 mixed results arise either because EHR had an effect on only a subset of measures or because the effects are significant only under certain conditions. For instance, McCullough et al. (2010) find that the use of EHR and CPOE significantly improved the use of vaccination and appropriate antibiotics in pneumonia patients, but for the same population EHR had no effect on increasing smoking-cessation advice or on taking blood cultures before antibiotics. Similarly, Furukawa (2011) finds that advanced EHR systems significantly improved the throughput of the emergency department, but basic EHR systems did not. Our results from various estimators consistently show that attaining MU significantly improves quality of care. Moreover, we also find that the magnitude of this effect varies by several hospital characteristics, such as hospital size, hospital ownership, geographical region, and the urban status of the hospital location. We find that hospitals traditionally deemed weaker in quality, e.g., small, non-teaching, or rural hospitals, attained larger quality improvements than their counterparts.
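The propensity adjustment mentioned above can be illustrated with a minimal nearest-neighbor matching sketch. The hospital IDs and scores below are invented; in practice, the propensity scores would come from an estimated model (e.g., a logit of MU attainment on observable hospital characteristics).

```python
# Minimal nearest-neighbor propensity-score matching sketch: each treated
# hospital is paired with the control hospital whose propensity score is
# closest. All identifiers and score values are hypothetical.

def nearest_neighbor_match(treated, controls):
    """treated, controls: lists of (hospital_id, propensity_score) pairs."""
    matches = {}
    for tid, tscore in treated:
        # pick the control with the smallest absolute score distance
        best = min(controls, key=lambda c: abs(c[1] - tscore))
        matches[tid] = best[0]
    return matches

treated = [("H1", 0.62), ("H2", 0.35)]
controls = [("H3", 0.60), ("H4", 0.30), ("H5", 0.90)]
print(nearest_neighbor_match(treated, controls))  # {'H1': 'H3', 'H2': 'H4'}
```

After matching, treated and matched-control hospitals are compared on observables to check balance before estimating treatment effects.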
2.2 Literature on Meaningful Use

Compared to adoption, meaningful use of EHR is a much more challenging goal for hospitals and healthcare providers (Classen and Bates 2011). Some prior studies in Table 1 have examined MU, but used less formal measurements to identify it. For example, Appari et al. (2013) examine how MU

impacts process quality by mapping the HITECH MU criteria to HADB. The authors define five levels of EHR capabilities, in which the top two levels satisfy the functionality requirements in the MU criteria. The authors note that (Appari et al. 2013, p. 358): "While complete satisfaction of 2011 MU objectives requires fulfilling clinical and administrative activities using EHR systems, here we measure only whether a hospital system has the functional capabilities to meet the objectives as we have no data on whether they actually accomplished the activities." Several other studies also use MU functionalities to define MU (e.g., McCullough et al. 2013; Hah and Bharadwaj 2012). A recent systematic review by Jones et al. (2014) focuses on the effects of MU on three outcomes: quality, safety, and efficiency. The review includes a total of 236 studies published since January 2010, and also uses MU functionalities as a taxonomy to characterize the literature. Jones et al. (2014) conclude that most of the studies focused on evaluating CDSS and CPOE, and rarely addressed the other MU functionalities. By contrast, as we will discuss in Section 3.1, our paper is one of the first to use a systematic and government-mandated public health program to identify MU.

Finally, and more broadly, our study also draws on and contributes to the long-standing literature on the consequences of technology adoption and IT business value, of which healthcare IT is but one example (Davis et al. 1989; Ajzen 1991; Attewell 1992; Brynjolfsson and Hitt 1996; Wejnert 2002; Venkatesh et al. 2003; Tambe and Hitt 2012). Whereas the dominant variable of interest in this literature is the adoption of technologies, we focus on the meaningful use of a technology and investigate how it affects an important outcome.

3. Data

We integrate data from multiple sources. Consistent with prior research (Appari et al. 2013), we use the Medicare provider number as a common identifier to link all hospital-level information. Table 2

summarizes the variables and their data sources, which we discuss in turn. Also consistent with prior studies in the related literature (see Section 2; e.g., Appari et al. 2013; McCullough et al. 2013; Furukawa et al. 2010; Jones et al. 2012), we investigate non-federal acute care hospitals in the 50 states and the District of Columbia.

3.1 Meaningful Use

An important difference between our study and those reviewed in Table 1 is how we construct our main independent variable, MU, and its data period. We use data directly from the Medicare EHR Incentive Program. This dataset is new and unique and, to the best of our knowledge, has not been used in any prior empirical study. The latest data, released in May 2014, cover the MU attestation records of U.S. hospitals as of early 2014. This dataset reflects the most recent development of EHR adoption and meaningful use in the United States. The CMS EHR Incentive Programs website provides data about the programs and the recipients of the incentive. 3 The recipient data reveal in which year a hospital demonstrated that it had met the MU criteria. We look only at the records from the Medicare EHR Incentive Program, not the Medicaid program, for two reasons. The first reason is data availability. As of the time we conducted this study, hospital-level information from the Medicaid EHR Incentive Program had not been released. This is presumably because the Medicaid program is locally run by each state agency, which creates difficulties in aggregating detailed information from multiple sources. In contrast, the Medicare Incentive Program is run solely by CMS, so the information is centralized and more accessible. The second reason we use only the Medicare data is its representativeness. The latest statistics from CMS show that 96.7 percent of hospitals that successfully attested MU before January 2014 received incentive payments from Medicare, and 94.1 percent of these hospitals also received payments from Medicaid. Since most hospitals register and obtain incentive payments from both programs, the hospitals in the Medicare

program should mostly overlap with those in the Medicaid program. We therefore focus on the data from the Medicare program to study the quality effect of MU. 4

Using data directly from the Medicare EHR Incentive Program allows us to mitigate three important shortcomings of prior works that rely on HADB or the AHA Healthcare IT Database. First, instead of mapping the MU criteria to the records of HIT applications in HADB, our approach is much more direct and objective. There is no information loss or measurement error from indirectly representing MU using secondary data sources. Second, existing HIT survey databases rely on self-reported data. To our knowledge, there is no data-auditing process to verify the correctness of those data. In contrast, there are pre- and post-payment audits in the Medicare EHR Incentive Program to ensure the accuracy of the attainment of MU objectives, and hence the integrity of the data. Third and finally, the MU criteria comprise not only what EHR functionalities a hospital possesses but also how they are used. The simple presence of HIT in a hospital does not directly imply meaningful use of the technology (McCullough et al. 2013). As such, instead of merely requiring that a hospital have an EHR functionality to record a patient's problem list, the guidelines of the Medicare EHR Incentive Program require the following criterion, among others, to be met before the hospital can be considered a meaningful user: "More than 80% of all unique patients admitted to the eligible hospital or critical access hospital have at least one entry or an indication that no problems are known for the patient recorded as structured data." (Emphasis added) 5 Demonstration of the actual use of EHR and HIT capabilities is critical in understanding and explaining the impact of HIT or EHR (Devaraj and Kohli 2003; Kane and Alavi 2008).
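To make the flavor of such usage-based criteria concrete, the problem-list measure quoted above can be sketched as a simple computation over admission records. This is an illustrative sketch only; the field names and records are invented, not taken from the CMS specification.

```python
# Illustrative sketch of the structured problem-list measure: the share of
# unique admitted patients with at least one problem-list entry (or an
# explicit "no known problems" indication) recorded as structured data.
# All field names and data are hypothetical.

def problem_list_share(admissions):
    """admissions: list of dicts with 'patient_id' and a boolean
    'has_structured_entry'; returns a percentage over unique patients."""
    by_patient = {}
    for rec in admissions:
        pid = rec["patient_id"]
        by_patient[pid] = by_patient.get(pid, False) or rec["has_structured_entry"]
    if not by_patient:
        return 0.0
    return 100.0 * sum(by_patient.values()) / len(by_patient)

admissions = [
    {"patient_id": "A", "has_structured_entry": True},
    {"patient_id": "A", "has_structured_entry": False},  # same patient, counted once
    {"patient_id": "B", "has_structured_entry": True},
    {"patient_id": "C", "has_structured_entry": True},
    {"patient_id": "D", "has_structured_entry": False},
]
share = problem_list_share(admissions)  # 3 of 4 unique patients -> 75.0
print(share > 80.0)  # False: the 80% criterion is not met at 75%
```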
However, proof or demonstration of use is typically not recorded in existing HIT survey databases, and hence imposes a

4 Although the Medicare patient population is older than the general patient population, this has no bearing on our study, because we examine Medicare-certified providers rather than Medicare patients. Given that Medicare is the largest payer in the US, almost all hospitals, and especially the acute care hospitals we study, accept Medicare patients.

5 Guidance/Legislation/EHRIncentivePrograms/Downloads/MU_Stage1_ReqOverview.pdf

critical research limitation in HIT evaluation. Since system usage is an integral part of determining whether MU is met in the Medicare EHR Incentive Program, our MU variable naturally captures this missing dimension. These three important shortcomings were often neglected in prior studies, and can potentially explain the mixed findings on the effect of EHR systems in the literature (Agarwal et al. 2010; Kohli and Devaraj 2003). The newly released data from the Medicare EHR Incentive Program allow us to circumvent these issues and provide new empirical evidence on the effect of MU on quality of care.

3.2 Quality of Care

There are several sources that provide hospital quality data, including CMS-HC, JC, the National Ambulatory Medical Care Survey (NAMCS), and the National Hospital Ambulatory Medical Care Survey (NHAMCS). The first two put more emphasis on inpatient settings, whereas the last two emphasize outpatient settings (Ma and Stafford 2005). Nonetheless, all these data sources emphasize evidence-based care processes (Chassin et al. 2010) when deriving quality measures for hospitals. In other words, quality of care is considered high only if a hospital follows the processes and interventions that clinical evidence suggests will lead to improved outcomes. It is noteworthy that the relationships among different quality metrics (e.g., process quality, patient satisfaction, 30-day readmission rate, and in-hospital mortality) are weak or inconsistent (Shwartz et al. 2011; Jha et al. 2007). For instance, in a prospective cohort study with a nationally representative sample (N=51,946; panel data), Fenton et al. (2012) find that higher patient satisfaction was, surprisingly, associated with greater total expenditures and a higher mortality rate (both significant at the 0.05 level). Although process quality does not necessarily reduce 30-day mortality or readmission, it has been a primary quality metric in prior studies (see Table 1) because it is actionable, targets long-term benefits, and requires less risk-adjustment (Rubin et al. 2001). By the same token, Chatterjee and Joynt (2014) argue that: Although process measures remain minimally correlated with outcomes and may represent clinical concepts that are somewhat inaccessible to patients, they do have

independent value as a marker of a hospital's ability to provide widely accepted, guideline-based clinical care.

We obtain hospital quality measures from the Joint Commission, formerly known as the Joint Commission on Accreditation of Healthcare Organizations. The Joint Commission is a not-for-profit organization that aims to promote care quality and safety. It is critical for a hospital to be accredited by the Joint Commission in order to obtain a service license and to qualify as a Medicare-certified provider (Brennan 1998). The Joint Commission has long been developing metrics for quality measurement and improvement. There are currently 10 core measure sets, categorized by conditions such as heart attack, heart failure, pneumonia, and surgical care infection prevention, among others. In each core measure set, there are a number of measures specific to the corresponding medical condition. Examples of the quality measures are as follows:

Percentage of acute myocardial infarction patients with beta-blocker prescribed at discharge
Percentage of heart failure patients with discharge instructions
Percentage of adult smoking cessation advice/counseling

These metrics are largely aligned with the process quality measures in the CMS-HC dataset that are more commonly used in prior research (see Table 1). We find that the Joint Commission quality measures are more comprehensive than the process measures in CMS-HC, since many quality measures are tracked by the former but not by the latter. In addition to their comprehensiveness, quality measures from the Joint Commission are updated quarterly, whereas those from CMS-HC are updated only annually. We therefore use the quality measures from the Joint Commission. 7

Since the JC quality metric has multiple core measure sets, and each core measure set can contain multiple specific measures, we derive a composite quality score to represent the overall process quality of

7 In fact, our identification strategy would not have been possible without quarterly quality data. As of the time of writing, the latest process-of-care quality metric and patient experience metric in the CMS-HC dataset are available for April 2012 to March. Similarly, the outcome-of-care quality metrics in CMS-HC, i.e., 30-day readmission/mortality rates, are available from July 2010 to June. As such, none of these quality metrics is recent enough to permit a clean empirical identification. See Section 4.2 for the identification strategy in this study.

a hospital. The interpretation of the composite score is intuitive, useful, and quantitative: to what degree (in percentage) does a hospital follow guideline recommendations. Consistent with prior studies (Appari et al. 2013; Chen et al. 2010), the composite quality score is derived as an average of all specific measures, weighted by the number of eligible patients in each measurement. In constructing the quality score, we exclude measures that have fewer than five eligible samples in a hospital in order to ensure data reliability. 8

3.3 Control Variables

Prior studies on the effect of EHR typically include a set of control variables to capture the heterogeneity among hospitals (Angst et al. 2010; Appari et al. 2013; Devaraj and Kohli 2003). The HADB contains information about the year each hospital was formed, which allows us to calculate hospital age. For hospitals whose age information is missing, we manually performed search engine queries and successfully identified about 50 of them by looking up the About Us or similar pages on hospital websites. A second important control is hospital size, which has been shown to be positively correlated with both EHR adoption and quality of care (DesRoches et al. 2012). We operationalize hospital size using the total number of beds in the CMS-CR data. We also use the CMS-CR data to capture various hospital throughput measures: annual (Medicare) discharges and inpatient days (Miller and Tucker 2011). To allow more intuitive interpretation of the effects of these throughput measures, we rescale the original values to units of a thousand before entering them in our models. To differentiate the general health condition of the served patient population, we use the transfer-adjusted case mix index (TACMI) from the CMS Inpatient Prospective Payment System (IPPS).
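As a concrete illustration of the composite quality score described in Section 3.2, the patient-weighted averaging and the five-patient exclusion rule can be sketched as follows. The measure values below are invented for illustration.

```python
# Sketch of the composite quality score: an average of condition-specific
# compliance rates, weighted by the number of eligible patients, excluding
# measures with fewer than five eligible patients. Values are hypothetical.

def composite_quality(measures, min_eligible=5):
    """measures: list of (compliance_rate_pct, n_eligible_patients)."""
    kept = [(rate, n) for rate, n in measures if n >= min_eligible]
    total_patients = sum(n for _, n in kept)
    if total_patients == 0:
        return None
    return sum(rate * n for rate, n in kept) / total_patients

hospital_measures = [
    (95.0, 120),  # e.g., beta-blocker at discharge (AMI measure set)
    (88.0, 60),   # e.g., discharge instructions (heart failure set)
    (50.0, 3),    # excluded: fewer than five eligible patients
]
score = composite_quality(hospital_measures)
print(round(score, 2))  # 92.67
```

The score reads directly as the percentage of eligible patient-measure instances that followed guideline recommendations.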
The CMS-HC dataset provides information regarding the ownership of a hospital, which can be broadly categorized as government, non-profit, or proprietary. We also control for the teaching status of hospitals. Specifically, teaching status is a dichotomous variable determined by whether the hospital is a member of the Association of American Medical

8 Our results are qualitatively similar without imposing this cutoff threshold.

Colleges. We further identify whether or not a hospital is located in a rural area by mapping its zip code to the Rural-Urban Commuting Area (RUCA) version 2.0 taxonomy (Angst et al. 2012; Hall et al. 2006). Finally, we control for the region of a hospital by matching its zip code to one of the four census regions: Midwest, Northeast, South, or West.

4. Empirical Strategy

As discussed earlier, a key challenge in identifying the quality impact of EHR adoption or MU is endogeneity. This section describes how we address endogeneity concerns to approximate a randomized experiment from observational data, in order to learn about causal relationships (Angrist and Krueger 1999). We begin with a brief overview of the Medicare EHR Incentive Program, followed by our identification strategy, which is motivated by some unique features of this program.

4.1 Medicare EHR Incentive Program for the Eligible Hospitals

Under the auspices of the HITECH legislation, the goal of the CMS EHR Incentive Programs is to promote meaningful use of EHR through financial incentives. To receive the incentive payment, a hospital must achieve 14 core objectives as well as 5 out of 10 menu objectives (Table 3), each accompanied by a very specific measure (Blumenthal and Tavenner 2010). Since the inception of the programs in 2011, thousands of hospitals have achieved the MU objectives. Full details of the programs are available through the programs' website (Footnote 3), but here we highlight the incentive features that help identify the effect of meaningful use on quality of care.

Hospitals must demonstrate MU by 2015 at the latest, or they will be financially penalized. If they demonstrate MU before 2015, they will receive annual incentive payments from the Medicare EHR Incentive Program. The amount of these annual payments is determined by multiplying the following three factors: the initial amount, the Medicare share, and the transfer factor. The initial amount is at least $2 million, and more if the hospital discharges a specified number of patients. The Medicare share is, roughly, the fraction of Medicare inpatient-bed-days among total inpatient-bed-days in the hospital's fiscal year. Most

important, the transfer factor varies by payment year and by the time at which the hospital demonstrates MU (Table 4). A hospital can receive these annual incentive payments for up to four years, with the amount decreasing each year according to the transfer factor. If a hospital demonstrates MU in 2013 or earlier, it can receive 4 years of payments, with the transfer factor decreasing each year from 1 to 0.75, 0.5, and 0.25. If a hospital demonstrates MU in 2014, it can receive only 3 years of payments, with the transfer factor decreasing each year from 0.75 to 0.5 and 0.25. Starting in 2015, hospitals that are not meaningfully using EHR technology will be penalized by a mandated Medicare payment adjustment, in which they will not receive the full amount of Medicare reimbursements. The degree of payment adjustment will double in 2016 and triple in 2017. The transfer factor and the cumulative penalties therefore incentivize hospitals that have not met the MU criteria to adopt and meaningfully use EHR sooner rather than later. 9

4.2 Identification Strategy

Although the financial incentive is the same for an eligible hospital that enters the program and attests MU at any point between 2011 and 2013 (i.e., 4 years of payments), we assume that hospitals would proceed to attest achieving MU and obtain the incentive payments once they have met the criteria. This assumption is consistent with a basic premise in the accounting and finance literature: income or earnings in the present is generally preferable to earnings in the future (Feltham and Ohlson 1999; Ohlson 1995), especially given the low cost of the attestation procedure. It is also a realistic assumption given the financial burden on hospitals of acquiring and implementing an EHR system and the financial subsidies available for achieving MU. Therefore, for the hospitals that began to attest MU in 2012, we assume that they did not meet the MU criteria in 2011 or earlier, but did achieve MU in 2012 and later.
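The payment schedule just described can be sketched to show why attesting earlier weakly dominates: a hospital attesting by 2013 receives four declining annual payments, while one attesting in 2014 receives only three. The declining factor schedule follows Table 4 as described in the text; the final-year factor (0.25), the initial amount, and the Medicare share used below are illustrative assumptions.

```python
# Sketch of the annual incentive payment: initial amount x Medicare share
# x transfer factor, with the factor declining over the payment years.
# The schedule and inputs are assumptions for illustration; hospitals
# attesting after 2014 face penalties instead (omitted here).

TRANSFER_FACTORS = {
    "attest_2013_or_earlier": [1.0, 0.75, 0.5, 0.25],  # 4 payment years
    "attest_2014": [0.75, 0.5, 0.25],                  # 3 payment years
}

def annual_payments(initial_amount, medicare_share, attest_year):
    key = "attest_2013_or_earlier" if attest_year <= 2013 else "attest_2014"
    return [initial_amount * medicare_share * f for f in TRANSFER_FACTORS[key]]

early = annual_payments(2_000_000, 0.5, attest_year=2012)
late = annual_payments(2_000_000, 0.5, attest_year=2014)
print(sum(early), sum(late))  # 2500000.0 1500000.0 -- earlier attestation pays more
```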
9 Payments from the Medicare EHR Incentive Program represent a nontrivial amount of incoming cash flow for the hospitals. From our data, we see that in the first three years of the program, the median annual payment to hospitals is $1.4 million (with the highest being $7.2 million). To put this number in context: one source estimates that the average profit per hospital in 2011 is around $10.7 million. Alternatively, data provided by HADB show that in 2011, the median difference between revenue and operating cost is slightly below $1 million. In either case, the payment incentive from the Medicare EHR Incentive Program is substantial.
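The payment mechanics described in Section 4.1 can be sketched as, roughly, the product of the three components above. Only the transfer-factor schedule and the $2 million floor come from the program rules summarized in the text; the function name and the example base amount and Medicare share are hypothetical, purely for illustration:

```python
# Hedged sketch of the incentive-payment schedule described in the text.
# Only the transfer factors (1, 0.75, 0.5, 0.25) and the number of payment
# years come from the program rules as summarized above; the base amount
# and Medicare share in the example are invented.

def incentive_payments(first_mu_year, base_amount, medicare_share):
    """Annual Medicare EHR incentive payments for an eligible hospital.

    base_amount    -- initial amount plus any discharge-related add-on
                      (at least $2 million per the text)
    medicare_share -- Medicare inpatient-bed-days / total inpatient-bed-days
    """
    if first_mu_year <= 2013:
        factors = [1.0, 0.75, 0.5, 0.25]   # 4 payment years
    else:
        factors = [0.75, 0.5, 0.25]        # MU first demonstrated in 2014
    return [base_amount * medicare_share * f for f in factors]

# A hypothetical hospital with a $2M base and a 40% Medicare share that
# attests in 2012 receives four declining annual payments:
payments = incentive_payments(2012, 2_000_000, 0.40)
```

Under these assumed inputs the schedule declines from $800,000 in the first payment year to $200,000 in the fourth, illustrating why earlier attestation is financially preferable.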

For the purposes of identification and estimation, we consider MU as a dichotomous status: whether or not a hospital reaches the MU regulation criteria. To identify the quality effect of MU, we obtain longitudinal MU attainment records from the Medicare EHR Incentive Program, which provides data from 2011 to early 2014 (Section 3.1). We construct a panel dataset from this and other data sources, and employ a difference-in-differences (DID) identification strategy to tease out the quality effect of attaining MU. We define our treatment group as the hospitals that attained MU in 2012, but not before. A key and novel component of our identification strategy is that we consider two control groups: hospitals that attained MU in 2011, at the onset of the EHR Incentive Program (henceforth denoted AlwaysMU), and hospitals that had not yet achieved MU by the end of 2012 (henceforth denoted NeverMU). The AlwaysMU control group comprises hospitals that had reached MU prior to the implementation of the incentive program; the incentive program therefore had little or no impact on their MU status. Using these hospitals as a comparison group allows us to estimate the effect of MU on quality of care for hospitals that sped up their process of reaching MU because of the incentive program. 10 On the other hand, the NeverMU control group includes hospitals that had not yet reached MU as of the end of 2012. Since these hospitals are likely to be in the process of speeding up their progress toward MU, using them as an alternative control group provides a more conservative (less optimistic) estimate of the effect on quality of care.
In other words, although hospitals' decisions on expending resources to reach MU may be endogenous, these two distinct but complementary control groups allow us to obtain robust upper and lower bounds for the unbiased MU effect: for the 10 One may argue that hospitals in the AlwaysMU group may also have responded to the legislation and sped up their progress toward MU. While plausible, this is unlikely to be a first-order issue, given the limited time between the legislation and the period we study, and the length of time hospitals need to implement EHR and attain MU. HITECH was passed by Congress in 2009, but the detailed mandates in the Incentive Program were not announced until August 2010. If a hospital did not have EHR at that time, it would have taken about two years to implement it (Miller and Tucker 2011). If the hospital already had EHR, it would have taken about another 3 years (median) to move from adoption to MU. These numbers were obtained by following the approach in Appari et al. (2013): for each hospital in the treatment group, we use HADB to identify the time difference between implementation of all MU functionalities and attestation of MU. While this calculation is only an approximation, these numbers suggest that hospitals in AlwaysMU can reasonably be expected to have reached MU prior to the announcement of the incentive program. Further support for this argument can be seen in Figure 1 later in the paper: only about 18% of the hospitals demonstrated MU in 2011.

unobservable or omitted variables that may confound our MU estimate, their mean effect is likely to be monotonic in the timing of EHR adoption and MU. Using two control groups therefore provides upper and lower bounds on the estimate. 11 More specifically, for each hospital in the treatment and control groups we construct a two-period panel, with a pre-treatment quality score taken from the fourth quarter of 2011 and a post-treatment quality score taken from the first quarter of 2013. With this panel setup, the average treatment effect of attaining MU is the difference between the pre-post, within-subject differences of the treatment and control groups. We estimate the following model:

Quality_it = β0 + β1 TreatmentGroup_i + β2 PostPeriod_t + β3 (TreatmentGroup_i × PostPeriod_t) + c_i + X_it δ + u_it,  (1)

Subscripts i (= 1, ..., N) and t (= 1 or 2) index individual hospitals and time periods, respectively. Quality_it represents the quality score of hospital i at time t. TreatmentGroup_i and PostPeriod_t are indicators for the treatment group and the post-period, respectively: TreatmentGroup_i is 1 if hospital i is in the treatment group and 0 otherwise; PostPeriod_t is 1 if t = 2 (the post-treatment period) and 0 if t = 1 (the pre-treatment period). The parameter c_i absorbs hospital-level, time-invariant unobserved effects. X_it is a vector of the control variables introduced in Section 3.3. Finally, u_it are the idiosyncratic errors, which vary across i and t. Model (1) can be estimated by either the fixed effects estimator or the random effects estimator (Wooldridge 2002). While we conduct and report both, we note that in a two-period panel, a simple yet effective way to estimate fixed effects models is through a first-differencing transformation:

ΔQuality_it = α0 + α1 (TreatmentGroup_i × PostPeriod_t) + ΔX_it ϕ + Δu_it,  (2)

where ΔQuality_it = Quality_i2 − Quality_i1, ΔX_it = X_i2 − X_i1, and Δu_it = u_i2 − u_i1.
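As a sketch of how models of this form can be estimated, the following simulated example (all variable names and data-generating values are hypothetical, not the paper's data) also illustrates that, in a balanced two-period panel without controls, the DID interaction estimate and the first-difference estimate coincide:

```python
# Simulated two-period panel illustrating a DID regression and the
# equivalent first-difference (FD) regression. Purely synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500                                    # hospitals
treat = (rng.random(n) < 0.5).astype(int)  # treatment-group indicator
c_i = rng.normal(0, 1, n)                  # hospital fixed effect
true_effect = 0.4                          # assumed treatment effect

rows = []
for t in (0, 1):                           # t=0 pre, t=1 post
    y = 90 + c_i + 0.3 * t + true_effect * treat * t + rng.normal(0, 1, n)
    rows.append(pd.DataFrame({"hospital": np.arange(n), "post": t,
                              "treat": treat, "quality": y}))
panel = pd.concat(rows)

# DID: pooled regression with the treat x post interaction
did = smf.ols("quality ~ treat * post", data=panel).fit()

# FD: regress the within-hospital quality change on the treatment dummy;
# the hospital fixed effects c_i cancel out of the difference
wide = panel.pivot(index="hospital", columns="post", values="quality")
fd = pd.DataFrame({"dq": wide[1] - wide[0], "treat": treat})
fd_fit = smf.ols("dq ~ treat", data=fd).fit()
```

In this balanced two-period design the two estimates are numerically identical, which is why the paper can report the FD model as an easier-to-implement stand-in for the fixed effects estimator.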
Since the hospital-level fixed effects, i.e., c_i, are assumed to be time invariant, they cancel out after first differencing. The first-difference 11 We also considered a traditional instrumental variable approach, in which the instrument for MU status is the MU saturation rate (the percentage of hospitals that had reached MU status prior to a focal hospital, within a 25-mile radius of that hospital), since hospitals are more likely to reach MU status due to competitive forces. We obtained qualitatively similar results.

(FD) model yields estimates identical to those of the fixed effects model, but is easier to implement. Therefore, we proceed with our analyses using the random-effects DID model and the FD model. With these two empirical models, the main quantity of interest is the estimate on TreatmentGroup_i × PostPeriod_t. A positive and significant estimate will support the hypothesis that meaningful use of EHR improves hospitals' quality of care.

4.3 Validating the DID Identification Assumption

A critical assumption in the DID identification strategy is that the dependent variable follows parallel historical trends in the treatment and control groups (Bertrand et al. 2004). That is, absent the treatment (in our case, the implementation of the incentive program), the treatment and control groups should exhibit similar trends in the outcome variable over time. This assumption is not trivial, since the three hospital groups may have distinct characteristics. Because the historical JC hospital quality data are not publicly available, we use the quality measures from the CMS-HC dataset as a proxy. As mentioned in Section 3.2, the quality measures in JC and CMS-HC are largely aligned, but the former are updated quarterly, whereas the latter are updated annually. Figure 1(a) shows the annual quality trends of the treatment and control groups from 2010 to 2012. While there are no signs that the DID assumption is violated for the NeverMU control group, the historical quality trends of the treatment group and the AlwaysMU control group are not perfectly parallel (albeit only slightly). We address this issue with propensity score matching (Rosenbaum and Rubin 1983; Xue et al. 2011; Brynjolfsson et al. 2011; Mithas and Krishnan 2009). Propensity score matching pairs observations in the treatment and control groups according to their propensity to receive the treatment, as a function of their observable traits.
In our context, for each hospital in the treatment group, propensity score matching identifies the most comparable hospital in the control group. This helps exclude hospitals that are vastly different from the treatment group in their unobserved qualities. For the matching process, in addition to a broad set of hospital characteristics, we also include quality changes from 2010 to 2011 and from 2011 to 2012 when calculating propensity scores. Figure 1(b) shows that after

matching, the three hospital groups present parallel historical quality trends. We then conduct the DID analysis between the treatment group and the matched subset of control-group hospitals. We report results both without and with the matching procedure.

5. Results

The data from the Medicare EHR Incentive Program show a significant uptake of EHR and MU among acute care hospitals from 2011 to 2013 (Figure 2). In 2011, the average rate of MU attainment across states was 18%. The percentage increased to 54% in 2012 and 86% in 2013. As of November 2014, 4,283 hospitals had achieved Stage 1 MU. These statistics alleviate the concern that only good hospitals participate in the program and, at the same time, suggest that the incentives are substantial enough that EHR adoption and MU in U.S. acute care hospitals accelerated from 18% to 86% in just two years. Table 5 shows key summary statistics of our dataset. There are 2,344 hospitals in our dataset, of which 914 belong to the treatment group, 483 to the AlwaysMU control group, and 947 to the NeverMU control group. 12 Table 5 uncovers some other interesting patterns across these groups. Compared to hospitals in the AlwaysMU control group, hospitals in the treatment group had a significantly lower case mix and were more likely to be located in rural areas. Compared to those in the NeverMU control group, treatment hospitals had significantly higher throughput, measured by both the number of inpatient days and the number of discharges. These differences could simultaneously correlate with the quality of care and treatment assignment (i.e., attaining MU in 2012). Therefore, as a robustness check we employ propensity score matching to identify a subset of hospitals within each control group that are as similar as empirically possible to those in the treatment group. Additional robustness checks include quantile analyses and censored regression analyses.
To reveal greater policy and managerial insights, we a) construct a continuous MU variable and investigate the relation between the 12 Based on the CMS-HC dataset, there were 4,860 hospitals in the US by the end of Among them, 3,459 were acute care hospitals. We excluded federal, tribal and physician-owned hospitals (n=231) and hospitals located outside the 50 states and DC, e.g., Guam, the Virgin Islands, etc. (n=55). Finally, hospitals with missing values in any of the model variables were deleted listwise.

degree of MU and the degree of quality improvement, and b) conduct a stratification analysis to reveal the potentially heterogeneous effect of MU among different types of hospitals.

5.1 Main Results

Figure 3 presents the mean quality changes among the three hospital groups from the pre-treatment period to the post-treatment period. Compared to either of the two control groups, the treatment group exhibits significantly greater quality improvement from the pre-treatment period to the post-treatment period. To further examine the effect, Table 6 summarizes our estimations across eight model setups, varying the choice of estimator, the choice of control group, and whether control variables are included. These results consistently show that meaningful use of EHR has a significant and positive effect on quality of care. We note that the random-effects DID estimator and the FD estimator yield highly consistent estimates of the quality effect of MU, which ranges roughly between 0.32 and 0.47 across models. The incremental gain is consistent with findings in Appari et al. (2013). To better understand the size of this effect, we illustrate it in the context of an important indicator of care quality: hospital readmission. Readmission is an important problem in healthcare because it signifies poor quality of care and generates very high costs (Jencks et al. 2009; Bardhan et al. 2011). The CMS-HC dataset shows an average 30-day hospital-wide readmission rate of 16% (approximately one in six) in acute care hospitals at the end of Our data show that a 0.4 quality improvement roughly translates to a 0.14% reduction in readmission rate. 13 With over 20 million annual inpatient discharges in U.S. hospitals and an estimated cost of $7,400 per readmission (Friedman and Basu 2004), the 0.14% 13 Prior studies show that the relation between process quality and readmission rate is insignificant (Shwartz et al. 2011).
One potential explanation is that hospitals with high process quality are more likely to attract complex patient cases, which in turn incur a higher readmission rate. We use TACMI, an index describing the complexity of a hospital's overall patient base, to address this issue. Our derivation is as follows. We categorize hospitals by whether their TACMI values are above or below the median TACMI value. The mean quality scores for the high- and low-TACMI groups are 98.08% and 96.94%, respectively, in the pre-treatment period. In the same period, the high-TACMI group had a mean 30-day hospital-wide readmission rate of 15.79%, and the low-TACMI group 16.20%. Based on this quality-readmission relationship, the 0.4% quality improvement from MU translates to a 0.14% reduction in readmission rate [0.4 × (16.20 − 15.79) / (98.08 − 96.94) ≈ 0.14].
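This back-of-the-envelope translation uses only numbers quoted in the text; as an arithmetic check:

```python
# Reproducing the footnote's translation of the estimated quality gain into
# a readmission-rate reduction. All inputs are taken from the text.
quality_high_tacmi = 98.08   # mean quality score, high-TACMI hospitals (%)
quality_low_tacmi = 96.94    # mean quality score, low-TACMI hospitals (%)
readm_high_tacmi = 15.79     # 30-day readmission rate, high TACMI (%)
readm_low_tacmi = 16.20      # 30-day readmission rate, low TACMI (%)
mu_quality_gain = 0.4        # estimated quality effect of MU

# readmission-rate change per unit of quality, from the TACMI comparison
readm_per_quality = (readm_low_tacmi - readm_high_tacmi) / \
                    (quality_high_tacmi - quality_low_tacmi)
readm_reduction = mu_quality_gain * readm_per_quality  # ~0.14 pct. points

# scaling up: cases avoided and cost savings per year
discharges = 20_000_000      # annual U.S. inpatient discharges (from text)
cost_per_readmission = 7_400 # Friedman and Basu (2004)
cases_avoided = discharges * readm_reduction / 100
savings = cases_avoided * cost_per_readmission
```

The text's "up to 28,000 cases" and "up to $207.2 million" figures follow from rounding the reduction to 0.14% before scaling.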

reduction in readmission rate represents up to 28,000 fewer readmission cases and up to $207.2 million in cost savings per year. While the magnitude of this effect may not seem striking compared to overall healthcare expenditure, it is not trivial. More importantly, it indicates that the effect of MU on healthcare quality is in the right direction, even for the first stage of MU and over the short period for which we have data.

5.2 Propensity Score Matching

We conduct propensity score matching to address the issues that there are significant differences in observable characteristics across the hospital groups and that the historical quality trends of the three groups are not perfectly parallel. We use the Matching package in R, which optimizes covariate balance through a genetic search algorithm (Sekhon 2011). Each treatment hospital is matched with three hospitals in the AlwaysMU (and subsequently the NeverMU) control group. We then apply our empirical models to the matched data to examine the robustness of the prior findings and to derive a more conservative estimate of the impact of MU against these two control groups. Table 7 shows the results from the matched samples. Across models, the estimates on TreatmentGroup_i × PostPeriod_t remain positive and significant, confirming the robustness of the MU effect in our main results.

5.3 Other Robustness Checks

Quantile Analysis

Prior research has shown that the ceiling effect of healthcare quality can be an important issue in health IT research because it affects the interpretation of the effect of health IT (Jones et al. 2010). As pointed out by Jones et al. (2010), a one-unit improvement in quality score from 95% to 96% is considerably more difficult than a one-unit improvement at a lower level, say from 70% to 71%.
Quantile regression estimates the treatment effect at different quantiles, in particular at the median, thereby mitigating the impact of the ceiling effect (Koenker and Bassett 1978). It can hence be considered a robustness check on our earlier main results. Table 8 presents the results from the quantile analysis using the

DID specification. When using AlwaysMU as the control group, the effect is significant at the median. When using NeverMU as the control group, the effect is significant at the median and the lower quartile. Table 9 further presents the results using the FD specification. Note that due to the first-differencing transformation, the lower quantiles show a greater MU effect than the upper quantiles. Across all these specifications, however, we see that MU has a positive and statistically significant impact on quality of care at the median.

Censored Regression Analysis

Another empirical issue is data censoring: our quality metric is strictly bounded between 0 and 100. A number of hospitals attain the maximum quality score. Specifically, in the pre-treatment period, 37 treatment hospitals, 22 AlwaysMU hospitals, and 35 NeverMU hospitals have the maximum quality score (100). In the post-treatment period, the numbers are 60, 47, and 48, respectively. These top-censored observations may bias our OLS estimations. To address this, we use a Tobit model to estimate the effect of MU under the DID specification (Tobin 1958). Table 10 shows that the effect of MU remains positive and significant, with coefficients similar to the main results. This suggests that the censored observations do not have a strong impact on our prior estimations.

5.4 Continuous MU Variable

All the analyses so far treat MU as a dichotomous status: a hospital either obtained MU or it did not. Although this is true from the perspective of the Medicare EHR Incentive Program, and indeed provides a useful metric and a specific goal, it is natural to ask whether a greater degree of MU leads to a greater degree of quality improvement, provided that the hospital has met the minimum requirements specified by the MU regulation. To answer this question, we look into the treatment group and the AlwaysMU control group.
These hospitals had achieved MU, and the data from the Medicare EHR Incentive Program contain information about their performance on the core and menu MU objectives (see Section 4.1). We examine different ways to construct the continuous MU variable. We first consider two scenarios: one with only the core measures and the other with both the

core and the menu measures. For each scenario, we then calculate both the average and the product of the measures. Table 11 shows the results from our analysis of the continuous MU variable. Our estimation indicates that a higher degree of MU indeed yields a greater improvement in quality of care. This finding is consistent with the early work of Devaraj and Kohli (2003) and reaffirms the importance of measuring and accounting for actual use in studying the business impacts of IT.

5.5 MU Effects and Hospital Characteristics: A Stratification Analysis

Our main results in Table 6 showed that MU has a positive and significant effect on quality of care. However, this effect may be heterogeneous across hospitals. To draw proper policy implications from our analyses, we investigate how hospital and environmental characteristics influence the effect of MU through a stratification analysis. We consider a number of characteristics, including hospital size, ownership, teaching status, region, and urban status. In each stratum, we estimate the FD model using only treatment and control hospitals in that stratum. For example, in the small-size stratum we estimate the model using data only from hospitals with fewer than 100 beds in both the treatment and control groups. As another example, in the government-ownership stratum we focus only on hospitals owned by government agencies. We choose the FD model instead of the DID model because in some strata there is no variation in certain time-constant control variables. For instance, in the AlwaysMU control group, the small-size stratum has no teaching hospitals, rendering the DID estimation impossible. The FD estimation, on the other hand, does not have this problem, since the time-invariant control variables drop out of the estimation.
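The stratified FD estimation can be sketched as follows. The data, stratum definition, and effect sizes are simulated; in particular, the larger assumed effect for small hospitals is built into the data-generating process purely for illustration:

```python
# Sketch of a stratification analysis: estimate the FD model separately
# within each stratum. "Small" means fewer than 100 beds, as in the text;
# everything else here is synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600
beds = rng.integers(25, 500, n)
small = beds < 100
treat = (rng.random(n) < 0.5).astype(int)

# assumed heterogeneous effect: larger for small hospitals (illustrative)
effect = np.where(small, 1.0, 0.25)
dq = 0.2 + effect * treat + rng.normal(0, 1, n)  # first-differenced quality

df = pd.DataFrame({"dq": dq, "treat": treat, "small": small})

# FD regression of the quality change on the treatment dummy, per stratum
estimates = {}
for label, stratum in df.groupby("small"):
    fit = smf.ols("dq ~ treat", data=stratum).fit()
    estimates["small" if label else "large"] = fit.params["treat"]
```

Because the outcome is already first-differenced, time-invariant hospital characteristics never enter the regression, which is why degenerate strata (e.g., no teaching hospitals) pose no problem for this estimator.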
Table 12 and Figure 4 show the results from our stratification analysis. The results have several important policy implications. We find that hospitals traditionally deemed to have lower quality, such as small, non-teaching, and rural hospitals, can in fact attain greater quality improvement from meaningful use of EHR than other hospitals. Specifically, the effect of MU in small hospitals is more than four times the MU effect in large hospitals (0.98 vs. 0.23, model 2 in Figure 4). Similarly, the

effect in rural hospitals is over eight times the MU effect in urban hospitals (1.16 vs. 0.13, model 2 in Figure 4). It is noteworthy that these disadvantaged hospitals were also the ones shown to be slower in adopting EHR (Jha et al. 2009; DesRoches et al. 2012). These results suggest that the Medicare EHR Incentive Program not only accelerated the overall adoption and meaningful use of EHR technology, but, more importantly, significantly enhanced quality for disadvantaged hospitals that are in greater need of better care. In other words, MU of EHR can potentially be an effective approach to mitigating healthcare disparities.

6. Conclusions

The goal of this study is to investigate the relationship between hospitals' meaningful use (MU) of electronic health records (EHR) and quality of care. Through multiple empirical specifications and numerous robustness checks, we find that meaningful use of EHR significantly improved quality of care. More importantly, disadvantaged (small, non-teaching, or rural) hospitals tend to attain a greater degree of quality improvement from MU. The results of this study are important for three reasons. First, while there have been multiple studies on the beneficial quality effects of implementing EHR or MU, to the best of our knowledge this study is the first with a formal, objective measurement of MU. Second, we are among the first to leverage the Medicare EHR Incentive Program for exogenous variation in identifying the clinical impact of MU. Our findings provide strong empirical evidence of the positive quality impact of meaningfully using EHR technology. Third, from a policy evaluation perspective, the findings support the effectiveness of the Medicare EHR Incentive Program and the goals of the HITECH Act.
As the federal initiative moves toward Stage 2 MU, 14 this study provides an early assessment of the clinical benefits and policy implications of the MU initiative. 14 Stage 2 MU requires a greater degree of system usage, consolidates a number of Stage 1 MU measures, and introduces a few new measures. A comprehensive comparison of Stage 1 and Stage 2 MU objectives is available at Guidance/Legislation/EHRIncentivePrograms/Downloads/Stage1vsStage2CompTablesforHospitals.pdf

Some limitations of our study point to several directions for future research. First, there are multiple stages of MU that the Medicare EHR Incentive Program intends to implement; in this study we considered only Stage 1 MU, which comprises basic but essential objectives for meaningful use of EHR. Second, there have been discussions of the limitations of existing quality measures (Kern et al. 2009; Jones et al. 2010). While we found an effect of MU on the existing quality measures, it would have been ideal if richer and more accurate measures were available. Despite these limitations, replicating our empirical framework with higher stages of MU or with better measures of healthcare quality is straightforward. Most importantly, our study represents an important first step in understanding the effect of not just adoption, but meaningful use of EHR technology on quality of care. Finally, and more broadly, by moving beyond adoption and focusing on the meaningful use of IT, our study also contributes to the long and growing literature in information systems on the adoption and value of information technologies (Brynjolfsson and Hitt 1996; Banker and Kauffman 2004) by examining a different but socially important outcome metric: the quality of healthcare services.

Tables and Figures

Table 1: Summary of prior studies on the effects of EHR

Study Agha (2014) Appari et al. (2013) Dey et al. (2013) McCullough et al. (2013) Appari et al. (2012) Dranove et al. (2012) Hah and Bharadwaj (2012) Furukawa (2011) Miller and Tucker (2011) Main Data Sources AHA, HADB, MC CMS-HC, HADB CMS-CR, HADB AHA, HADB, MC CMS-IPPS, CMS-HC, HADB AHA, CMS-CR, HADB Data Period Data Units 3,880 hospitals 3,921 hospitals NA 1,011 hospitals ,953 hospitals ,603 hospitals AHA, HADB ,231 hospitals 2,557 hospitals Main Dependent Variables Hospital saving and quality Main Independent Variables Use of HIT (EMR or CDS) Identification strategy Analysis Main Findings FE PDA Negative. HIT has no effect on medical expenditures and patient outcomes. Process quality EMR capability FE PDA Positive. Increased EHR capability yielded increased process quality. Operational performance Patient outcome (mortality) Medication administration quality Hospital operating costs Hospital operation and financial performance EHR capability PA CSA Positive. EHR capability was positively associated with operational performance. Use of EHR and CPOE Use of CPOE and eMAR DID PDA Negative. There was no relationship between HIT and mortality. PA CSA Positive. Use of eMAR and CPOE improved adherence to medication guidelines. EHR adoption FE PDA Mixed. EHR adoption was initially associated with increased cost, which decreased after 3 years if complementary conditions were met. HIT use and HIT capital None PDA Positive. HIT use and HIT capital positively related to operation and financial performance. NHAMCS EDs ED throughput EMR capability IV CSA Mixed. Advanced EHR improved ED efficiency, but basic EHR did not. CDC-VSCP, HADB ,764 hospitals Neonatal mortality EHR adoption IV, FE PDA Positive. EHR reduced neonatal mortality.

Romano and Stafford (2011) Furukawa et al. (2010) Himmelstein et al. (2010) Jones et al. (2010) McCullough et al. (2010) This paper NAMCS, NHAMCS COSHPD, HADB CMS-CR, DHA, HADB AHA, CMS-HC, HADB AHA, CMS-HC, HADB CMS-CR, CMS-EHRIP, CMS-HC, HADB, JC 2011Q4, 2013Q1 243,478 patient visits 326 hospitals in California Approx. 4,000 hospitals 2,086 hospitals 3,401 hospitals 2,747 hospitals Ambulatory quality Nurse staffing and nurse-sensitive patient outcomes Hospital costs and quality Use of EHR and CDS EHR implementation Degree of Computerization None CSA Negative. EHR and CDS were not associated with ambulatory care quality. FE PDA Negative. EHR systems did not decrease hospital costs, length of stay, and nurse staffing levels. None CSA Negative. Computerization had no effect on hospital costs and quality. Process quality EHR capability DID, FE, PA PDA Mixed. Adopting basic EHR significantly increased care quality of heart failure, but adopting advanced EHR significantly decreased care quality of acute myocardial infarction and heart failure. Process quality Process quality HIT adoption (EHR & CPOE) Meaningful use of EHR FE PDA Mixed. HIT adoption improved 2 of 6 process quality measures. DID, FD, PA PDA Positive. Meaningful use of EHR significantly improves quality of care, and the effect is larger among hospitals that are small, non-teaching, or located in rural areas. Technology use/adoption is determined a year before. Note.
AHA=American Hospital Association Annual Survey; CDC-VSCP=Centers for Disease Control and Prevention's Vital Statistics Cooperative Program; CDS=Clinical decision support; CMS-IPPS=CMS Inpatient Prospective Payment System; CMS-CR=CMS Cost Reports; CMS-EHRIP=CMS EHR Incentive Programs; CMS-HC=CMS Hospital Compare database; COSHPD=California Office of Statewide Health Planning and Development Annual Financial Disclosure Reports and Patient Discharge Databases; CPOE=Computerized physician order entry; DID=Difference-in-differences; DHA=Dartmouth Health Atlas; ED=Emergency department; EHR=Electronic health records; FD=First-difference; FE=Fixed-effects; HADB=Healthcare Information and Management Systems Society's Analytics Database; IV=Instrumental variables; JC=the Joint Commission; MC=Medicare claims; NAMCS=National Ambulatory Medical Care Survey; NHAMCS=National Hospital Ambulatory Medical Care Survey; PA=Propensity adjustments; WSDH=Washington State Department of Health hospital database

Table 2: Data Descriptions

Meaningful Use (MU): binary, time-invariant. Indicates whether a hospital reached MU by a CMS-specified point in time. Source: CMS-EHRIP.
Quality of Care: numeric, time-varying. Composite quality score for the process of care, ranging from 0 (lowest quality) to 100 (highest quality). Source: JC.
Age: numeric, time-invariant. Age of hospital as of 2012 (2012 minus the year formed). Source: HADB.
Size: numeric, time-varying. Total number of hospital beds. Source: CMS-CR.
Annual Discharges: numeric, time-varying. Total number of inpatient discharges in a year. Source: CMS-CR.
Annual Inpatient Days: numeric, time-varying. Total number of inpatient days in a year. Source: CMS-CR.
Annual Medicare Discharges: numeric, time-varying. Total number of Medicare inpatient discharges in a year. Source: CMS-CR.
Annual Medicare Inpatient Days: numeric, time-varying. Total number of Medicare inpatient days in a year. Source: CMS-CR.
Transfer-adjusted case mix index (TACMI): numeric, time-varying. A value used to characterize the overall severity of the hospital's patient base. Source: CMS-IPPS.
Teaching status: binary, time-invariant. Whether the hospital is a member of COTH. Source: COTH.
Ownership: categorical, time-invariant. Whether the hospital is owned by a government, non-profit, or proprietary agency. Source: CMS-HC.
Rural area: binary, time-invariant. Whether the hospital is located in a rural area. Source: RUCA 2.0, CMS-HC.
Region: categorical, time-invariant. Whether the hospital is located in the Midwest, Northeast, South, or West. Source: Census regions, CMS-HC.
Note. COTH=Council of Teaching Hospitals; RUCA=Rural Urban Commuting Area. Other abbreviations follow Table 1.

Table 3: Stage 1 MU Core and Menu Objectives

Core Objectives
1 CPOE for Medication Orders
2 Drug Interaction Checks
3 Maintain Problem List
4 Active Medication List
5 Medication Allergy List
6 Record Demographics
7 Record Vital Signs
8 Record Smoking Status
9 Clinical Quality Measures
10 Clinical Decision Support Rule
11 Electronic Copy of Health Information
12 Discharge Instructions
13 Electronic Exchange of Clinical Information
14 Protect Electronic Health Information

Menu Objectives
1 Drug Formulary Checks
2 Advanced Directives
3 Clinical Lab Test Results
4 Patient Lists
5 Patient-specific Education Resources
6 Medication Reconciliation
7 Transition of Care Summary
8 Immunization Registries Data Submission
9 Reportable Lab Results
10 Syndromic Surveillance Data Submission

Table 4: Transfer factors for eligible hospitals

Demonstrate MU        Payment year 1   Payment year 2   Payment year 3   Payment year 4
2013 or earlier       1.00             0.75             0.50             0.25
2014                  0.75             0.50             0.25             --

Table 5. Summary Statistics Control Group AlwaysMU Control Group NeverMU Treatment Group P- Value P- Value # of hospitals Mean Age (37.39) (33.44) (37.53) Mean Size (201.9) (194.9) (182) Mean Total Inpatient Discharges (10.87) (10.66) (9.71) Mean Total Inpatient Days (58.49) (55.84) (51.4) Mean Medicare Inpatient Discharges 3.75 (3.182) (3.344) (3.072) Mean Medicare Inpatient Days (18.55) (18.7) (17.56) Mean TACMI (0.261) (0.255) < (0.269) Percent of Teaching Hospitals 10.5 % 11.6 % % 0.34 Percent of Rural Hospitals 30.2 % 20.7 % < % Percent of Government Hospitals 15.2 % 12.4 % % Percent of Nonprofit Hospitals 67.2 % 60.7 % % Percent of Proprietary Hospitals 17.6 % 26.9 % < % Percent of Hospitals in the Midwest 24 % 23 % % Percent of Hospitals in the Northeast 19.7 % 14.1 % % Percent of Hospitals in the South 39.9 % 45.3 % % 0.13 Percent of Hospitals in the West 16.4 % 17.6 % %

Table 6. Panel Data Models for the Quality Effect of MU

Eight specifications: columns 1-4 use the DID estimator and columns 5-8 the FD estimator; within each estimator, the control group is AlwaysMU (two columns) then NeverMU (two columns), first without and then with control variables.

Key estimate, TreatmentGroup x PostPeriod (robust SE):
DID, AlwaysMU: .319** (.155); with controls: .320** (.156)
DID, NeverMU: .446*** (.111); with controls: .479*** (.112)
FD, AlwaysMU: .319** (.155); with controls: .328** (.156)
FD, NeverMU: .446*** (.111); with controls: .438*** (.112)

The specifications also report TreatmentGroup and PostPeriod main effects, a constant, and (where included) the controls Age, Size, TotalDischarges, TotalInpatientDays, MedicareDischarges, MedicareInpatientDays, TACMI, Rural, Teach, Nonprofit, Proprietary, and region dummies (Midwest, Northeast, South); those coefficients could not be reliably assigned to columns in this transcription.

Note. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
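The DID estimates in Table 6 rest on comparing before/after quality changes across treatment and control groups. As an illustrative sketch only (the quality scores below are made up, not the paper's hospital data), the basic 2x2 difference-in-differences computation is:

```python
# Minimal 2x2 difference-in-differences sketch with hypothetical data.

def did_estimate(quality):
    """quality[(group, period)] -> mean composite quality score.
    group: 'treat' or 'control'; period: 'pre' or 'post'."""
    treat_change = quality[("treat", "post")] - quality[("treat", "pre")]
    control_change = quality[("control", "post")] - quality[("control", "pre")]
    # The DID estimate is the treated group's change net of the control trend.
    return treat_change - control_change

# Hypothetical cell means on the 0-100 composite quality scale.
means = {
    ("treat", "pre"): 92.0, ("treat", "post"): 92.8,
    ("control", "pre"): 93.0, ("control", "post"): 93.4,
}
print(round(did_estimate(means), 2))  # treat gain 0.8 minus control gain 0.4 -> 0.4
```

The regression form in Table 6 recovers the same quantity as the coefficient on TreatmentGroup x PostPeriod, while additionally allowing covariate adjustment.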

Table 7. Results from the Matched Dataset

Same eight-column layout as Table 6 (DID then FD; control group AlwaysMU then NeverMU; without and with control variables), estimated on the matched sample.

Key estimate, TreatmentGroup x PostPeriod (robust SE):
DID, AlwaysMU: .281*** (.071); with controls: .305*** (.072)
DID, NeverMU: .351*** (.059); with controls: .379*** (.059)
FD, AlwaysMU: .281*** (.071); with controls: .285*** (.072)
FD, NeverMU: .351*** (.059); with controls: .343*** (.059)

The remaining rows (main effects, constant, and the control variables listed in Table 6) could not be reliably assigned to columns in this transcription.

Note. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
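Table 7 repeats the analysis on a matched sample of treated and control hospitals. As a rough sketch of one common way such samples are built, one-to-one nearest-neighbor matching without replacement on a similarity score is shown below; the scores, names, and greedy matching rule are illustrative assumptions, not the paper's exact matching procedure.

```python
# Hypothetical one-to-one nearest-neighbor matching without replacement.

def nearest_neighbor_match(treated, controls):
    """Match each treated unit to the closest still-unused control by score.
    treated, controls: {unit_id: score}. Returns {treated_id: control_id}."""
    available = dict(controls)  # copy so callers' dict is untouched
    matches = {}
    for t_id, t_score in treated.items():
        # Pick the available control with the smallest absolute score gap.
        c_id = min(available, key=lambda c: abs(available[c] - t_score))
        matches[t_id] = c_id
        del available[c_id]  # matching without replacement
    return matches

treated = {"T1": 0.62, "T2": 0.35}
controls = {"C1": 0.30, "C2": 0.60, "C3": 0.90}
print(nearest_neighbor_match(treated, controls))  # {'T1': 'C2', 'T2': 'C1'}
```

Estimating the DID and FD models on the matched pairs then reduces reliance on covariate adjustment for comparability.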

Table 8. Results from the Quantile Analysis with the DID Specification

Six columns: control group AlwaysMU at three quantiles, then control group NeverMU at the same three quantiles (the quantile levels were lost in transcription). Each specification includes TreatmentGroup, PostPeriod, their interaction, a constant, and the full set of controls (Age, Size, TotalDischarges, TotalInpatientDays, MedicareDischarges, MedicareInpatientDays, TACMI, Rural, Teach, Nonprofit, Proprietary, and region dummies). The individual coefficient estimates, including the TreatmentGroup x PostPeriod interaction, could not be reliably assigned to columns in this transcription.

Note. Bootstrapped standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)

Table 9. Results from the Quantile Analysis with the FD Specification

Six columns: control group AlwaysMU at three quantiles, then control group NeverMU at the same three quantiles (the quantile levels were lost in transcription). Each specification includes TreatmentGroup, PostPeriod, their interaction, a constant, and the time-varying controls (Age, Size, TotalDischarges, TotalInpatientDays, MedicareDischarges, MedicareInpatientDays, TACMI). The individual coefficient estimates, including the TreatmentGroup x PostPeriod interaction, could not be reliably assigned to columns in this transcription.

Note. Bootstrapped standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)
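The FD specifications used in Tables 6, 9, and 11 difference each hospital's outcome across consecutive years, so time-invariant hospital characteristics drop out of the estimating equation. A minimal sketch of that transform, with hypothetical panel data rather than the paper's sample:

```python
# First-difference (FD) transform of a hospital-year panel (hypothetical data).

def first_differences(panel):
    """panel: {hospital_id: [(year, quality), ...]} ->
    list of (hospital_id, year, quality_change) for consecutive observations."""
    out = []
    for hid, obs in panel.items():
        obs = sorted(obs)  # ensure chronological order within each hospital
        for (y0, q0), (y1, q1) in zip(obs, obs[1:]):
            out.append((hid, y1, q1 - q0))  # within-hospital change
    return out

panel = {"A": [(2010, 90.0), (2011, 91.5), (2012, 93.0)],
         "B": [(2010, 95.0), (2011, 95.5)]}
print(first_differences(panel))
# -> [('A', 2011, 1.5), ('A', 2012, 1.5), ('B', 2011, 0.5)]
```

Regressing these within-hospital changes on changes in the MU indicator and the time-varying controls yields the FD estimates.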

Table 10. Results from the Censored Regression Analysis with the DID Specification

Four columns: control group AlwaysMU without and with control variables, then control group NeverMU without and with control variables.

Key estimate, TreatmentGroup x PostPeriod (SE):
AlwaysMU: .246** (.107); with controls: .276*** (.098)
NeverMU: .356*** (.099); with controls: .384*** (.094)

The remaining rows (TreatmentGroup and PostPeriod main effects, constant, and the controls Age, Size, TotalDischarges, TotalInpatientDays, MedicareDischarges, MedicareInpatientDays, TACMI, Rural, Teach, Nonprofit, Proprietary, and region dummies) could not be reliably assigned to columns in this transcription.

Note. Standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)

Table 11. Analysis of Continuous MU Variable using the FD Specification

Eight columns: control group (AlwaysMU or NeverMU) crossed with four constructions of the continuous MU variable: average of core MU measures (Avg CMs), product of core MU measures (Prod CMs), average of core and menu MU measures (Avg CMs & MMs), and product of core and menu MU measures (Prod CMs & MMs).

Key estimate, ContinuousMU (robust SE):
AlwaysMU, Avg CMs: .381** (.170); Prod CMs: .694*** (.257)
NeverMU, Avg CMs: .374** (.168); Prod CMs: .600** (.284)
AlwaysMU, Avg CMs & MMs: .504*** (.122); Prod CMs & MMs: .783*** (.193)
NeverMU, Avg CMs & MMs: .505*** (.122); Prod CMs & MMs: .867*** (.220)

Each specification also includes Size, TotalDischarges, TotalInpatientDays, MedicareDischarges, MedicareInpatientDays, TACMI, and a constant; those coefficients could not be reliably assigned to columns in this transcription.

Note 1. CMs = core MU measures; MMs = menu MU measures.
Note 2. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)

Table 12. Stratification Analysis on the Quality Effect of MU

For each stratum, the table reports the number of treatment/control hospitals and the MU estimate (robust SE), separately for the AlwaysMU and NeverMU control groups. Strata and treatment-group counts: Size: small (fewer than 100 beds, 255), medium (100 to 300 beds, 426), large (more than 300 beds, 233); Ownership: government (139), nonprofit (614), proprietary (161); Teaching status: teach (96), non-teach (818); Region: Midwest (219), Northeast (180), South (365), West (150); Urban status: urban (638), rural (276). The point estimates were lost in transcription; the surviving significance markers show the MU estimate is statistically significant under both control groups for small, nonprofit, non-teaching, and rural hospitals.

Note. Robust standard errors are shown in parentheses. (*p < 0.1; **p < 0.05; ***p < 0.01)

Figure 1. Historical quality trends of the treatment and control groups

Figure 2. Proportions of acute care hospitals attaining MU in each state from 2011 to


ELECTRONIC HEALTH RECORD PROGRAMS. Participation Has Increased, but Action Needed to Achieve Goals, Including Improved Quality of Care United States Government Accountability Office Report to Congressional Committees March 2014 ELECTRONIC HEALTH RECORD PROGRAMS Participation Has Increased, but Action Needed to Achieve Goals, Including

More information

what value-based purchasing means to your hospital

what value-based purchasing means to your hospital Paul Shoemaker what value-based purchasing means to your hospital CMS has devised an intricate way to measure a hospital s quality of care to determine whether the hospital qualifies for incentive payments

More information

To: From: Date: Subject: Proposed Rule on Meaningful Use Requirements Stage 2 Measures, Payment Penalties, Hardship Exceptions and Appeals

To: From: Date: Subject: Proposed Rule on Meaningful Use Requirements Stage 2 Measures, Payment Penalties, Hardship Exceptions and Appeals MEMORANDUM To: PPSV Clients and Friends From: Barbara Straub Williams Date: Subject: Proposed Rule on Meaningful Use Requirements Stage 2 Measures, Payment Penalties, Hardship Exceptions and Appeals The

More information

Meaningful Use Rules Proposed for Electronic Health Record Incentives Under HITECH Act By: Cherilyn G. Murer, JD, CRA

Meaningful Use Rules Proposed for Electronic Health Record Incentives Under HITECH Act By: Cherilyn G. Murer, JD, CRA Meaningful Use Rules Proposed for Electronic Health Record Incentives Under HITECH Act By: Cherilyn G. Murer, JD, CRA Introduction On December 30, 2009, The Centers for Medicare & Medicaid Services (CMS)

More information

Reduction in Medication Errors due to Adoption of Computerized Provider Order Entry Systems

Reduction in Medication Errors due to Adoption of Computerized Provider Order Entry Systems UCSD Informatics Journal Club Webinar May 2, 2013 Reduction in Medication Errors due to Adoption of Computerized Provider Order Entry Systems David Radley, PhD Institute for Healthcare Improvement & The

More information

Frequently Asked Questions (FAQs)

Frequently Asked Questions (FAQs) Registration and Enrollment... 2 Provider Registration- First Year Applicants... 2 Provider Registration- Returning Applicants... 2 Provider Eligibility... 3 Eligibility Eligible Professionals... 3 Eligibility

More information

June 15, 2015 VIA ELECTRONIC SUBMISSION

June 15, 2015 VIA ELECTRONIC SUBMISSION Charles N. Kahn III President & CEO June 15, 2015 VIA ELECTRONIC SUBMISSION Andrew M. Slavitt Acting Administrator Centers for Medicare & Medicaid Services Department of Health and Human Services Attention:

More information

Contact: Barbara J Stout RN, BSC Implementation Specialist University of Kentucky Regional Extension Center 859-323-4895

Contact: Barbara J Stout RN, BSC Implementation Specialist University of Kentucky Regional Extension Center 859-323-4895 Contact: Barbara J Stout RN, BSC Implementation Specialist University of Kentucky Regional Extension Center 859-323-4895 $19.2B $17.2B Provider Incentives $2B HIT (HHS/ONC) Medicare & Medicaid Incentives

More information

The Meaningful Use Stage 2 Final Rule: Overview and Outlook

The Meaningful Use Stage 2 Final Rule: Overview and Outlook The Meaningful Use Stage 2 Final Rule: Overview and Outlook Devi Mehta, JD, MPH Cand. 1 Taylor Burke, JD, LLM 2 Lara Cartwright-Smith, JD, MPH 3 Jane Hyatt Thorpe, JD 4 Introduction On August 23, 2012,

More information

Welcome to the Meaningful Use and Data Analytics PowerPoint presentation in the Data Analytics Toolkit. In this presentation, you will be introduced

Welcome to the Meaningful Use and Data Analytics PowerPoint presentation in the Data Analytics Toolkit. In this presentation, you will be introduced Welcome to the Meaningful Use and Data Analytics PowerPoint presentation in the Data Analytics Toolkit. In this presentation, you will be introduced to meaningful use and the role of data analytics in

More information

Guidelines for Patient-Centered Medical Home (PCMH) Recognition and Accreditation Programs. February 2011

Guidelines for Patient-Centered Medical Home (PCMH) Recognition and Accreditation Programs. February 2011 American Academy of Family Physicians (AAFP) American Academy of Pediatrics (AAP) American College of Physicians (ACP) American Osteopathic Association (AOA) Guidelines for Patient-Centered Medical Home

More information

A predictive analytics platform powered by non-medical staff reduces cost of care among high-utilizing Medicare fee-for-service beneficiaries

A predictive analytics platform powered by non-medical staff reduces cost of care among high-utilizing Medicare fee-for-service beneficiaries A predictive analytics platform powered by non-medical staff reduces cost of care among high-utilizing Medicare fee-for-service beneficiaries Munevar D 1, Drozd E 1, & Ostrovsky A 2 1 Avalere Health, Inc.

More information

The Household Level Impact of Public Health Insurance. Evidence from the Urban Resident Basic Medical Insurance in China. University of Michigan

The Household Level Impact of Public Health Insurance. Evidence from the Urban Resident Basic Medical Insurance in China. University of Michigan The Household Level Impact of Public Health Insurance Evidence from the Urban Resident Basic Medical Insurance in China University of Michigan Jianlin Wang April, 2014 This research uses data from China

More information

Stage 2 of Meaningful Use: Ten Points of Interest

Stage 2 of Meaningful Use: Ten Points of Interest November 8, 2012 Practice Group: Health Care Stage 2 of Meaningful Use: Ten Points of Interest By Patricia C. Shea On September 4, 2012, the Department of Health and Human Services, Centers for Medicare

More information

Introduction. The History of CMS/JCAHO Measure Alignment

Introduction. The History of CMS/JCAHO Measure Alignment Release Notes: Introduction - Version 2.2 Introduction The History of CMS/JCAHO Measure Alignment In early 1999, the Joint Commission solicited input from a wide variety of stakeholders (e.g., clinical

More information

Medicare Advantage Star Ratings: Detaching Pay from Performance Douglas Holtz- Eakin, Robert A. Book, & Michael Ramlet May 2012

Medicare Advantage Star Ratings: Detaching Pay from Performance Douglas Holtz- Eakin, Robert A. Book, & Michael Ramlet May 2012 Medicare Advantage Star Ratings: Detaching Pay from Performance Douglas Holtz- Eakin, Robert A. Book, & Michael Ramlet May 2012 EXECUTIVE SUMMARY Rewarding quality health plans is an admirable goal for

More information

Meaningful Use Timeline

Meaningful Use Timeline Eligible Hospitals and CAHs (Federal Fiscal Year Base) Meaningful Use Timeline Year One: October 1, 2010 Reporting year begins for eligible hospitals and CAHs. July 3, 2011 Last day for eligible hospitals

More information

Sustainable Growth Rate (SGR) Repeal and Replace: Comparison of 2014 and 2015 Legislation

Sustainable Growth Rate (SGR) Repeal and Replace: Comparison of 2014 and 2015 Legislation Sustainable Growth Rate (SGR) Repeal and Replace: Comparison of 2014 and 2015 Legislation Proposal 113 th Congress - - H.R.4015/S.2000 114 th Congress - - H.R.1470 SGR Repeal and Annual Updates General

More information

A Guide to Understanding and Qualifying for Meaningful Use Incentives

A Guide to Understanding and Qualifying for Meaningful Use Incentives A Guide to Understanding and Qualifying for Meaningful Use Incentives A White Paper by DrFirst Copyright 2000-2012 DrFirst All Rights Reserved. 1 Table of Contents Understanding and Qualifying for Meaningful

More information

MEDICAL ASSISTANCE STAGE 2 SUMMARY

MEDICAL ASSISTANCE STAGE 2 SUMMARY MEDICAL ASSISTANCE STAGE 2 SUMMARY OVERVIEW On September 4, 2012, CMS published a final rule that specifies the Stage 2 Meaningful Use criteria that eligible professionals (EPs), eligible hospitals (EHs)

More information

The Road to Meaningful Use EHR Stimulus Payments. By Amy S. Leopard, Walter & Haverfield LLP

The Road to Meaningful Use EHR Stimulus Payments. By Amy S. Leopard, Walter & Haverfield LLP The Road to Meaningful Use EHR Stimulus Payments By Amy S. Leopard, Walter & Haverfield LLP On July 28, 2010, the Centers for Medicare and Medicaid Services (CMS) published a final rule regarding what

More information

Outcomes-based payment for population health management

Outcomes-based payment for population health management Outcomes-based payment for population health management February 10, 2016 Introduction PURPOSE OF THIS PAPER Since July 2014, the Delaware Center for Health Innovation (DCHI) has been convening stakeholders

More information

Choosing a Medicare Part D Plan: Are Medicare Beneficiaries Choosing Low-Cost Plans?

Choosing a Medicare Part D Plan: Are Medicare Beneficiaries Choosing Low-Cost Plans? THE MEDICARE DRUG BENEFIT Choosing a Medicare Part D Plan: Are Medicare Beneficiaries Choosing Low-Cost Plans? Prepared By: Jonathan Gruber MIT For: The Henry J. Kaiser Family Foundation March 2009 This

More information

Accountable Care: Implications for Managing Health Information. Quality Healthcare Through Quality Information

Accountable Care: Implications for Managing Health Information. Quality Healthcare Through Quality Information Accountable Care: Implications for Managing Health Information Quality Healthcare Through Quality Information Introduction Healthcare is currently experiencing a critical shift: away from the current the

More information

Moving Closer to Clarity

Moving Closer to Clarity Meaningful Use: Moving Closer to Clarity 28 July 2010 MEANINGFUL USE: Moving Closer to Clarity Table of Contents Caveats page 2 Meaningful Use Final Regulation page 3 Meaningful User page 4 Objectives

More information

Increase Participation Through Partial Incentives

Increase Participation Through Partial Incentives February 26, 2010 Ms. Charlene M. Frizzera Acting Administrator Centers for Medicare & Medicaid Services Attn. CMS-0033-P P.O. Box 8016 Baltimore, MD 21244-8016 Dear Ms. Frizzera, I am writing on behalf

More information

Frequently Asked Questions: Electronic Health Records (EHR) Incentive Payment Program

Frequently Asked Questions: Electronic Health Records (EHR) Incentive Payment Program 1. Where did the Electronic Health Records (EHR) Incentive Program originate? The American Recovery and Reinvestment Act (ARRA) was signed into law on February 17, 2009, and established a framework of

More information

A Study by the National Association of Urban Hospitals September 2012

A Study by the National Association of Urban Hospitals September 2012 The Potential Impact of the Affordable Care Act on Urban Safety-Net Hospitals A Study by the National Association of Urban Hospitals September 2012 Introduction One by one and provision by provision, the

More information

Iowa Medicaid Health Information Technology (HIT) and Electronic Health Record (EHR) Incentive Payment Program for Eligible Hospitals

Iowa Medicaid Health Information Technology (HIT) and Electronic Health Record (EHR) Incentive Payment Program for Eligible Hospitals Iowa Medicaid Health Information Technology (HIT) and Electronic Health Record (EHR) Incentive Payment Program for Eligible Hospitals May 2012 CONTENTS How to Determine If Hospital is Eligible for EHR

More information

TESTIMONY. The Potential Benefits and Costs of Increased Adoption of Health Information Technology RICHARD HILLESTAD CT-312.

TESTIMONY. The Potential Benefits and Costs of Increased Adoption of Health Information Technology RICHARD HILLESTAD CT-312. TESTIMONY The Potential Benefits and Costs of Increased Adoption of Health Information Technology RICHARD HILLESTAD CT-312 July 2008 Testimony presented before the Senate Finance Committee on July 17,

More information

Electronic Health Records: What it Means for Today s Radiologist By: Anne Reynolds

Electronic Health Records: What it Means for Today s Radiologist By: Anne Reynolds Electronic Health Records: What it Means for Today s Radiologist By: Anne Reynolds Introduction Program Overview In February of 2009, President Obama signed the American Recovery and Reinvestment Act of

More information

Home Health Care Today: Higher Acuity Level of Patients Highly skilled Professionals Costeffective Uses of Technology Innovative Care Techniques

Home Health Care Today: Higher Acuity Level of Patients Highly skilled Professionals Costeffective Uses of Technology Innovative Care Techniques Comprehensive EHR Infrastructure Across the Health Care System The goal of the Administration and the Department of Health and Human Services to achieve an infrastructure for interoperable electronic health

More information

Assessing the 2015 MA Star ratings

Assessing the 2015 MA Star ratings Intelligence Brief On October 10, 2014, CMS released the Medicare Advantage (MA) Star ratings for 2015. We analyzed CMS s data covering 691 MA plan contracts across the 50 states to determine which types

More information

Health Information Technology in the United States: Progress and Challenges Ahead, 2014

Health Information Technology in the United States: Progress and Challenges Ahead, 2014 Health Information Technology in the United States: Progress and Challenges Ahead, 2014 About the Robert Wood Johnson Foundation For more than 40 years the Robert Wood Johnson Foundation has worked to

More information

Meaningful Use Criteria for Eligible Hospitals and Eligible Professionals (EPs)

Meaningful Use Criteria for Eligible Hospitals and Eligible Professionals (EPs) Meaningful Use Criteria for Eligible and Eligible Professionals (EPs) Under the Electronic Health Record (EHR) meaningful use final rules established by the Centers for Medicare and Medicaid Services (CMS),

More information