Hospital Readmissions and Preventability
Hospital readmissions cost Medicare $15 to $17 billion per year.[1, 2] In 2010, the Hospital Readmission Reduction Program (HRRP), created by the Patient Protection and Affordable Care Act, authorized the Centers for Medicare and Medicaid Services (CMS) to penalize hospitals with higher‐than‐expected readmission rates for certain index conditions.[3] Other payers may follow suit, so hospitals and health systems nationwide are devoting significant resources to reducing readmissions.[4, 5, 6]
Implicit in these efforts are the assumptions that a significant proportion of readmissions are preventable, and that preventable readmissions can be identified. Unfortunately, estimates of preventability vary widely.[7, 8] In this article, we examine how preventable readmissions have been defined, measured, and calculated, and explore the associated implications for readmission reduction efforts.
THE MEDICARE READMISSION METRIC
The medical literature reveals substantial heterogeneity in how readmissions are assessed. Time periods range from 14 days to 4 years, and readmissions may be counted differently depending on whether they are to the same hospital or to any hospital, whether they are for the same (or a related) condition or for any condition, whether a patient is allowed to count only once during the follow‐up period, how mortality is treated, and whether observation stays are considered.[9]
Despite a lack of consensus in the literature, the approach adopted by CMS is endorsed by the National Quality Forum (NQF)[10] and has become the de facto standard for calculating readmission rates. CMS derives risk‐standardized readmission rates for acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN), using administrative claims data for each Medicare fee‐for‐service beneficiary 65 years or older.[11, 12, 13, 14] CMS counts the first readmission (but not subsequent ones) for any cause within 30 days of the index discharge, including readmissions to other facilities. Certain planned readmissions for revascularization are excluded, as are patients who left against medical advice, were transferred to another acute‐care hospital, or died during the index admission. Admissions to psychiatric, rehabilitation, cancer specialty, and children's hospitals are also excluded,[12] as are patients classified under observation status for either hospital stay.[15] Only administrative data are used in readmission calculations (ie, there are no chart reviews or interviews with healthcare personnel or patients). Details are published online and updated at least annually.[15]
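The core counting rule can be sketched in a few lines of code. This is a deliberately simplified illustration, not the CMS specification: it checks only whether any subsequent admission begins within 30 days of an index discharge, and it ignores the exclusions (planned revascularization, transfers, observation stays, in‐hospital deaths) described above.

```python
from datetime import date

def has_30day_readmission(index_discharge, later_admissions, window=30):
    """Return True if any later admission begins within `window` days of the
    index discharge. Because CMS counts only the first readmission per index
    stay, a single boolean per discharge is sufficient. Simplified sketch:
    the actual CMS measure also applies diagnosis-based and status-based
    exclusions not modeled here."""
    return any(0 < (adm - index_discharge).days <= window
               for adm in later_admissions)

# Hypothetical example: index discharge January 1, with subsequent
# admissions on January 20 (day 19, within the window) and February 15.
print(has_30day_readmission(date(2013, 1, 1),
                            [date(2013, 1, 20), date(2013, 2, 15)]))
```

Only the January 20 admission falls inside the 30‐day window; the February 15 admission would not count even in the absence of the earlier one.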
EFFECTS AND LIMITATIONS OF THE HRRP AND THE CMS READMISSION METRIC
Penalizing hospitals for higher‐than‐expected readmission rates based on the CMS metric has been successful in the sense that hospitals now feel more accountable for patient outcomes after discharge; they are implementing transitional care programs, improving communication, and building relationships with community programs.[4, 5, 16] Early data suggest a small decline in readmission rates of Medicare beneficiaries nationally.[17] Previously, such readmission rates were constant.[18]
Nevertheless, significant concerns with the current approach have surfaced.[19, 20, 21] First, why choose 30 days? This time horizon was believed to be long enough to capture readmissions attributable to an index admission yet short enough to reflect hospital‐delivered care and transitions to the outpatient setting, and it allows for collaboration between hospitals and their communities to reduce readmissions.[3] However, some have argued that this time horizon has little scientific basis,[22] and that hospitals are unfairly held accountable for a period in which outcomes may be influenced largely by the quality of outpatient care or the development of new problems.[23, 24] Approximately one‐third of 30‐day readmissions occur within the first 7 days, and more than half (55.7%) occur within the first 14 days[22, 25]; such time frames may be more appropriate for hospital accountability.[26]
Second, spurred by the focus of CMS penalties, efforts to reduce readmissions have largely concerned patients admitted for HF, AMI, or PN, although these 3 medical conditions account for only 10% of Medicare hospitalizations.[18] Programs focused on a narrow patient population may not benefit other patients with high readmission rates, such as those with gastrointestinal or psychiatric problems,[2] or lead to improvements in the underlying processes of care that could benefit patients in additional ways. Indeed, research suggests that low readmission rates may not be related to other measures of hospital quality.[27, 28]
Third, public reporting and hospital penalties are based on 3‐year historical performance, in part to accumulate a large enough sample size for each diagnosis. Hospitals that seek real‐time performance monitoring are limited to tracking surrogate outcomes, such as readmissions back to their own facility.[29, 30] Moreover, because of the long performance time frame, hospitals that achieve rapid improvement may endure penalties precisely when they are attempting to sustain their achievements.
Fourth, the CMS approach utilizes a complex risk‐standardization methodology, which has only modest ability to predict readmissions and allow hospital comparisons.[9] There is no adjustment for community characteristics, even though practice patterns are significantly associated with readmission rates,[9, 31] and more than half of the variation in readmission rates across hospitals can be explained by community characteristics such as access to care.[32] Moreover, patient factors such as race and socioeconomic status are currently excluded, in an attempt to hold hospitals to similar standards regardless of their patient population. This policy is hotly contested, however, and critics note that it penalizes hospitals for factors outside their control, such as patients' ability to afford medications.[33] Indeed, the June 2013 Medicare Payment Advisory Commission (MedPAC) report to Congress recommended evaluating hospital performance against facilities with a similar percentage of low‐income patients as a way to account for socioeconomic status.[34]
Fifth, observation stays are excluded, so patients who remain in observation status during their index or subsequent hospitalization cannot be counted as a readmission. Prevalence of observation care has increased, raising concerns that inpatient admissions are being shifted to observation status, producing an artificial decline in readmissions.[35] Fortunately, recent population‐level data provide some reassuring evidence to the contrary.[36]
Finally, and perhaps most significantly, the current readmission metric does not consider preventability. Recent reviews have demonstrated that estimates of preventability vary widely across individual studies, ranging from 5% to 79%, depending on study methodology and setting.[7, 8] Across these studies, on average, only 23% of 30‐day readmissions appear to be avoidable.[8] Another way to gauge the preventability of hospital readmissions is to note that the most effective multimodal care‐transition interventions reduce readmission rates by only about 30%, and most interventions are much less effective.[26] That only 23% to 30% of readmissions are likely preventable has profound implications for the anticipated results of hospital readmission reduction efforts. Interventions that are 75% effective in reducing preventable readmissions should be expected to produce only an 18% to 22% reduction in overall readmission rates.[37]
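The arithmetic behind this ceiling is simply the product of the preventable fraction and the intervention's effectiveness against that preventable subset. A minimal sketch of the calculation, using the 23% to 30% preventable range and 75% effectiveness quoted above (exact rounding may differ slightly from the cited figures):

```python
def overall_reduction(preventable_fraction, effectiveness):
    """Expected relative reduction in ALL readmissions when an intervention
    eliminates `effectiveness` of the preventable subset only; the
    nonpreventable remainder is untouched by definition."""
    return preventable_fraction * effectiveness

# A 75%-effective intervention applied across the 23%-30% preventable range
low = overall_reduction(0.23, 0.75)   # about 17% overall reduction
high = overall_reduction(0.30, 0.75)  # about 22.5% overall reduction
print(f"{low:.1%} to {high:.1%}")
```

Even a near‐perfect intervention against preventable readmissions therefore cannot drive the all‐cause rate to zero; it is bounded by the preventable fraction itself.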
FOCUSING ON PREVENTABLE READMISSIONS
A greater focus on identifying and targeting preventable readmissions would offer a number of advantages over the present approach. First, it is more meaningful to compare hospitals on the percentage of discharges resulting in a preventable readmission than on highly complex risk‐standardization procedures for selected conditions. Second, a focus on preventable readmissions more clearly identifies, and permits hospitals to target, opportunities for improvement. Third, if the focus were on preventable readmissions for a large number of conditions, the necessary sample size could be accumulated over a shorter period of time. Overall, such a preventable readmissions metric could serve as a more agile and undiluted performance indicator than the present 3‐year rolling average rate of all‐cause readmissions for selected conditions, the majority of which are probably not preventable.
DEFINING PREVENTABILITY
Defining a preventable readmission is critically important. However, neither a consensus definition nor a validated standard for assessing preventable hospital readmissions exists. Different conceptual frameworks and terms (eg, avoidable, potentially preventable, or urgent readmission) complicate the issue.[38, 39, 40]
Although the CMS measure does not address preventability, it is helpful to consider whether other readmission metrics incorporate this concept. United Health Group's (UHG; formerly PacifiCare) All‐Cause Readmission Index, the University HealthSystem Consortium's all‐cause 30‐Day Readmission Rate, and 3M Health Information Systems' (3M) Potentially Preventable Readmissions (PPR) are 3 commonly used measures.
Of these, only the 3M PPR metric includes the concept of preventability. 3M created a proprietary matrix of 98,000 readmission‐index admission All Patient Refined Diagnosis Related Group pairs, based on review by several physicians and the logical assumption that a readmission for a clinically related diagnosis is potentially preventable.[24, 41] A readmission and index admission are considered clinically related if any of the following occur: (1) medical readmission for continuation or recurrence of an initial, or closely related, condition; (2) medical readmission for acute decompensation of a chronic condition that was not the reason for the index admission but was plausibly related to care during or immediately after it (eg, readmission for diabetes in a patient whose index admission was for AMI); (3) medical readmission for an acute complication plausibly related to care during the index admission; (4) readmission for a surgical procedure for continuation or recurrence of the initial problem (eg, readmission for appendectomy following admission for abdominal pain and fever); or (5) readmission for a surgical procedure to address a complication resulting from care during the index admission.[24, 41] The readmission time frame is not standardized and may be set by the user. Though the approach is conceptually appealing in some ways, CMS and the NQF have expressed concern about it because of the uncertain reliability of the relatedness of the admission‐readmission diagnosis dyads.[3]
In the research literature, only a few studies have examined the 3M PPR or other preventability assessments that rely on the relatedness of diagnostic codes.[8] Using the 3M PPR, a study showed that 78% of readmissions were classified as potentially preventable,[42] which explains why the 3M PPR and all‐cause readmission metric may correlate highly.[43] Others have demonstrated that ratings of hospital performance on readmission rates vary by a moderate to large amount, depending on whether the 3M PPR, CMS, or UHG methodology is used.[43, 44] An algorithm called SQLape[45, 46] is used in Switzerland to benchmark hospitals and defines potentially avoidable readmissions as being related to index diagnoses or complications of those conditions. It has recently been tested in the United States in a single‐center study,[47] and a multihospital study is underway.
Aside from these algorithms using related diagnosis codes, most ratings of preventability have relied on subjective assessments made primarily through a review of hospital records, and approximately one‐third also included data from clinic visits or interviews with the treating medical team or patients/families.[8] Unfortunately, these reports provide insufficient detail on how to apply their preventability criteria to subsequent readmission reviews. Studies did, however, provide categories of preventability into which readmissions could be organized (see Supporting Information, Appendix Table 1, in the online version of this article for details from a subset of studies cited in van Walraven's reviews that illustrate this point).
Assessment of preventability by clinician review can be challenging. In general, such assessments have considered readmissions resulting from factors within the hospital's control to be avoidable (eg, providing appropriate discharge instructions, reconciling medications, arranging timely postdischarge follow‐up appointments), whereas readmissions resulting from factors not within the hospital's control are unavoidable (eg, patient socioeconomic status, social support, disease progression). However, readmissions resulting from patient behaviors or social reasons could potentially be classified as avoidable or unavoidable depending on the circumstances. For example, if a patient decides not to take a prescribed antibiotic and is readmitted with worsening infection, this could be classified as an unavoidable readmission from the hospital's perspective. Alternatively, if the physician prescribing the antibiotic was inattentive to the cost of the medication and the patient would have taken a less expensive medication had it been prescribed, this could be classified as an avoidable readmission. Differing interpretations of contextual factors may partially account for the variability in clinical assessments of preventability.
Indeed, despite the lack of consensus around a standard method of defining preventability, hospitals and health systems are moving forward to address the issue and reduce readmissions. A recent survey by America's Essential Hospitals (previously the National Association of Public Hospitals and Health Systems) indicated that (1) reducing readmissions was a high priority for the majority (86%) of members, (2) most had established interdisciplinary teams to address the issue, and (3) over half had a formal process for determining which readmissions were potentially preventable. Of the survey respondents, just over one‐third relied on staff review of individual patient charts or patient and family interviews, and slightly less than one‐third relied on other mechanisms such as external consultants, criteria developed by other entities, or the Institute for Clinical Systems Improvement methodology.[48] Approximately one‐fifth made use of 3M's PPR product, and slightly fewer used the Agency for Healthcare Research and Quality's list of ambulatory care sensitive conditions (ACSCs). These are medical conditions for which good outpatient care is believed to prevent the need for hospitalization (eg, asthma, congestive heart failure, diabetes) or for which early intervention minimizes complications.[49] Hospitalization rates for ACSCs may represent a good measure of excess hospitalization, with a focus on the quality of outpatient care.
RECOMMENDATIONS
We recommend that reporting of hospital readmission rates be based on preventable or potentially preventable readmissions. Although we acknowledge the challenges in doing so, the advantages are notable. At minimum, a preventable readmission rate would more accurately reflect the true gap in care and therefore hospitals' real opportunity for improvement, without being obscured by readmissions that are not preventable.
Because readmission rates are used for public reporting and financial penalties for hospitals, we favor a measure of preventability that reflects the readmissions that the hospital or hospital system has the ability to prevent. This would not penalize hospitals for factors that are under the control of others, namely patients and caregivers, community supports, or society at large. We further recommend that this measure apply to a broader composite of unplanned care, inclusive of both inpatient and observation stays, which have little distinction in patients' eyes, and both represent potentially unnecessary utilization of acute‐care resources.[50] Such a measure would require development, validation, and appropriate vetting before it is implemented.
The first step is for researchers and policy makers to agree on how a measure of preventable or potentially preventable readmissions could be defined. A common element of preventability assessment is to identify the degree to which the reasons for readmission are related to the diagnoses of the index hospitalization. To be reliable and scalable, this measure will need to be based on algorithms that relate the index and readmission diagnoses, most likely using claims data. Choosing common medical and surgical conditions and developing a consensus‐based list of related readmission diagnoses is an important first step. It would also be important to include some less common conditions, because they may reflect very different aspects of hospital care.
An approach based on a list of related diagnoses would represent potentially preventable rehospitalizations. Generally, clinical review is required to determine actual preventability, taking into account patient factors such as a high level of illness or functional impairment that leads to clinical decompensation in spite of excellent management.[51, 52] Clinical review, like a root cause analysis, also provides greater insight into hospital processes that may warrant improvement. Therefore, even if an administrative measure of potentially preventable readmissions is implemented, hospitals may wish to continue performing detailed clinical review of some readmissions for quality improvement purposes. When clinical review becomes more standardized,[53] a combined approach that uses administrative data plus clinical verification and arbitration may be feasible, as with hospital‐acquired infections.
Similar work to develop related sets of admission and readmission diagnoses has already been undertaken in development of the 3M PPR and SQLape measures.[41, 46] However, the 3M PPR is a proprietary system that has low specificity and a high false‐positive rate for identifying preventable readmissions when compared to clinical review.[42] Moreover, neither measure has yet achieved the consensus required for widespread adoption in the United States. What is needed is a nonproprietary listing of related admission and readmission diagnoses, developed with the engagement of relevant stakeholders, that goes through a period of public comment and vetting by a body such as the NQF.
Until a validated measure of potentially preventable readmissions can be developed, how could the current approach evolve toward preventability? The most feasible, rapidly implementable change would be to shorten the readmission time horizon from 30 days to 7 or 15 days. A 30‐day period holds hospitals accountable for complications of outpatient care or new problems that may develop weeks after discharge. Even though this may foster shared accountability and collaboration among hospitals and outpatient or community settings, research has demonstrated that early readmissions (eg, within 7 to 15 days of discharge) are more likely to be preventable.[54] Second, consideration of the socioeconomic status of hospital patients, as recommended by MedPAC,[34] would improve on the current model by comparing hospitals to like facilities when determining penalties for excess readmission rates. Finally, adjustment for community factors, such as practice patterns and access to care, would enable readmission metrics to better reflect factors under the hospital's control.[32]
CONCLUSION
Holding hospitals accountable for the quality of acute and transitional care is an important policy initiative that has accelerated many improvements in discharge planning and care coordination. Optimally, the policies, public reporting, and penalties should target preventable readmissions, which may represent as little as one‐quarter of all readmissions. By summarizing some of the issues in defining preventability, we hope to foster continued refinement of quality metrics used in this arena.
Acknowledgements
We thank Eduard Vasilevskis, MD, MPH, for feedback on an earlier draft of this article. This manuscript was informed by a special report titled Preventable Readmissions, written by Julia Lavenberg, Joel Betesh, David Goldmann, Craig Kean, and Kendal Williams of the Penn Medicine Center for Evidence‐based Practice. The review was performed at the request of the Penn Medicine Chief Medical Officer Patrick J. Brennan to inform the development of local readmission prevention metrics, and is available at
Disclosures
Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1TR000003. Dr. Kripalani receives support from the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number R01HL109388, and from the Centers for Medicare and Medicaid Services under awards 1C1CMS331006‐01 and 1C1CMS330979‐01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or Centers for Medicare and Medicaid Services.
- Physician Visits After Hospital Discharge: Implications for Reducing Readmissions. Washington, DC: National Institute for Health Care Reform; 2011. Report no. 6.
- Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
- Centers for Medicare and Medicaid Services, US Department of Health and Human Services. Medicare program: hospital inpatient prospective payment systems for acute care hospitals and the long‐term care hospital prospective payment system and FY 2012 rates. Fed Regist. 2011;76(160):51476–51846.
- Quality collaboratives and campaigns to reduce readmissions: what strategies are hospitals using? J Hosp Med. 2013;8:601–608.
- Contemporary data about hospital strategies to reduce unplanned readmissions: what has changed [research letter]? JAMA Intern Med. 2014;174(1):154–156.
- Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
- Comparing methods to calculate hospital‐specific rates of early death or urgent readmission. CMAJ. 2012;184(15):E810–E817.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
- National Quality Forum. Patient outcomes: all‐cause readmissions expedited review 2011. Available at: http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id
- Data shows reduction in Medicare hospital readmission rates during 2012. Medicare Medicaid Res Rev. 2013;3(2):E1–E11.
- Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
- Moving beyond readmission penalties: creating an ideal process to improve transitional care. J Hosp Med. 2013;8(2):102–109.
- A path forward on Medicare readmissions. N Engl J Med. 2013;368(13):1175–1177.
- American Hospital Association. TrendWatch: examining the drivers of readmissions and reducing unnecessary readmissions for better patient care. Washington, DC: American Hospital Association; 2011.
- Diagnoses and timing of 30‐day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355–363.
- Characteristics of hospitals receiving penalties under the hospital readmissions reduction program. JAMA. 2013;309(4):342–343.
- Identifying potentially preventable readmissions. Health Care Financ Rev. 2008;30(1):75–91.
- Use of hospital‐based acute care among patients recently discharged from the hospital. JAMA. 2013;309(4):364–371.
- Reducing hospital readmission rates: current strategies and future directions. Annu Rev Med. 2014;65:471–485.
- Relationship between hospital readmission and mortality rates for patients hospitalized with acute myocardial infarction, heart failure, or pneumonia. JAMA. 2013;309(6):587–593.
- Hospital performance measures and 30‐day readmission rates. J Gen Intern Med. 2013;28(3):377–385.
- Limitations of using same‐hospital readmission metrics. Int J Qual Health Care. 2013;25(6):633–639.
- Is same‐hospital readmission rate a good surrogate for all‐hospital readmission rate? Med Care. 2010;48(5):477–481.
- The relationship between hospital admission rates and rehospitalizations. N Engl J Med. 2011;365(24):2287–2295.
- Community factors and hospital readmission rates [published online April 9, 2014]. Health Serv Res. doi: 10.1111/1475-6773.12177.
- American Hospital Association. Hospital readmissions reduction program: factsheet. American Hospital Association. Available at: http://www.aha.org/content/13/fs‐readmissions.pdf. Published April 14, 2014. Accessed May 5, 2014.
- Medicare Payment Advisory Commission. Report to the Congress: Medicare and the health care delivery system. Available at: http://www.medpac.gov/documents/Jun13_EntireReport.pdf. Published June 14, 2013. Accessed May 5, 2014.
- Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Aff (Millwood). 2012;31(6):1251–1259.
- Quality improvement of care transitions and the trend of composite hospital care. JAMA. 2014;311(10):1013–1014.
- When projecting required effectiveness of interventions for hospital readmission reduction, the percentage that is potentially avoidable must be considered. J Clin Epidemiol. 2013;66(6):688–690.
- Urgent readmission rates can be used to infer differences in avoidable readmission rates between hospitals. J Clin Epidemiol. 2012;65(10):1124–1130.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Measuring and preventing potentially avoidable hospital readmissions: a review of the literature. Hong Kong Med J. 2010;16(5):383–389.
- 3M Health Information Systems. Potentially preventable readmissions classification system methodology: overview. 3M Health Information Systems; May 2008. Report No.: GRP‐139. Available at: http://multimedia.3m.com/mws/mediawebserver?66666UuZjcFSLXTtNXMtmxMEEVuQEcuZgVs6EVs6E666666‐‐. Accessed June 8, 2014.
- Manual and automated methods for identifying potentially preventable readmissions: a comparison in a large healthcare system. BMC Med Inform Decis Mak. 2014;14:28.
- Comparing 2 methods of assessing 30‐day readmissions: what is the impact on hospital profiling in the Veterans Health Administration? Med Care. 2013;51(7):589–596.
- It's not six of one, half‐dozen the other: a comparative analysis of 3 rehospitalization measurement systems for Massachusetts. AcademyHealth Annual Research Meeting. Seattle, WA; 2011. Available at: http://www.academyhealth.org/files/2011/tuesday/boutwell.pdf. Accessed May 9, 2014.
- Validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. Med Care. 2006;44(11):972–981.
- Measuring potentially avoidable hospital readmissions. J Clin Epidemiol. 2002;55:573–587.
- Potentially avoidable 30‐day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632–638.
- National Association of Public Hospitals and Health Systems. NAPH members focus on reducing readmissions. Available at: www.naph.org. Published June 2011. Accessed October 19, 2011.
- Agency for Healthcare Research and Quality. AHRQ quality indicators: prevention quality indicators. Available at: http://www.qualityindicators.ahrq.gov/Modules/pqi_resources.aspx. Accessed February 11, 2014.
- Shifting the dialogue from hospital readmissions to unplanned care. Am J Manag Care. 2013;19(6):450–453.
- Post‐hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100–102.
- The hospital‐dependent patient. N Engl J Med. 2014;370(8):694–697.
- The hospital medicine reengineering network (HOMERuN): a learning organization focused on improving hospital care. Acad Med. 2014;89(3):415–420.
- Incidence of potentially avoidable urgent readmissions and their relation to all‐cause urgent readmissions. CMAJ. 2011;183(14):E1067–E1072.
Hospital readmissions cost Medicare $15 to $17 billion per year.[1, 2] In 2010, the Hospital Readmission Reduction Program (HRRP), created by the Patient Protection and Affordable Care Act, authorized the Centers for Medicare and Medicaid Services (CMS) to penalize hospitals with higher‐than‐expected readmission rates for certain index conditions.[3] Other payers may follow suit, so hospitals and health systems nationwide are devoting significant resources to reducing readmissions.[4, 5, 6]
Implicit in these efforts are the assumptions that a significant proportion of readmissions are preventable, and that preventable readmissions can be identified. Unfortunately, estimates of preventability vary widely.[7, 8] In this article, we examine how preventable readmissions have been defined, measured, and calculated, and explore the associated implications for readmission reduction efforts.
THE MEDICARE READMISSION METRIC
The medical literature reveals substantial heterogeneity in how readmissions are assessed. Time periods range from 14 days to 4 years, and readmissions may be counted differently depending on whether they are to the same hospital or to any hospital, whether they are for the same (or a related) condition or for any condition, whether a patient is allowed to count only once during the follow‐up period, how mortality is treated, and whether observation stays are considered.[9]
Despite a lack of consensus in the literature, the approach adopted by CMS is endorsed by the National Quality Forum (NQF)[10] and has become the de facto standard for calculating readmission rates. CMS derives risk‐standardized readmission rates for acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN), using administrative claims data for each Medicare fee‐for‐service beneficiary 65 years or older.[11, 12, 13, 14] CMS counts the first readmission (but not subsequent ones) for any cause within 30 days of the index discharge, including readmissions to other facilities. Certain planned readmissions for revascularization are excluded, as are patients who left against medical advice, transferred to another acute‐care hospital, or died during the index admission. Admissions to psychiatric, rehabilitation, cancer specialty, and children's hospitals[12] are also excluded, as well as patients classified as observation status for either hospital stay.[15] Only administrative data are used in readmission calculations (ie, there are no chart reviews or interviews with healthcare personnel or patients). Details are published online and updated at least annually.[15]
EFFECTS AND LIMITATIONS OF THE HRRP AND THE CMS READMISSION METRIC
Penalizing hospitals for higher‐than‐expected readmission rates based on the CMS metric has been successful in the sense that hospitals now feel more accountable for patient outcomes after discharge; they are implementing transitional care programs, improving communication, and building relationships with community programs.[4, 5, 16] Early data suggest a small decline in readmission rates of Medicare beneficiaries nationally.[17] Previously, such readmission rates were constant.[18]
Nevertheless, significant concerns with the current approach have surfaced.[19, 20, 21] First, why choose 30 days? This time horizon was believed to be long enough to identify readmissions attributable to an index admission and short enough to reflect hospital‐delivered care and transitions to the outpatient setting, and it allows for collaboration between hospitals and their communities to reduce readmissions.[3] However, some have argued that this time horizon has little scientific basis,[22] and that hospitals are unfairly held accountable for a timeframe during which outcomes may be influenced largely by the quality of outpatient care or the development of new problems.[23, 24] Approximately one‐third of 30‐day readmissions occur within the first 7 days, and more than half (55.7%) occur within the first 14 days[22, 25]; such time frames may be more appropriate for hospital accountability.[26]
Second, spurred by the focus of CMS penalties, efforts to reduce readmissions have largely concerned patients admitted for HF, AMI, or PN, although these 3 medical conditions account for only 10% of Medicare hospitalizations.[18] Programs focused on a narrow patient population may not benefit other patients with high readmission rates, such as those with gastrointestinal or psychiatric problems,[2] or lead to improvements in the underlying processes of care that could benefit patients in additional ways. Indeed, research suggests that low readmission rates may not be related to other measures of hospital quality.[27, 28]
Third, public reporting and hospital penalties are based on 3‐year historical performance, in part to accumulate a large enough sample size for each diagnosis. Hospitals that seek real‐time performance monitoring are limited to tracking surrogate outcomes, such as readmissions back to their own facility.[29, 30] Moreover, because of the long performance time frame, hospitals that achieve rapid improvement may endure penalties precisely when they are attempting to sustain their achievements.
Fourth, the CMS approach utilizes a complex risk‐standardization methodology, which has only modest ability to predict readmissions and allow hospital comparisons.[9] There is no adjustment for community characteristics, even though practice patterns are significantly associated with readmission rates,[9, 31] and more than half of the variation in readmission rates across hospitals can be explained by characteristics of the community such as access to care.[32] Moreover, patient factors, such as race and socioeconomic status, are currently not included in an attempt to hold hospitals to similar standards regardless of their patient population. This is hotly contested, however, and critics note this policy penalizes hospitals for factors outside of their control, such as patients' ability to afford medications.[33] Indeed, the June 2013 Medicare Payment Advisory Commission (MedPAC) report to Congress recommended evaluating hospital performance against facilities with a like percentage of low‐income patients as a way to take into account socioeconomic status.[34]
Fifth, observation stays are excluded, so patients who remain in observation status during their index or subsequent hospitalization cannot be counted as a readmission. Prevalence of observation care has increased, raising concerns that inpatient admissions are being shifted to observation status, producing an artificial decline in readmissions.[35] Fortunately, recent population‐level data provide some reassuring evidence to the contrary.[36]
Finally, and perhaps most significantly, the current readmission metric does not consider preventability. Recent reviews have demonstrated that estimates of preventability vary widely in individual studies, ranging from 5% to 79%, depending on study methodology and setting.[7, 8] Across these studies, on average, only 23% of 30‐day readmissions appear to be avoidable.[8] Another way to consider the preventability of hospital readmissions is by noting that the most effective multimodal care‐transition interventions reduce readmission rates by only about 30%, and most interventions are much less effective.[26] The likely fact that only 23% to 30% of readmissions are preventable has profound implications for the anticipated results of hospital readmission reduction efforts. Interventions that are 75% effective in reducing preventable readmissions should be expected to produce only an 18% to 22% reduction in overall readmission rates.[37]
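The arithmetic behind this expectation can be made explicit. The sketch below simply multiplies the preventable share by the intervention's effectiveness against preventable readmissions; the inputs mirror the figures discussed above, and the cited 18% to 22% range presumably reflects slightly different inputs or rounding.

```python
def expected_overall_reduction(preventable_share, intervention_effectiveness):
    """Expected relative reduction in the overall readmission rate when an
    intervention affects only the preventable fraction of readmissions."""
    return preventable_share * intervention_effectiveness

# If 23% to 30% of readmissions are preventable and an intervention
# eliminates 75% of those, the overall rate falls by roughly 17% to 23%.
low = expected_overall_reduction(0.23, 0.75)   # about 0.17
high = expected_overall_reduction(0.30, 0.75)  # about 0.23
```

The practical point is that even a highly effective intervention appears modest when its effect is diluted across the large majority of readmissions that were never preventable.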
FOCUSING ON PREVENTABLE READMISSIONS
A greater focus on identifying and targeting preventable readmissions would offer a number of advantages over the present approach. First, it is more meaningful to compare hospitals based on their percentage of discharges resulting in a preventable readmission than on the basis of highly complex risk standardization procedures for selected conditions. Second, a focus on preventable readmissions more clearly identifies and permits hospitals to target opportunities for improvement. Third, if the focus were on preventable readmissions for a large number of conditions, the necessary sample size could be obtained over a shorter period of time. Overall, such a preventable readmissions metric could serve as a more agile and undiluted performance indicator, as opposed to the present 3‐year rolling average rate of all‐cause readmissions for certain conditions, the majority of which are probably not preventable.
DEFINING PREVENTABILITY
Defining a preventable readmission is critically important. However, neither a consensus definition nor a validated standard for assessing preventable hospital readmissions exists. Different conceptual frameworks and terms (eg, avoidable, potentially preventable, or urgent readmission) complicate the issue.[38, 39, 40]
Although the CMS measure does not address preventability, it is helpful to consider whether other readmission metrics incorporate this concept. The United Health Group's (UHG, formerly Pacificare) All‐Cause Readmission Index, University HealthSystem Consortium's 30‐Day Readmission Rate (all cause), and 3M Health Information Systems' (3M) Potentially Preventable Readmissions (PPR) are 3 commonly used measures.
Of these, only the 3M PPR metric includes the concept of preventability. 3M created a proprietary matrix of 98,000 readmission‐index admission All Patient Refined Diagnosis Related Group pairs based on the review of several physicians and the logical assumption that a readmission for a clinically related diagnosis is potentially preventable.[24, 41] Readmission and index admissions are considered clinically related if any of the following occur: (1) medical readmission for continuation or recurrence of an initial, or closely related, condition; (2) medical readmission for acute decompensation of a chronic condition that was not the reason for the index admission but was plausibly related to care during or immediately afterward (eg, readmission for diabetes in a patient whose index admission was AMI); (3) medical readmission for acute complication plausibly related to care during index admission; (4) readmission for surgical procedure for continuation or recurrence of initial problem (eg, readmission for appendectomy following admission for abdominal pain and fever); or (5) readmission for surgical procedure to address complication resulting from care during index admission.[24, 41] The readmission time frame is not standardized and may be set by the user. Though conceptually appealing in some ways, CMS and the NQF have expressed concern about this specific approach because of the uncertain reliability of the relatedness of the admission‐readmission diagnosis dyads.[3]
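The relatedness logic described above can be sketched as a simple lookup against a table of clinically related diagnosis pairs. Everything in this sketch is hypothetical: the pair table, the diagnosis labels, and the function are invented for illustration and bear no relation to 3M's proprietary matrix of roughly 98,000 pairs.

```python
# Invented examples echoing two of the relatedness categories in the text:
# chronic-condition decompensation after an index admission, and a surgical
# procedure continuing an initial problem.
RELATED_PAIRS = {
    ("AMI", "DIABETES_DECOMP"),
    ("ABDOMINAL_PAIN", "APPENDECTOMY"),
}

def potentially_preventable(index_dx, readmit_dx, window_days, elapsed_days):
    """Flag a readmission as potentially preventable only if the diagnosis
    pair is clinically related and the readmission falls inside the
    user-chosen time window (the window is not standardized)."""
    return elapsed_days <= window_days and (index_dx, readmit_dx) in RELATED_PAIRS
```

The reliability concern raised by CMS and the NQF maps directly onto the contents of the pair table: the algorithm is only as sound as the judgment that each admission-readmission dyad is truly related.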
In the research literature, only a few studies have examined the 3M PPR or other preventability assessments that rely on the relatedness of diagnostic codes.[8] Using the 3M PPR, a study showed that 78% of readmissions were classified as potentially preventable,[42] which explains why the 3M PPR and all‐cause readmission metric may correlate highly.[43] Others have demonstrated that ratings of hospital performance on readmission rates vary by a moderate to large amount, depending on whether the 3M PPR, CMS, or UHG methodology is used.[43, 44] An algorithm called SQLape[45, 46] is used in Switzerland to benchmark hospitals and defines potentially avoidable readmissions as being related to index diagnoses or complications of those conditions. It has recently been tested in the United States in a single‐center study,[47] and a multihospital study is underway.
Aside from these algorithms using related diagnosis codes, most ratings of preventability have relied on subjective assessments made primarily through a review of hospital records, and approximately one‐third also included data from clinic visits or interviews with the treating medical team or patients/families.[8] Unfortunately, these reports provide insufficient detail on how to apply their preventability criteria to subsequent readmission reviews. Studies did, however, provide categories of preventability into which readmissions could be organized (see Supporting Information, Appendix Table 1, in the online version of this article for details from a subset of studies cited in van Walraven's reviews that illustrate this point).
Assessment of preventability by clinician review can be challenging. In general, such assessments have considered readmissions resulting from factors within the hospital's control to be avoidable (eg, providing appropriate discharge instructions, reconciling medications, arranging timely postdischarge follow‐up appointments), whereas readmissions resulting from factors not within the hospital's control are unavoidable (eg, patient socioeconomic status, social support, disease progression). However, readmissions resulting from patient behaviors or social reasons could potentially be classified as avoidable or unavoidable depending on the circumstances. For example, if a patient decides not to take a prescribed antibiotic and is readmitted with worsening infection, this could be classified as an unavoidable readmission from the hospital's perspective. Alternatively, if the physician prescribing the antibiotic was inattentive to the cost of the medication and the patient would have taken a less expensive medication had it been prescribed, this could be classified as an avoidable readmission. Differing interpretations of contextual factors may partially account for the variability in clinical assessments of preventability.
Indeed, despite the lack of consensus around a standard method of defining preventability, hospitals and health systems are moving forward to address the issue and reduce readmissions. A recent survey by America's Essential Hospitals (previously the National Association of Public Hospitals and Health Systems) indicated that: (1) reducing readmissions was a high priority for the majority (86%) of members, (2) most had established interdisciplinary teams to address the issue, and (3) over half had a formal process for determining which readmissions were potentially preventable. Of the survey respondents, just over one‐third rely on staff review of individual patient charts or patient and family interviews, and slightly less than one‐third rely on other mechanisms such as external consultants, criteria developed by other entities, or the Institute for Clinical Systems Improvement methodology.[48] Approximately one‐fifth make use of 3M's PPR product, and slightly fewer use the list of the Agency for Healthcare Research and Quality's ambulatory care sensitive conditions (ACSCs). These are medical conditions for which it is believed that good outpatient care could prevent the need for hospitalization (eg, asthma, congestive heart failure, diabetes) or for which early intervention minimizes complications.[49] Hospitalization rates for ACSCs may represent a good measure of excess hospitalization, with a focus on the quality of outpatient care.
RECOMMENDATIONS
We recommend that reporting of hospital readmission rates be based on preventable or potentially preventable readmissions. Although we acknowledge the challenges in doing so, the advantages are notable. At minimum, a preventable readmission rate would more accurately reflect the true gap in care and therefore hospitals' real opportunity for improvement, without being obscured by readmissions that are not preventable.
Because readmission rates are used for public reporting and financial penalties for hospitals, we favor a measure of preventability that reflects the readmissions that the hospital or hospital system has the ability to prevent. This would not penalize hospitals for factors that are under the control of others, namely patients and caregivers, community supports, or society at large. We further recommend that this measure apply to a broader composite of unplanned care, inclusive of both inpatient and observation stays, which have little distinction in patients' eyes, and both represent potentially unnecessary utilization of acute‐care resources.[50] Such a measure would require development, validation, and appropriate vetting before it is implemented.
The first step is for researchers and policy makers to agree on how a measure of preventable or potentially preventable readmissions could be defined. A common element of preventability assessment is to identify the degree to which the reasons for readmission are related to the diagnoses of the index hospitalization. To be reliable and scalable, this measure will need to be based on algorithms that relate the index and readmission diagnoses, most likely using claims data. Choosing common medical and surgical conditions and developing a consensus‐based list of related readmission diagnoses is an important first step. It would also be important to include some less common conditions, because they may reflect very different aspects of hospital care.
An approach based on a list of related diagnoses would represent potentially preventable rehospitalizations. Generally, clinical review is required to determine actual preventability, taking into account patient factors such as a high level of illness or functional impairment that leads to clinical decompensation in spite of excellent management.[51, 52] Clinical review, like a root cause analysis, also provides greater insight into hospital processes that may warrant improvement. Therefore, even if an administrative measure of potentially preventable readmissions is implemented, hospitals may wish to continue performing detailed clinical review of some readmissions for quality improvement purposes. When clinical review becomes more standardized,[53] a combined approach that uses administrative data plus clinical verification and arbitration may be feasible, as with hospital‐acquired infections.
Similar work to develop related sets of admission and readmission diagnoses has already been undertaken in development of the 3M PPR and SQLape measures.[41, 46] However, the 3M PPR is a proprietary system that has low specificity and a high false‐positive rate for identifying preventable readmissions when compared to clinical review.[42] Moreover, neither measure has yet achieved the consensus required for widespread adoption in the United States. What is needed is a nonproprietary listing of related admission and readmission diagnoses, developed with the engagement of relevant stakeholders, that goes through a period of public comment and vetting by a body such as the NQF.
Until a validated measure of potentially preventable readmission can be developed, how could the current approach evolve toward preventability? The most feasible, rapidly implementable change would be to alter the readmission time horizon from 30 days to 7 or 15 days. A 30‐day period holds hospitals accountable for complications of outpatient care or new problems that may develop weeks after discharge. Even though this may foster shared accountability and collaboration among hospitals and outpatient or community settings, research has demonstrated that early readmissions (eg, within 7 to 15 days of discharge) are more likely preventable.[54] Second, consideration of the socioeconomic status of hospital patients, as recommended by MedPAC,[34] would improve on the current model by comparing hospitals to like facilities when determining penalties for excess readmission rates. Finally, adjustment for community factors, such as practice patterns and access to care, would enable readmission metrics to better reflect factors under the hospital's control.[32]
CONCLUSION
Holding hospitals accountable for the quality of acute and transitional care is an important policy initiative that has accelerated many improvements in discharge planning and care coordination. Optimally, the policies, public reporting, and penalties should target preventable readmissions, which may represent as little as one‐quarter of all readmissions. By summarizing some of the issues in defining preventability, we hope to foster continued refinement of quality metrics used in this arena.
Acknowledgements
We thank Eduard Vasilevskis, MD, MPH, for feedback on an earlier draft of this article. This manuscript was informed by a special report titled Preventable Readmissions, written by Julia Lavenberg, Joel Betesh, David Goldmann, Craig Kean, and Kendal Williams of the Penn Medicine Center for Evidence‐based Practice. The review was performed at the request of the Penn Medicine Chief Medical Officer Patrick J. Brennan to inform the development of local readmission prevention metrics, and is available at
Disclosures
Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1TR000003. Dr. Kripalani receives support from the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number R01HL109388, and from the Centers for Medicare and Medicaid Services under awards 1C1CMS331006‐01 and 1C1CMS330979‐01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or Centers for Medicare and Medicaid Services.
Hospital readmissions cost Medicare $15 to $17 billion per year.[1, 2] In 2010, the Hospital Readmission Reduction Program (HRRP), created by the Patient Protection and Affordable Care Act, authorized the Centers for Medicare and Medicaid Services (CMS) to penalize hospitals with higher‐than‐expected readmission rates for certain index conditions.[3] Other payers may follow suit, so hospitals and health systems nationwide are devoting significant resources to reducing readmissions.[4, 5, 6]
Implicit in these efforts are the assumptions that a significant proportion of readmissions are preventable, and that preventable readmissions can be identified. Unfortunately, estimates of preventability vary widely.[7, 8] In this article, we examine how preventable readmissions have been defined, measured, and calculated, and explore the associated implications for readmission reduction efforts.
THE MEDICARE READMISSION METRIC
The medical literature reveals substantial heterogeneity in how readmissions are assessed. Time periods range from 14 days to 4 years, and readmissions may be counted differently depending on whether they are to the same hospital or to any hospital, whether they are for the same (or a related) condition or for any condition, whether a patient is allowed to count only once during the follow‐up period, how mortality is treated, and whether observation stays are considered.[9]
Despite a lack of consensus in the literature, the approach adopted by CMS is endorsed by the National Quality Forum (NQF)[10] and has become the de facto standard for calculating readmission rates. CMS derives risk‐standardized readmission rates for acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN), using administrative claims data for each Medicare fee‐for‐service beneficiary 65 years or older.[11, 12, 13, 14] CMS counts the first readmission (but not subsequent ones) for any cause within 30 days of the index discharge, including readmissions to other facilities. Certain planned readmissions for revascularization are excluded, as are patients who left against medical advice, transferred to another acute‐care hospital, or died during the index admission. Admissions to psychiatric, rehabilitation, cancer specialty, and children's hospitals[12] are also excluded, as well as patients classified as observation status for either hospital stay.[15] Only administrative data are used in readmission calculations (ie, there are no chart reviews or interviews with healthcare personnel or patients). Details are published online and updated at least annually.[15]
EFFECTS AND LIMITATIONS OF THE HRRP AND THE CMS READMISSION METRIC
Penalizing hospitals for higher‐than‐expected readmission rates based on the CMS metric has been successful in the sense that hospitals now feel more accountable for patient outcomes after discharge; they are implementing transitional care programs, improving communication, and building relationships with community programs.[4, 5, 16] Early data suggest a small decline in readmission rates of Medicare beneficiaries nationally.[17] Previously, such readmission rates were constant.[18]
Nevertheless, significant concerns with the current approach have surfaced.[19, 20, 21] First, why choose 30 days? This time horizon was believed to be long enough to identify readmissions attributable to an index admission and short enough to reflect hospital‐delivered care and transitions to the outpatient setting, and it allows for collaboration between hospitals and their communities to reduce readmissions.[3] However, some have argued that this time horizon has little scientific basis,[22] and that hospitals are unfairly held accountable for a timeframe when outcomes may largely be influenced by the quality of outpatient care or the development of new problems.[23, 24] Approximately one‐third of 30‐day readmissions occur within the first 7 days, and more than half (55.7%) occur within the first 14 days[22, 25]; such time frames may be more appropriate for hospital accountability.[26]
Second, spurred by the focus of CMS penalties, efforts to reduce readmissions have largely concerned patients admitted for HF, AMI, or PN, although these 3 medical conditions account for only 10% of Medicare hospitalizations.[18] Programs focused on a narrow patient population may not benefit other patients with high readmission rates, such as those with gastrointestinal or psychiatric problems,[2] or lead to improvements in the underlying processes of care that could benefit patients in additional ways. Indeed, research suggests that low readmission rates may not be related to other measures of hospital quality.[27, 28]
Third, public reporting and hospital penalties are based on 3‐year historical performance, in part to accumulate a large enough sample size for each diagnosis. Hospitals that seek real‐time performance monitoring are limited to tracking surrogate outcomes, such as readmissions back to their own facility.[29, 30] Moreover, because of the long performance time frame, hospitals that achieve rapid improvement may endure penalties precisely when they are attempting to sustain their achievements.
Fourth, the CMS approach utilizes a complex risk‐standardization methodology, which has only modest ability to predict readmissions and allow hospital comparisons.[9] There is no adjustment for community characteristics, even though practice patterns are significantly associated with readmission rates,[9, 31] and more than half of the variation in readmission rates across hospitals can be explained by characteristics of the community such as access to care.[32] Moreover, patient factors, such as race and socioeconomic status, are currently not included in an attempt to hold hospitals to similar standards regardless of their patient population. This is hotly contested, however, and critics note this policy penalizes hospitals for factors outside of their control, such as patients' ability to afford medications.[33] Indeed, the June 2013 Medicare Payment Advisory Committee (MedPAC) report to Congress recommended evaluating hospital performance against facilities with a like percentage of low‐income patients as a way to take into account socioeconomic status.[34]
Fifth, observation stays are excluded, so patients who remain in observation status during their index or subsequent hospitalization cannot be counted as a readmission. Prevalence of observation care has increased, raising concerns that inpatient admissions are being shifted to observation status, producing an artificial decline in readmissions.[35] Fortunately, recent population‐level data provide some reassuring evidence to the contrary.[36]
Finally, and perhaps most significantly, the current readmission metric does not consider preventability. Recent reviews have demonstrated that estimates of preventability vary widely in individual studies, ranging from 5% to 79%, depending on study methodology and setting.[7, 8] Across these studies, on average, only 23% of 30‐day readmissions appear to be avoidable.[8] Another way to consider the preventability of hospital readmissions is by noting that the most effective multimodal care‐transition interventions reduce readmission rates by only about 30%, and most interventions are much less effective.[26] The likely fact that only 23% to 30% of readmissions are preventable has profound implications for the anticipated results of hospital readmission reduction efforts. Interventions that are 75% effective in reducing preventable readmissions should be expected to produce only an 18% to 22% reduction in overall readmission rates.[37]
FOCUSING ON PREVENTABLE READMISSIONS
A greater focus on identifying and targeting preventable readmissions would offer a number of advantages over the present approach. First, it is more meaningful to compare hospitals based on their percentage of discharges resulting in a preventable readmission, than on the basis of highly complex risk standardization procedures for selected conditions. Second, a focus on preventable readmissions more clearly identifies and permits hospitals to target opportunities for improvement. Third, if the focus were on preventable readmissions for a large number of conditions, the necessary sample size could be obtained over a shorter period of time. Overall, such a preventable readmissions metric could serve as a more agile and undiluted performance indicator, as opposed to the present 3‐year rolling average rate of all‐cause readmissions for certain conditions, the majority of which are probably not preventable.
DEFINING PREVENTABILITY
Defining a preventable readmission is critically important. However, neither a consensus definition nor a validated standard for assessing preventable hospital readmissions exists. Different conceptual frameworks and terms (eg, avoidable, potentially preventable, or urgent readmission) complicate the issue.[38, 39, 40]
Although the CMS measure does not address preventability, it is helpful to consider whether other readmission metrics incorporate this concept. The United Health Group's (UHG, formerly Pacificare) All‐Cause Readmission Index, University HealthSystem Consortium's 30‐Day Readmission Rate (all cause), and 3M Health Information Systems' (3M) Potentially Preventable Readmissions (PPR) are 3 commonly used measures.
Of these, only the 3M PPR metric includes the concept of preventability. 3M created a proprietary matrix of 98,000 readmission‐index admission All Patient Refined Diagnosis Related Group pairs based on the review of several physicians and the logical assumption that a readmission for a clinically related diagnosis is potentially preventable.[24, 41] Readmission and index admissions are considered clinically related if any of the following occur: (1) medical readmission for continuation or recurrence of an initial, or closely related, condition; (2) medical readmission for acute decompensation of a chronic condition that was not the reason for the index admission but was plausibly related to care during or immediately afterward (eg, readmission for diabetes in a patient whose index admission was AMI); (3) medical readmission for acute complication plausibly related to care during index admission; (4) readmission for surgical procedure for continuation or recurrence of initial problem (eg, readmission for appendectomy following admission for abdominal pain and fever); or (5) readmission for surgical procedure to address complication resulting from care during index admission.[24, 41] The readmission time frame is not standardized and may be set by the user. Though conceptually appealing in some ways, CMS and the NQF have expressed concern about this specific approach because of the uncertain reliability of the relatedness of the admission‐readmission diagnosis dyads.[3]
In the research literature, only a few studies have examined the 3M PPR or other preventability assessments that rely on the relatedness of diagnostic codes.[8] Using the 3M PPR, a study showed that 78% of readmissions were classified as potentially preventable,[42] which explains why the 3M PPR and all‐cause readmission metric may correlate highly.[43] Others have demonstrated that ratings of hospital performance on readmission rates vary by a moderate to large amount, depending on whether the 3M PPR, CMS, or UHG methodology is used.[43, 44] An algorithm called SQLape[45, 46] is used in Switzerland to benchmark hospitals and defines potentially avoidable readmissions as being related to index diagnoses or complications of those conditions. It has recently been tested in the United States in a single‐center study,[47] and a multihospital study is underway.
Aside from these algorithms using related diagnosis codes, most ratings of preventability have relied on subjective assessments made primarily through a review of hospital records, and approximately one‐third also included data from clinic visits or interviews with the treating medical team or patients/families.[8] Unfortunately, these reports provide insufficient detail on how to apply their preventability criteria to subsequent readmission reviews. Studies did, however, provide categories of preventability into which readmissions could be organized (see Supporting Information, Appendix Table 1, in the online version of this article for details from a subset of studies cited in van Walraven's reviews that illustrate this point).
Assessment of preventability by clinician review can be challenging. In general, such assessments have considered readmissions resulting from factors within the hospital's control to be avoidable (eg, providing appropriate discharge instructions, reconciling medications, arranging timely postdischarge follow‐up appointments), whereas readmissions resulting from factors not within the hospital's control are unavoidable (eg, patient socioeconomic status, social support, disease progression). However, readmissions resulting from patient behaviors or social reasons could potentially be classified as avoidable or unavoidable depending on the circumstances. For example, if a patient decides not to take a prescribed antibiotic and is readmitted with worsening infection, this could be classified as an unavoidable readmission from the hospital's perspective. Alternatively, if the physician prescribing the antibiotic was inattentive to the cost of the medication and the patient would have taken a less expensive medication had it been prescribed, this could be classified as an avoidable readmission. Differing interpretations of contextual factors may partially account for the variability in clinical assessments of preventability.
Indeed, despite the lack of consensus around a standard method of defining preventability, hospitals and health systems are moving forward to address the issue and reduce readmissions. A recent survey by America's Essential Hospitals (previously the National Association of Public Hospitals and Health Systems) indicated that: (1) reducing readmissions was a high priority for the majority (86%) of members, (2) most had established interdisciplinary teams to address the issue, and (3) over half had a formal process for determining which readmissions were potentially preventable. Just over one‐third of respondents relied on staff review of individual patient charts or patient and family interviews, and slightly less than one‐third relied on other mechanisms such as external consultants, criteria developed by other entities, or the Institute for Clinical Systems Improvement methodology.[48] Approximately one‐fifth made use of 3M's PPR product, and slightly fewer used the Agency for Healthcare Research and Quality's list of ambulatory care sensitive conditions (ACSCs). These are medical conditions for which it is believed that good outpatient care could prevent the need for hospitalization (eg, asthma, congestive heart failure, diabetes) or for which early intervention minimizes complications.[49] Hospitalization rates for ACSCs may represent a good measure of excess hospitalization, with a focus on the quality of outpatient care.
RECOMMENDATIONS
We recommend that reporting of hospital readmission rates be based on preventable or potentially preventable readmissions. Although we acknowledge the challenges in doing so, the advantages are notable. At minimum, a preventable readmission rate would more accurately reflect the true gap in care and therefore hospitals' real opportunity for improvement, without being obscured by readmissions that are not preventable.
Because readmission rates are used for public reporting and financial penalties for hospitals, we favor a measure of preventability that reflects the readmissions that the hospital or hospital system has the ability to prevent. This would not penalize hospitals for factors that are under the control of others, namely patients and caregivers, community supports, or society at large. We further recommend that this measure apply to a broader composite of unplanned care, inclusive of both inpatient and observation stays, which have little distinction in patients' eyes, and both represent potentially unnecessary utilization of acute‐care resources.[50] Such a measure would require development, validation, and appropriate vetting before it is implemented.
The first step is for researchers and policy makers to agree on how a measure of preventable or potentially preventable readmissions should be defined. A common element of preventability assessment is identifying the degree to which the reasons for readmission are related to the diagnoses of the index hospitalization. To be reliable and scalable, such a measure will need to be based on algorithms that relate the index and readmission diagnoses, most likely using claims data. Choosing common medical and surgical conditions and developing a consensus‐based list of related readmission diagnoses would be a practical starting point. It would also be important to include some less common conditions, because they may reflect very different aspects of hospital care.
An approach based on a list of related diagnoses would represent potentially preventable rehospitalizations. Generally, clinical review is required to determine actual preventability, taking into account patient factors such as a high level of illness or functional impairment that leads to clinical decompensation in spite of excellent management.[51, 52] Clinical review, like a root cause analysis, also provides greater insight into hospital processes that may warrant improvement. Therefore, even if an administrative measure of potentially preventable readmissions is implemented, hospitals may wish to continue performing detailed clinical review of some readmissions for quality improvement purposes. When clinical review becomes more standardized,[53] a combined approach that uses administrative data plus clinical verification and arbitration may be feasible, as with hospital‐acquired infections.
Similar work to develop related sets of admission and readmission diagnoses has already been undertaken in development of the 3M PPR and SQLape measures.[41, 46] However, the 3M PPR is a proprietary system that has low specificity and a high false‐positive rate for identifying preventable readmissions when compared to clinical review.[42] Moreover, neither measure has yet achieved the consensus required for widespread adoption in the United States. What is needed is a nonproprietary listing of related admission and readmission diagnoses, developed with the engagement of relevant stakeholders, that goes through a period of public comment and vetting by a body such as the NQF.
Until a validated measure of potentially preventable readmission can be developed, how could the current approach evolve toward preventability? The most feasible, rapidly implementable change would be to alter the readmission time horizon from 30 days to 7 or 15 days. A 30‐day period holds hospitals accountable for complications of outpatient care or new problems that may develop weeks after discharge. Even though this may foster shared accountability and collaboration among hospitals and outpatient or community settings, research has demonstrated that early readmissions (eg, within 7 to 15 days of discharge) are more likely preventable.[54] Second, consideration of the socioeconomic status of hospital patients, as recommended by MedPAC,[34] would improve on the current model by comparing hospitals to like facilities when determining penalties for excess readmission rates. Finally, adjustment for community factors, such as practice patterns and access to care, would enable readmission metrics to better reflect factors under the hospital's control.[32]
CONCLUSION
Holding hospitals accountable for the quality of acute and transitional care is an important policy initiative that has accelerated many improvements in discharge planning and care coordination. Optimally, the policies, public reporting, and penalties should target preventable readmissions, which may represent as little as one‐quarter of all readmissions. By summarizing some of the issues in defining preventability, we hope to foster continued refinement of quality metrics used in this arena.
Acknowledgements
We thank Eduard Vasilevskis, MD, MPH, for feedback on an earlier draft of this article. This manuscript was informed by a special report titled Preventable Readmissions, written by Julia Lavenberg, Joel Betesh, David Goldmann, Craig Kean, and Kendal Williams of the Penn Medicine Center for Evidence‐based Practice. The review was performed at the request of the Penn Medicine Chief Medical Officer Patrick J. Brennan to inform the development of local readmission prevention metrics, and is available at
Disclosures
Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1TR000003. Dr. Kripalani receives support from the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number R01HL109388, and from the Centers for Medicare and Medicaid Services under awards 1C1CMS331006‐01 and 1C1CMS330979‐01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or Centers for Medicare and Medicaid Services.
- Physician Visits After Hospital Discharge: Implications for Reducing Readmissions. Washington, DC: National Institute for Health Care Reform; 2011. Report no. 6.
- Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
- Centers for Medicare and Medicaid Services, US Department of Health and Human Services. Medicare program: hospital inpatient prospective payment systems for acute care hospitals and the long‐term care hospital prospective payment system and FY 2012 rates. Fed Regist. 2011;76(160):51476–51846.
- Quality collaboratives and campaigns to reduce readmissions: what strategies are hospitals using? J Hosp Med. 2013;8:601–608.
- Contemporary data about hospital strategies to reduce unplanned readmissions: what has changed [research letter]? JAMA Intern Med. 2014;174(1):154–156.
- Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
- Comparing methods to calculate hospital‐specific rates of early death or urgent readmission. CMAJ. 2012;184(15):E810–E817.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
- National Quality Forum. Patient outcomes: all‐cause readmissions expedited review 2011. Available at: http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id60(7):607–614.
- Data shows reduction in Medicare hospital readmission rates during 2012. Medicare Medicaid Res Rev. 2013;3(2):E1–E11.
- Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
- Moving beyond readmission penalties: creating an ideal process to improve transitional care. J Hosp Med. 2013;8(2):102–109.
- A path forward on Medicare readmissions. N Engl J Med. 2013;368(13):1175–1177.
- American Hospital Association. TrendWatch: examining the drivers of readmissions and reducing unnecessary readmissions for better patient care. Washington, DC: American Hospital Association; 2011.
- Diagnoses and timing of 30‐day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355–363.
- Characteristics of hospitals receiving penalties under the hospital readmissions reduction program. JAMA. 2013;309(4):342–343.
- Identifying potentially preventable readmissions. Health Care Financ Rev. 2008;30(1):75–91.
- Use of hospital‐based acute care among patients recently discharged from the hospital. JAMA. 2013;309(4):364–371.
- Reducing hospital readmission rates: current strategies and future directions. Annu Rev Med. 2014;65:471–485.
- Relationship between hospital readmission and mortality rates for patients hospitalized with acute myocardial infarction, heart failure, or pneumonia. JAMA. 2013;309(6):587–593.
- Hospital performance measures and 30‐day readmission rates. J Gen Intern Med. 2013;28(3):377–385.
- Limitations of using same‐hospital readmission metrics. Int J Qual Health Care. 2013;25(6):633–639.
- Is same‐hospital readmission rate a good surrogate for all‐hospital readmission rate? Med Care. 2010;48(5):477–481.
- The relationship between hospital admission rates and rehospitalizations. N Engl J Med. 2011;365(24):2287–2295.
- Community factors and hospital readmission rates [published online April 9, 2014]. Health Serv Res. doi: 10.1111/1475-6773.12177.
- American Hospital Association. Hospital readmissions reduction program: factsheet. American Hospital Association. Available at: http://www.aha.org/content/13/fs‐readmissions.pdf. Published April 14, 2014. Accessed May 5, 2014.
- Medicare Payment Advisory Commission. Report to the Congress: Medicare and the health care delivery system. Available at: http://www.medpac.gov/documents/Jun13_EntireReport.pdf. Published June 14, 2013. Accessed May 5, 2014.
- Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Aff (Millwood). 2012;31(6):1251–1259.
- Quality improvement of care transitions and the trend of composite hospital care. JAMA. 2014;311(10):1013–1014.
- When projecting required effectiveness of interventions for hospital readmission reduction, the percentage that is potentially avoidable must be considered. J Clin Epidemiol. 2013;66(6):688–690.
- Urgent readmission rates can be used to infer differences in avoidable readmission rates between hospitals. J Clin Epidemiol. 2012;65(10):1124–1130.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Measuring and preventing potentially avoidable hospital readmissions: a review of the literature. Hong Kong Med J. 2010;16(5):383–389.
- 3M Health Information Systems. Potentially preventable readmissions classification system methodology: overview. 3M Health Information Systems; May 2008. Report No.: GRP‐139. Available at: http://multimedia.3m.com/mws/mediawebserver?66666UuZjcFSLXTtNXMtmxMEEVuQEcuZgVs6EVs6E666666‐‐. Accessed June 8, 2014.
- Manual and automated methods for identifying potentially preventable readmissions: a comparison in a large healthcare system. BMC Med Inform Decis Mak. 2014;14:28.
- Comparing 2 methods of assessing 30‐day readmissions: what is the impact on hospital profiling in the Veterans Health Administration? Med Care. 2013;51(7):589–596.
- It's not six of one, half‐dozen the other: a comparative analysis of 3 rehospitalization measurement systems for Massachusetts. Academy Health Annual Research Meeting. Seattle, WA; 2011. Available at: http://www.academyhealth.org/files/2011/tuesday/boutwell.pdf. Accessed May 9, 2014.
- Validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. Med Care. 2006;44(11):972–981.
- Measuring potentially avoidable hospital readmissions. J Clin Epidemiol. 2002;55:573–587.
- Potentially avoidable 30‐day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632–638.
- National Association of Public Hospitals and Health Systems. NAPH members focus on reducing readmissions. Available at: www.naph.org. Published June 2011. Accessed October 19, 2011.
- Agency for Healthcare Research and Quality. AHRQ quality indicators: prevention quality indicators. Available at: http://www.qualityindicators.ahrq.gov/Modules/pqi_resources.aspx. Accessed February 11, 2014.
- Shifting the dialogue from hospital readmissions to unplanned care. Am J Manag Care. 2013;19(6):450–453.
- Post‐hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100–102.
- The hospital‐dependent patient. N Engl J Med. 2014;370(8):694–697.
- The hospital medicine reengineering network (HOMERuN): a learning organization focused on improving hospital care. Acad Med. 2014;89(3):415–420.
- Incidence of potentially avoidable urgent readmissions and their relation to all‐cause urgent readmissions. CMAJ. 2011;183(14):E1067–E1072.
Patients at Risk for Readmission
Unplanned hospital readmissions are common, costly, and potentially avoidable. Approximately 20% of Medicare patients are readmitted within 30 days of discharge.[1] Readmission rates are estimated to be similarly high in other population subgroups,[2, 3, 4] with approximately 80% of patients[1, 5, 6] readmitted to the original discharging hospital. A recent systematic review suggested that 27% of readmissions may be preventable.[7]
Hospital readmissions have increasingly been viewed as a correctable marker of poor quality care and have been adopted by a number of organizations as quality indicators.[8, 9, 10] As a result, hospitals have important internal and external motivations to address readmissions. Identification of patients at high risk for readmissions may be an important first step toward preventing them. In particular, readmission risk assessment could be used to help providers target the delivery of resource‐intensive transitional care interventions[11, 12, 13, 14] to patients with the greatest needs. Such an approach is appealing because it allows hospitals to focus scarce resources where the impact may be greatest and provides a starting point for organizations struggling to develop robust models of transitional care delivery.
Electronic health records (EHRs) may prove to be an important component of strategies designed to risk stratify patients at the point of care. Algorithms integrated into the EHR that automatically generate risk predictions have the potential to (1) improve provider time efficiency by automating the prediction process, (2) improve consistency of data collection and risk score calculation, (3) increase adoption through improved usability, and (4) provide clinically important information in real‐time to all healthcare team members caring for a hospitalized patient.
We thus sought to derive a predictive model for 30‐day readmissions using data reliably present in our EHR at the time of admission, and to integrate this model into our hospital's EHR as an automated prediction tool that identifies, on admission, patients at high risk for readmission within 30 days of discharge. In addition, we prospectively validated the model over the 12‐month period after implementation and examined its impact on readmissions.
METHODS
Setting
The University of Pennsylvania Health System (UPHS) includes 3 hospitals, with a combined capacity of over 1500 beds and 70,000 annual admissions. All hospitals currently utilize Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL) as their EHR. The study sample included all adult admissions to any of the 3 UPHS hospitals during the study period. Admissions to short procedure, rehabilitation, and hospice units were excluded. The study received expedited approval and a HIPAA waiver from the University of Pennsylvania institutional review board.
Development of Predictive Model
The UPHS Center for Evidence‐based Practice[15, 16] performed a systematic review to identify factors associated with hospital readmission within 30 days of discharge. We then examined the data available from our hospital EHR at the time of admission for the factors identified in the review. Using different threshold values and look‐back periods, we developed and tested 30 candidate prediction models based on these variables, alone and in combination (Table 1). Prediction models were evaluated using 24 months of historical data from August 1, 2009 to August 1, 2011.
Implementation
An automated readmission risk flag was then integrated into the EHR. Patients classified by the automated prediction model as being at high risk for readmission were flagged in the EHR on admission (Figure 1A). The flag can be double‐clicked to display a separate screen with information relevant to discharge planning, including inpatient and emergency department (ED) visits in the prior 12 months, as well as the primary team, length of stay, and admitting problem associated with those admissions (Figure 1B). The prediction model was integrated into our EHR using Arden Syntax for Medical Logic Modules.[17] Because the readmission risk detail view presented the provider with a new screen, it was developed in Microsoft .NET using C# and Windows Forms (Microsoft Corp., Redmond, WA).
The flag was visible on the patient lists of all providers who utilized the EHR. This included but was not limited to nurses, social workers, unit pharmacists, and physicians. At the time of implementation, educational events regarding the readmission risk flag were provided in forums targeting administrators, pharmacists, social workers, and housestaff. Information about the flag and recommendations for use were distributed through emails and broadcast screensaver messages disseminated throughout the inpatient units of the health system. Providers were asked to pay special attention to discharge planning for patients triggering the readmission risk flag, including medication reconciliation by pharmacists for these patients prior to discharge, and arrangement of available home services by social workers.
The risk flag was 1 of 4 classes of interventions developed and endorsed by the health system in its efforts to reduce readmissions. Besides risk stratification, the other classes were: interdisciplinary rounding, patient education, and discharge communication. None of the interventions alone were expected to decrease readmissions, but as all 4 classes of interventions were implemented and performed routinely, the expectation was that they would work in concert to reduce readmissions.
Analysis
The primary outcome was all‐cause hospital readmission within the healthcare system within 30 days of discharge. Although this outcome is commonly used both in the literature and as a quality metric, significant debate persists as to its appropriateness.[18] Many of the factors driving 30‐day readmissions may lie outside the discharging hospital's control, and it has been argued that nearer‐term, nonelective readmission rates may provide a more meaningful quality metric.[18] Seven‐day unplanned readmissions were thus used as a secondary outcome measure for this study.
Sensitivity, specificity, positive and negative predictive values, C statistic, F score (the harmonic mean of positive predictive value and sensitivity),[19] and screen‐positive rate were calculated for each of the 30 prediction models evaluated using the historical data. The model with the best balance of F score and screen‐positive rate was selected for integration into the EHR. Prospective validation of the selected model was performed over the 12‐month period following implementation of the risk flag (September 2011–September 2012).
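As an illustrative sketch (not the study's code), the metrics above can be computed directly from confusion‐matrix counts. The counts below are approximate reconstructions of the selected rule's row in Table 1 (40% sensitivity, 31% PPV, 18% flagged, among 120,396 discharges with 17,337 readmissions) and are for illustration only:

```python
def rule_metrics(tp, fp, fn, tn):
    """Evaluation metrics for a screening rule, from confusion-matrix counts:
    tp = flagged and readmitted, fp = flagged but not readmitted,
    fn = missed readmissions, tn = correctly unflagged."""
    total = tp + fp + fn + tn
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    screen_pos = (tp + fp) / total             # fraction of patients flagged
    f_score = 2 * ppv * sens / (ppv + sens)    # harmonic mean of PPV and sensitivity
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv, "npv": npv,
            "screen_positive": screen_pos, "f_score": f_score}

# Counts roughly reconstructed from the selected rule's row in Table 1
# (illustrative only, not the study's raw data):
m = rule_metrics(tp=6935, fp=14736, fn=10402, tn=88323)
```

Balancing F score against screen‐positive rate favors rules that capture a reasonable share of readmissions without flagging so many patients that the downstream interventions become unworkable.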
To assess the impact of the automated prediction model on monthly readmission rate, we used the 24‐month period immediately before and the 12‐month period immediately after implementation of the readmission risk flag. Segmented regression analysis was performed testing for changes in level and slope of readmission rates between preimplementation and postimplementation time periods. This quasiexperimental interrupted time series methodology[20] allows us to control for secular trends in readmission rates and to assess the preimplementation trend (secular trend), the difference in rates immediately before and after the implementation (immediate effect), and the postimplementation change over time (sustained effect). We used Cochrane‐Orcutt estimation[21] to correct for serial autocorrelation.
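The study's analysis was performed in Stata; purely as an illustration of the interrupted time series design (with simulated monthly rates, invented effect sizes, and the Cochrane‐Orcutt autocorrelation correction omitted for brevity), the segmented regression can be sketched as:

```python
import numpy as np

# Simulated monthly readmission rates: 24 pre- and 12 post-implementation
# months (the study's actual windows); rates and effect sizes are invented.
rng = np.random.default_rng(0)
months = np.arange(36)
post = (months >= 24).astype(float)        # 1.0 after the flag went live
months_after = post * (months - 24)        # months elapsed since go-live
true_rate = 0.145 + 0.0002 * months - 0.004 * post - 0.0003 * months_after
rates = true_rate + rng.normal(0.0, 0.001, size=36)

# Segmented-regression design matrix: intercept, secular trend,
# immediate level change, and post-implementation slope change.
X = np.column_stack([np.ones(36), months, post, months_after])
coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
baseline, trend, level_change, slope_change = coef
```

The `post` and `months_after` columns correspond directly to the immediate and sustained effects described above, while the `months` column absorbs the secular trend.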
All analyses were performed using Stata 12.1 software (Stata Corp, College Station, TX).
RESULTS
Predictors of Readmission
Our systematic review of the literature identified several patient characteristics and healthcare utilization patterns predictive of 30‐day readmission risk. Utilization factors included length of stay, number of prior admissions, previous 30‐day readmissions, and previous ED visits. Patient characteristics included number of comorbidities, living alone, and payor. Evidence was inconsistent regarding threshold values for these variables.
Many variables readily available in our EHR were either found by the systematic review not to be reliably predictive of 30‐day readmission (including age and gender) or were not readily or reliably available on admission (including length of stay and payor). At the time of implementation, our EHR did not include vital sign or nursing assessment variables, so these were not considered for inclusion in our model.
Of the available variables, 3 were consistently predictive and reliably available in the EHR at the time of patient admission: prior hospital admissions, prior ED visits, and prior 30‐day readmissions within UPHS. We then developed 30 candidate prediction models using these variables alone and in combination, at varying event thresholds, over look‐back periods of 6 and 12 months preceding the index visit (Table 1).
Development and Validation
We used 24 months of retrospective data, which included 120,396 discharges with 17,337 thirty‐day readmissions (14.4% 30‐day all‐cause readmission rate) to test the candidate prediction models. A single risk factor, 2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%) (Table 1).
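The selected rule reduces to a single comparison; a minimal sketch (function and variable names are ours, and the exact window semantics, such as calendar‐day counting and inclusion bounds, are our assumption):

```python
from datetime import date, timedelta

def flag_high_risk(prior_admission_dates, index_admission_date):
    """Return True if the patient had 2 or more inpatient admissions in the
    12 months preceding the index admission (the selected risk rule)."""
    window_start = index_admission_date - timedelta(days=365)
    in_window = [d for d in prior_admission_dates
                 if window_start <= d < index_admission_date]
    return len(in_window) >= 2

# Two admissions in the prior year -> flagged on admission.
flagged = flag_high_risk([date(2011, 3, 1), date(2011, 7, 15)], date(2011, 9, 1))
```

The simplicity of a single thresholded count is what made real‐time EHR integration straightforward: the rule needs only the patient's admission history, with no discharge‐time data.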
| Prediction Rule | Sensitivity | Specificity | C Statistic | PPV | NPV | Screen Positive | F Score |
|---|---|---|---|---|---|---|---|
| **Retrospective evaluation: 6‐month lookback** | | | | | | | |
| *Prior admissions* | | | | | | | |
| 1 | 53% | 74% | 0.640 | 26% | 91% | 30% | 0.350 |
| 2 | 32% | 90% | 0.610 | 35% | 89% | 13% | 0.333 |
| 3 | 20% | 96% | 0.578 | 44% | 88% | 7% | 0.274 |
| *Prior ED visits* | | | | | | | |
| 1 | 31% | 81% | 0.558 | 21% | 87% | 21% | 0.252 |
| 2 | 13% | 93% | 0.532 | 25% | 87% | 8% | 0.172 |
| 3 | 7% | 97% | 0.519 | 27% | 86% | 4% | 0.111 |
| *Prior 30‐day readmissions* | | | | | | | |
| 1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.347 |
| 2 | 21% | 95% | 0.582 | 43% | 88% | 7% | 0.284 |
| 3 | 13% | 98% | 0.555 | 53% | 87% | 4% | 0.208 |
| *Combined rules* | | | | | | | |
| Admit1 & ED1 | 22% | 92% | 0.568 | 31% | 88% | 10% | 0.255 |
| Admit2 & ED1 | 15% | 96% | 0.556 | 40% | 87% | 5% | 0.217 |
| Admit1 & 30‐day1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.346 |
| Admit2 & 30‐day1 | 29% | 92% | 0.603 | 37% | 89% | 11% | 0.324 |
| 30‐day1 & ED1 | 17% | 95% | 0.559 | 37% | 87% | 6% | 0.229 |
| 30‐day1 & ED2 | 8% | 98% | 0.527 | 40% | 86% | 3% | 0.132 |
| **Retrospective evaluation: 12‐month lookback** | | | | | | | |
| *Prior admissions* | | | | | | | |
| 1 | 60% | 68% | 0.593 | 24% | 91% | 36% | 0.340 |
| 2ᵃ | 40% | 85% | 0.624 | 31% | 89% | 18% | 0.354 |
| 3 | 28% | 92% | 0.600 | 37% | 88% | 11% | 0.318 |
| *Prior ED visits* | | | | | | | |
| 1 | 38% | 74% | 0.560 | 20% | 88% | 28% | 0.260 |
| 2 | 20% | 88% | 0.544 | 23% | 87% | 13% | 0.215 |
| 3 | 8% | 96% | 0.523 | 27% | 86% | 4% | 0.126 |
| *Prior 30‐day readmissions* | | | | | | | |
| 1 | 43% | 84% | 0.630 | 30% | 90% | 20% | 0.353 |
| 2 | 24% | 94% | 0.592 | 41% | 88% | 9% | 0.305 |
| 3 | 11% | 98% | 0.548 | 54% | 87% | 3% | 0.186 |
| *Combined rules* | | | | | | | |
| Admit1 & ED1 | 29% | 87% | 0.580 | 27% | 88% | 15% | 0.281 |
| Admit2 & ED1 | 22% | 93% | 0.574 | 34% | 88% | 9% | 0.266 |
| Admit1 & 30‐day1 | 42% | 84% | 0.630 | 30% | 90% | 14% | 0.353 |
| Admit2 & 30‐day1 | 34% | 89% | 0.615 | 34% | 89% | 14% | 0.341 |
| 30‐day1 & ED1 | 21% | 93% | 0.569 | 35% | 88% | 9% | 0.261 |
| 30‐day1 & ED2 | 13% | 96% | 0.545 | 37% | 87% | 5% | 0.187 |
| **Prospective evaluation of prediction rule** | | | | | | | |
| 30‐day all‐cause | 39% | 84% | 0.614 | 30% | 89% | 18% | 0.339 |

ᵃPrediction rule selected for implementation (2 inpatient admissions in the preceding 12 months).
Prospective validation of the prediction model was performed using the 12‐month period directly following readmission risk flag implementation. During this period, the 30‐day all‐cause readmission rate was 15.1%. Sensitivity (39%), positive predictive value (30%), and proportion of patients flagged (18%) were consistent with the values derived from the retrospective data, supporting the reproducibility and predictive stability of the chosen risk prediction model (Table 1). The C statistic of the model was also consistent between the retrospective and prospective datasets (0.62 and 0.61, respectively).
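The test characteristics compared above all derive from a 2×2 confusion matrix. As an illustration (using invented counts, not study data), the sketch below recomputes each metric, including the F score defined earlier as the harmonic mean of positive predictive value and sensitivity:

```python
# Illustrative recomputation of the screening metrics used to compare
# candidate prediction rules. The counts are invented for the example.

def screening_metrics(tp, fp, fn, tn):
    """Return the test characteristics from confusion-matrix counts."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)              # sensitivity (recall)
    spec = tn / (tn + fp)              # specificity
    ppv = tp / (tp + fp)               # positive predictive value
    npv = tn / (tn + fn)               # negative predictive value
    screen_pos = (tp + fp) / n         # proportion of patients flagged
    f = 2 * ppv * sens / (ppv + sens)  # F score: harmonic mean of PPV and sens
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "screen_positive": screen_pos, "f_score": f}

# Example: 1,000 discharges, 100 readmitted; a rule flags 130 patients and
# catches 40 of the readmissions.
m = screening_metrics(tp=40, fp=90, fn=60, tn=810)
```

Note the useful identity that the F score equals 2·TP / (2·TP + FP + FN), which is why it rewards rules that flag few patients while still catching readmissions.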
Readmission Rates
The mean 30‐day all‐cause readmission rate for the 24‐month period prior to the intervention was 14.4%, whereas the mean for the 12‐month period after the implementation was 15.1%. Thirty‐day all‐cause and 7‐day unplanned monthly readmission rates do not appear to have been impacted by the intervention (Figure 2). There was no evidence for either an immediate or sustained effect (Table 2).
| Hospital | Preimplementation: Monthly % Change | Trend | P Value | Immediate % Change | P Value | Postimplementation: Monthly % Change | Trend | P Value | P Value, Change in Trend |
|---|---|---|---|---|---|---|---|---|---|
| **30‐day all‐cause readmission rates** | | | | | | | | | |
| Hosp A | 0.023 | Stable | 0.153 | 0.480 | 0.991 | 0.100 | Increasing | 0.044 | 0.134 |
| Hosp B | 0.061 | Increasing | 0.002 | 0.492 | 0.125 | 0.060 | Stable | 0.296 | 0.048 |
| Hosp C | 0.026 | Stable | 0.413 | 0.447 | 0.585 | 0.046 | Stable | 0.629 | 0.476 |
| Health System | 0.032 | Increasing | 0.014 | 0.344 | 0.302 | 0.026 | Stable | 0.499 | 0.881 |
| **7‐day unplanned readmission rates** | | | | | | | | | |
| Hosp A | 0.004 | Stable | 0.642 | 0.271 | 0.417 | 0.005 | Stable | 0.891 | 0.967 |
| Hosp B | 0.012 | Stable | 0.201 | 0.298 | 0.489 | 0.038 | Stable | 0.429 | 0.602 |
| Hosp C | 0.008 | Stable | 0.213 | 0.353 | 0.204 | 0.004 | Stable | 0.895 | 0.899 |
| Health System | 0.005 | Stable | 0.358 | 0.003 | 0.990 | 0.010 | Stable | 0.712 | 0.583 |
DISCUSSION
In this proof‐of‐concept study, we demonstrated the feasibility of an automated readmission risk prediction model integrated into a health system's EHR for a mixed population of hospitalized medical and surgical patients. To our knowledge, this is the first study in a general population of hospitalized patients to examine the impact of providing readmission risk assessment on readmission rates. We used a simple prediction model potentially generalizable to EHRs and healthcare populations beyond our own.
Existing risk prediction models for hospital readmission have important limitations and are difficult to implement in clinical practice.[22] Prediction models for hospital readmission are often dependent on retrospective claims data, developed for specific patient populations, and not designed for use early in the course of hospitalization when transitional care interventions can be initiated.[22] In addition, the time required to gather the necessary data and calculate the risk score remains a barrier to the adoption of prediction models in practice. By automating the process of readmission risk prediction, we were able to help integrate risk assessment into the healthcare process across many providers in a large multihospital healthcare organization. This has allowed us to consistently share risk assessment in real time with all members of the inpatient team, facilitating a team‐based approach to discharge planning.[23]
Two prior studies have developed readmission risk prediction models designed for EHR implementation. Amarasingham et al.[24] developed and implemented[25] a heart failure‐specific prediction model based on the 18‐item Tabak mortality score.[26] Bradley et al.[27] studied the predictive ability of a 26‐item score utilizing vital sign, cardiac rhythm, and nursing assessment data in a broader population of medicine and surgery patients. Although EHRs are developing rapidly, most currently do not support the use of many of the variables in these models. In addition, both models were complex, raising concerns about generalizability to other healthcare settings and populations.
A distinctive characteristic of our model is its simplicity. We were cognizant of the realities of running a prediction model in a high‐volume production environment and the diminishing returns of adding more variables. We thus favored simplicity at all stages of model development, with the associated belief that complexity could be added with future iterations once feasibility had been established. Finally, we were aware that we were constructing a medical decision support tool rather than a simple classifier.[26] As such, the optimal model was not purely driven by discriminative ability, but also by our subjective assessment of the optimal trade‐off between sensitivity and specificity (the test‐treatment threshold) for such a model.[26] To facilitate model assessment, we thus categorized the potential predictor variables and evaluated the test characteristics of each combination of categorized variables. Although the C statistic of a model using continuous variables will be higher than a model using categorical values, model performance at the chosen trade‐off point is unlikely to be different.
Although the overall predictive ability of our model was fair, we found that it was associated with clinically meaningful differences in readmission rates between those triggering and not triggering the flag. The 30‐day all‐cause readmission rate in the 12‐month prospective sample was 15.1%, yet among those flagged as being at high risk for readmission the readmission rate was 30.4%. Given resource constraints and the need to selectively apply potentially costly care transition interventions, this may in practice translate into a meaningful discriminative ability.
Readmission rates did not change significantly during the study period. A number of plausible reasons for this exist, including: (1) the current model may not exhibit sufficient predictive ability to classify those at high risk or impact the behavior of providers appropriately, (2) those patients classified as high risk of readmission may not be at high risk of readmissions that are preventable, (3) information provided by the model may not yet routinely be used such that it can affect care, or (4) providing readmission risk assessment alone is not sufficient to influence readmission rates, and the other interventions or organizational changes necessary to impact care of those defined as high risk have not yet been implemented or are not yet being performed routinely. If the primary reasons for our results are those outlined in numbers 3 or 4, then readmission rates should improve over time as the risk flag becomes more routinely used, and those interventions necessary to impact readmission rates of those defined as high risk are implemented and performed.
Limitations
There are several limitations of this intervention. First, the prediction model was developed using 30‐day all‐cause readmissions, rather than attempting to identify potentially preventable readmissions. Thirty‐day readmission rates may not be a good proxy for preventable readmissions,[18] and as a consequence, the ability to predict 30‐day readmissions may not ensure that a prediction model is able to predict preventable readmissions. Nonetheless, 30‐day readmission rates remain the most commonly used quality metric.
Second, the impact of the risk flag on provider behavior is uncertain. We did not formally assess how healthcare team members used the readmission risk flag. Informal assessment suggests, however, that the flag is gradually being adopted by different members of the care team: unit‐based pharmacists use it to prioritize the delivery of medication education, social workers use it to prompt providers to consider higher‐level services for patients at high risk of readmission, and patient navigators use it to prioritize follow‐up phone calls. We therefore hope that the flag will ultimately improve the processes of care for high‐risk patients.
Third, we did not capture readmissions to hospitals outside of our healthcare system and have therefore underestimated the readmission rate in our population. However, our assessment of the effect of the risk flag on readmissions focused on relative readmission rates over time, and the use of the interrupted time series methodology should protect against secular changes in outside hospital readmission rates that were not associated with the intervention.
Fourth, it is possible that the prediction model implemented could be significantly improved by including additional variables or data available during the hospital stay. However, simple classification models using a single variable have repeatedly been shown to have the ability to compete favorably with state‐of‐the‐art multivariable classification models.[28]
Fifth, our study was limited to a single academic health system, and our experience may not be generalizable to smaller healthcare systems with limited EHR systems. However, the simplicity of our prediction model and the integration into a commercial EHR may improve the generalizability of our experience to other healthcare settings. Additionally, partly due to recent policy initiatives, the adoption of integrated EHR systems by hospitals is expected to continue at a rapid rate and become the standard of care within the near future.[29]
CONCLUSION
An automated prediction model was effectively integrated into an existing EHR and was able to identify patients on admission who are at risk for readmission within 30 days of discharge. Future work will aim to further examine the impact of the flag on readmission rates, further refine the prediction model, and gather data on how providers and care teams use the information provided by the flag.
Disclosure
Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, Grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, Grant UL1TR000003. The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
1. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
2. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074–1081.
3. Do older rural and urban veterans experience different rates of unplanned readmission to VA and non‐VA hospitals? J Rural Health. 2009;25(1):62–69.
4. Cost, causes and rates of rehospitalization of preterm infants. J Perinatol. 2007;27(10):614–619.
5. Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54–60.
6. Incidence and main factors associated with early unplanned hospital readmission among French medical inpatients aged 75 and over admitted through emergency units. Age Ageing. 2008;37(4):416–422.
7. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
8. Hospital Quality Alliance. Available at: http://www.hospitalqualityalliance.org/hospitalqualityalliance/qualitymeasures/qualitymeasures.html. Accessed March 6, 2013.
9. Institute for Healthcare Improvement. Available at: http://www.ihi.org/explore/Readmissions/Pages/default.aspx. Accessed March 6, 2013.
10. Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/HospitalQualityInits/OutcomeMeasures.html. Accessed March 6, 2013.
11. Comprehensive discharge planning and home follow‐up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613–620.
12. Preparing patients and caregivers to participate in care delivered across settings: the Care Transitions Intervention. J Am Geriatr Soc. 2004;52(11):1817–1825.
13. The importance of transitional care in achieving health reform. Health Aff (Millwood). 2011;30(4):746–754.
14. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
15. University of Pennsylvania Health System Center for Evidence‐based Practice. Available at: http://www.uphs.upenn.edu/cep/. Accessed March 6, 2013.
16. Hospital‐based comparative effectiveness centers: translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):1352–1355.
17. Writing Arden Syntax Medical Logic Modules. Comput Biol Med. 1994;24(5):331–363.
18. Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
19. Information Retrieval. 2nd ed. Oxford, UK: Butterworth‐Heinemann; 1979.
20. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27(4):299–309.
21. Application of least squares regression to relationships containing auto‐correlated error terms. J Am Stat Assoc. 1949;44:32–61.
22. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
23. Core principles and values of effective team‐based health care. Available at: https://www.nationalahec.org/pdfs/VSRT‐Team‐Based‐Care‐Principles‐Values.pdf. Accessed March 19, 2013.
24. An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988.
25. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study [published online ahead of print July 31, 2013]. BMJ Qual Saf. doi:10.1136/bmjqs‐2013‐001901.
26. The threshold approach to clinical decision making. N Engl J Med. 1980;302(20):1109–1117.
27. Identifying patients at increased risk for unplanned readmission. Med Care. 2013;51(9):761–766.
28. Very simple classification rules perform well on most commonly used datasets. Mach Learn. 1993;11(1):63–91.
29. Stimulating the adoption of health information technology. N Engl J Med. 2009;360(15):1477–1479.
Unplanned hospital readmissions are common, costly, and potentially avoidable. Approximately 20% of Medicare patients are readmitted within 30 days of discharge.[1] Readmission rates are estimated to be similarly high in other population subgroups,[2, 3, 4] with approximately 80% of patients[1, 5, 6] readmitted to the original discharging hospital. A recent systematic review suggested that 27% of readmissions may be preventable.[7]
Hospital readmissions have increasingly been viewed as a correctable marker of poor quality care and have been adopted by a number of organizations as quality indicators.[8, 9, 10] As a result, hospitals have important internal and external motivations to address readmissions. Identification of patients at high risk for readmissions may be an important first step toward preventing them. In particular, readmission risk assessment could be used to help providers target the delivery of resource‐intensive transitional care interventions[11, 12, 13, 14] to patients with the greatest needs. Such an approach is appealing because it allows hospitals to focus scarce resources where the impact may be greatest and provides a starting point for organizations struggling to develop robust models of transitional care delivery.
Electronic health records (EHRs) may prove to be an important component of strategies designed to risk stratify patients at the point of care. Algorithms integrated into the EHR that automatically generate risk predictions have the potential to (1) improve provider time efficiency by automating the prediction process, (2) improve consistency of data collection and risk score calculation, (3) increase adoption through improved usability, and (4) provide clinically important information in real‐time to all healthcare team members caring for a hospitalized patient.
We thus sought to derive a predictive model for 30‐day readmissions using data reliably present in our EHR at the time of admission, and integrate this predictive model into our hospital's EHR to create an automated prediction tool that identifies on admission patients at high risk for readmission within 30 days of discharge. In addition, we prospectively validated this model using the 12‐month period after implementation and examined the impact on readmissions.
METHODS
Setting
The University of Pennsylvania Health System (UPHS) includes 3 hospitals, with a combined capacity of over 1500 beds and 70,000 annual admissions. All hospitals currently utilize Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL) as their EHR. The study sample included all adult admissions to any of the 3 UPHS hospitals during the study period. Admissions to short procedure, rehabilitation, and hospice units were excluded. The study received expedited approval and a HIPAA waiver from the University of Pennsylvania institutional review board.
Development of Predictive Model
The UPHS Center for Evidence‐based Practice[15, 16] performed a systematic review to identify factors associated with hospital readmission within 30 days of discharge. We then examined the data available from our hospital EHR at the time of admission for those factors identified in the review. Using different threshold values and look‐back periods, we developed and tested 30 candidate prediction models using these variables alone and in combination (Table 1). Prediction models were evaluated using 24 months of historical data between August 1, 2009 and August 1, 2011.
Implementation
An automated readmission risk flag was then integrated into the EHR. Patients classified by the automated prediction model as being at high risk for readmission were flagged in the EHR on admission (Figure 1A). Double‐clicking the flag displays a separate screen with information relevant to discharge planning, including inpatient and emergency department (ED) visits in the prior 12 months, as well as the primary team, length of stay, and admitting problem associated with those admissions (Figure 1B). The prediction model was integrated into the EHR using Arden Syntax for Medical Logic Modules.[17] Because the readmission risk screen presents the provider with a new window, it was developed in Microsoft .NET using C# and Windows Forms (Microsoft Corp., Redmond, WA).
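The production rule runs as an Arden Syntax Medical Logic Module; as a language-neutral illustration only, the selected rule ("2 inpatient admissions in the past 12 months," treated here as two or more) can be sketched as a small function. The function and parameter names are hypothetical, not taken from the production system.

```python
# Hypothetical re-expression of the selected readmission risk rule:
# flag the patient on admission when there are >= 2 inpatient admissions
# in the 12 months before the index admission.
from datetime import date, timedelta

def flag_high_risk(prior_admission_dates, index_admission_date,
                   lookback_days=365, threshold=2):
    """Return True when prior admissions in the lookback window meet the threshold."""
    window_start = index_admission_date - timedelta(days=lookback_days)
    recent = [d for d in prior_admission_dates
              if window_start <= d < index_admission_date]
    return len(recent) >= threshold

# Two admissions in the prior year -> flagged on admission.
flagged = flag_high_risk([date(2011, 1, 5), date(2011, 6, 2)], date(2011, 9, 1))
```

Because the rule needs only admission timestamps already present at registration, it can fire at the moment of admission rather than waiting for data accumulated during the stay.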
The flag was visible on the patient lists of all providers who utilized the EHR. This included but was not limited to nurses, social workers, unit pharmacists, and physicians. At the time of implementation, educational events regarding the readmission risk flag were provided in forums targeting administrators, pharmacists, social workers, and housestaff. Information about the flag and recommendations for use were distributed through emails and broadcast screensaver messages disseminated throughout the inpatient units of the health system. Providers were asked to pay special attention to discharge planning for patients triggering the readmission risk flag, including medication reconciliation by pharmacists for these patients prior to discharge, and arrangement of available home services by social workers.
The risk flag was 1 of 4 classes of interventions developed and endorsed by the health system in its efforts to reduce readmissions. Besides risk stratification, the other classes were: interdisciplinary rounding, patient education, and discharge communication. None of the interventions alone were expected to decrease readmissions, but as all 4 classes of interventions were implemented and performed routinely, the expectation was that they would work in concert to reduce readmissions.
Analysis
The primary outcome was all‐cause hospital readmissions in the healthcare system within 30 days of discharge. Although this outcome is commonly used both in the literature and as a quality metric, significant debate persists as to the appropriateness of this metric.[18] Many of the factors driving 30‐day readmissions may be dependent on factors outside of the discharging hospital's control and it has been argued that nearer‐term, nonelective readmission rates may provide a more meaningful quality metric.[18] Seven‐day unplanned readmissions were thus used as a secondary outcome measure for this study.
Sensitivity, specificity, predictive value, C statistic, F score (the harmonic mean of positive predictive value and sensitivity),[19] and screen‐positive rate were calculated for each of the 30 prediction models evaluated using the historical data. The prediction model with the best balance of F score and screen‐positive rate was selected for integration into the EHR. Prospective validation of the selected prediction model was performed using the 12‐month period following implementation of the risk flag (September 2011 to September 2012).
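One plausible formalization of "best balance of F score and screen‐positive rate" (the paper does not specify an explicit decision rule, so the 20% workload budget below is an assumption for illustration) is to take the highest F score among rules whose screen‐positive rate fits the budget. The metric values are drawn from the 12‐month lookback rows of Table 1.

```python
# Hypothetical selection procedure: maximize F score subject to a
# screen-positive (workload) budget. The 20% budget is an assumption;
# the (F score, screen-positive) pairs come from Table 1.
candidates = [
    ("1 prior admission",          0.340, 0.36),  # (rule, F score, screen-positive)
    ("2 prior admissions",         0.354, 0.18),
    ("1 prior 30-day readmission", 0.353, 0.20),
    ("1 prior ED visit",           0.260, 0.28),
]

def select_rule(rules, max_screen_positive=0.20):
    """Pick the highest-F rule whose flag rate stays within the budget."""
    feasible = [r for r in rules if r[2] <= max_screen_positive]
    return max(feasible, key=lambda r: r[1])

best_rule, best_f, best_flag_rate = select_rule(candidates)
```

Under this (assumed) budget the procedure lands on the rule the study actually chose, 2 prior admissions in 12 months, which flags 18% of patients with the highest feasible F score.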
Limitations
There are several limitations of this intervention. First, the prediction model was developed using 30‐day all‐cause readmissions, rather than attempting to identify potentially preventable readmissions. Thirty‐day readmission rates may not be a good proxy for preventable readmissions,[18] and as a consequence, the ability to predict 30‐day readmissions may not ensure that a prediction model is able to predict preventable readmissions. Nonetheless, 30‐day readmission rates remain the most commonly used quality metric.
Second, the impact of the risk flag on provider behavior is uncertain. We did not formally assess how the readmission risk flag was used by healthcare team members. Informal assessment has, however, revealed that the readmission risk flag is gradually being adopted by different members of the care team including unit‐based pharmacists who are using the flag to prioritize the delivery of medication education, social workers who are using the flag to prompt providers to consider higher level services for patients at high risk of readmission, and patient navigators who are using the flag to prioritize follow‐up phone calls. As a result, we hope that the flag will ultimately improve the processes of care for high‐risk patients.
Third, we did not capture readmissions to hospitals outside of our healthcare system and have therefore underestimated the readmission rate in our population. However, our assessment of the effect of the risk flag on readmissions focused on relative readmission rates over time, and the use of the interrupted time series methodology should protect against secular changes in outside hospital readmission rates that were not associated with the intervention.
Fourth, it is possible that the prediction model implemented could be significantly improved by including additional variables or data available during the hospital stay. However, simple classification models using a single variable have repeatedly been shown to have the ability to compete favorably with state‐of‐the‐art multivariable classification models.[28]
Fifth, our study was limited to a single academic health system, and our experience may not be generalizable to smaller healthcare systems with limited EHR systems. However, the simplicity of our prediction model and the integration into a commercial EHR may improve the generalizability of our experience to other healthcare settings. Additionally, partly due to recent policy initiatives, the adoption of integrated EHR systems by hospitals is expected to continue at a rapid rate and become the standard of care within the near future.[29]
CONCLUSION
An automated prediction model was effectively integrated into an existing EHR and was able to identify patients on admission who are at risk for readmission within 30 days of discharge. Future work will aim to further examine the impact of the flag on readmission rates, further refine the prediction model, and gather data on how providers and care teams use the information provided by the flag.
Disclosure
Dr. Umscheid‐s contribution to this project was supported in part by the National Center for Research Resources, Grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, Grant UL1TR000003. The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
Unplanned hospital readmissions are common, costly, and potentially avoidable. Approximately 20% of Medicare patients are readmitted within 30 days of discharge.[1] Readmission rates are estimated to be similarly high in other population subgroups,[2, 3, 4] with approximately 80% of patients[1, 5, 6] readmitted to the original discharging hospital. A recent systematic review suggested that 27% of readmissions may be preventable.[7]
Hospital readmissions have increasingly been viewed as a correctable marker of poor quality care and have been adopted by a number of organizations as quality indicators.[8, 9, 10] As a result, hospitals have important internal and external motivations to address readmissions. Identification of patients at high risk for readmissions may be an important first step toward preventing them. In particular, readmission risk assessment could be used to help providers target the delivery of resource‐intensive transitional care interventions[11, 12, 13, 14] to patients with the greatest needs. Such an approach is appealing because it allows hospitals to focus scarce resources where the impact may be greatest and provides a starting point for organizations struggling to develop robust models of transitional care delivery.
Electronic health records (EHRs) may prove to be an important component of strategies designed to risk stratify patients at the point of care. Algorithms integrated into the EHR that automatically generate risk predictions have the potential to (1) improve provider time efficiency by automating the prediction process, (2) improve consistency of data collection and risk score calculation, (3) increase adoption through improved usability, and (4) provide clinically important information in real‐time to all healthcare team members caring for a hospitalized patient.
We thus sought to derive a predictive model for 30‐day readmissions using data reliably present in our EHR at the time of admission, and integrate this predictive model into our hospital's EHR to create an automated prediction tool that identifies on admission patients at high risk for readmission within 30 days of discharge. In addition, we prospectively validated this model using the 12‐month period after implementation and examined the impact on readmissions.
METHODS
Setting
The University of Pennsylvania Health System (UPHS) includes 3 hospitals, with a combined capacity of over 1500 beds and 70,000 annual admissions. All hospitals currently utilize Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL) as their EHR. The study sample included all adult admissions to any of the 3 UPHS hospitals during the study period. Admissions to short procedure, rehabilitation, and hospice units were excluded. The study received expedited approval and a HIPAA waiver from the University of Pennsylvania institutional review board.
Development of Predictive Model
The UPHS Center for Evidence‐based Practice[15, 16] performed a systematic review to identify factors associated with hospital readmission within 30 days of discharge. We then examined the data available from our hospital EHR at the time of admission for those factors identified in the review. Using different threshold values and look‐back periods, we developed and tested 30 candidate prediction models using these variables alone and in combination (Table 1). Prediction models were evaluated using 24 months of historical data between August 1, 2009 and August 1, 2011.
Implementation
An automated readmission risk flag was then integrated into the EHR. Patients classified as being at high risk for readmission with the automated prediction model were flagged in the EHR on admission (Figure 1A). The flag can be double‐clicked to display a separate screen with information relevant to discharge planning including inpatient and emergency department (ED) visits in the prior 12 months, as well as information about the primary team, length of stay, and admitting problem associated with those admissions (Figure 1B). The prediction model was integrated into our EHR using Arden Syntax for Medical Logic Modules.[17] The readmission risk screen involved presenting the provider with a new screen and was thus developed in Microsoft .NET using C# and Windows Forms (Microsoft Corp., Redmond, WA).
The flag was visible on the patient lists of all providers who utilized the EHR. This included but was not limited to nurses, social workers, unit pharmacists, and physicians. At the time of implementation, educational events regarding the readmission risk flag were provided in forums targeting administrators, pharmacists, social workers, and housestaff. Information about the flag and recommendations for use were distributed through emails and broadcast screensaver messages disseminated throughout the inpatient units of the health system. Providers were asked to pay special attention to discharge planning for patients triggering the readmission risk flag, including medication reconciliation by pharmacists for these patients prior to discharge, and arrangement of available home services by social workers.
The risk flag was 1 of 4 classes of interventions developed and endorsed by the health system in its efforts to reduce readmissions. Besides risk stratification, the other classes were: interdisciplinary rounding, patient education, and discharge communication. None of the interventions alone were expected to decrease readmissions, but as all 4 classes of interventions were implemented and performed routinely, the expectation was that they would work in concert to reduce readmissions.
Analysis
The primary outcome was all‐cause hospital readmissions in the healthcare system within 30 days of discharge. Although this outcome is commonly used both in the literature and as a quality metric, significant debate persists as to the appropriateness of this metric.[18] Many of the factors driving 30‐day readmissions may be dependent on factors outside of the discharging hospital's control and it has been argued that nearer‐term, nonelective readmission rates may provide a more meaningful quality metric.[18] Seven‐day unplanned readmissions were thus used as a secondary outcome measure for this study.
Sensitivity, specificity, positive and negative predictive values, C statistic, F score (the harmonic mean of positive predictive value and sensitivity),[19] and screen‐positive rate were calculated for each of the 30 prediction models evaluated using the historical data. The prediction model with the best balance of F score and screen‐positive rate was selected as the prediction model to be integrated into the EHR. Prospective validation of the selected prediction model was performed using the 12‐month period following implementation of the risk flag (September 2011–September 2012).
To assess the impact of the automated prediction model on monthly readmission rate, we used the 24‐month period immediately before and the 12‐month period immediately after implementation of the readmission risk flag. Segmented regression analysis was performed testing for changes in level and slope of readmission rates between preimplementation and postimplementation time periods. This quasiexperimental interrupted time series methodology[20] allows us to control for secular trends in readmission rates and to assess the preimplementation trend (secular trend), the difference in rates immediately before and after the implementation (immediate effect), and the postimplementation change over time (sustained effect). We used Cochrane‐Orcutt estimation[21] to correct for serial autocorrelation.
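The segmented regression can be sketched as an ordinary least squares fit with four terms: baseline level, secular trend, an immediate level change at implementation, and a post‐implementation change in slope. The sketch below uses synthetic, noise‐free monthly rates (the parameter values are assumptions for illustration) and omits the Cochrane‐Orcutt autocorrelation correction used in the study:

```python
import numpy as np

# Synthetic series: 24 pre-implementation and 12 post-implementation months
n_pre, n_post = 24, 12
t = np.arange(n_pre + n_post, dtype=float)
post = (t >= n_pre).astype(float)    # indicator: 1 after implementation
t_post = post * (t - n_pre)          # months elapsed since implementation

# Assumed true parameters: 14.4% baseline, small secular trend,
# +0.5 immediate level change, +0.05/month change in slope
y = 14.4 + 0.02 * t + 0.5 * post + 0.05 * t_post

# Design matrix: [intercept, secular trend, immediate effect, sustained effect]
X = np.column_stack([np.ones_like(t), t, post, t_post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[2] estimates the immediate effect; beta[3] the sustained (slope) effect
```

In the study analysis, each coefficient maps to one of the reported quantities: the pre‐implementation slope (secular trend), the level change at implementation (immediate effect), and the post‐implementation slope change (sustained effect).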
All analyses were performed using Stata 12.1 software (Stata Corp, College Station, TX).
RESULTS
Predictors of Readmission
Our systematic review of the literature identified several patient and healthcare utilization patterns predictive of 30‐day readmission risk. Utilization factors included length of stay, number of prior admissions, previous 30‐day readmissions, and previous ED visits. Patient characteristics included number of comorbidities, living alone, and payor. Evidence was inconsistent regarding threshold values for these variables.
Many variables readily available in our EHR were either found by the systematic review not to be reliably predictive of 30‐day readmission (including age and gender) or were not readily or reliably available on admission (including length of stay and payor). At the time of implementation, our EHR did not include vital sign or nursing assessment variables, so these were not considered for inclusion in our model.
Of the available variables, 3 were consistently accurate and available in the EHR at the time of patient admission: prior hospital admission, emergency department visit, and 30‐day readmission within UPHS. We then developed 30 candidate prediction models using a combination of these variables, including 1 and 2 prior admissions, ED visits, and 30‐day readmissions in the 6 and 12 months preceding the index visit (Table 1).
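A threshold rule of this form, such as the eventual "2 or more admissions in the prior 12 months" criterion, reduces to a simple count over a lookback window. A sketch of that logic (the data structures and field names here are hypothetical, not our EHR schema):

```python
from datetime import date, timedelta

def flag_high_risk(admission_dates, index_date, lookback_days=365, threshold=2):
    """Flag a patient if prior admissions in the lookback window meet the threshold."""
    window_start = index_date - timedelta(days=lookback_days)
    # Count admissions inside the window, excluding the index admission itself
    prior = [d for d in admission_dates if window_start <= d < index_date]
    return len(prior) >= threshold

# Hypothetical patient with two admissions in the 12 months before the index stay
history = [date(2011, 1, 10), date(2011, 5, 2)]
print(flag_high_risk(history, index_date=date(2011, 9, 15)))  # True
```

Because the rule needs only dates of prior encounters, it can run at the moment of admission, before any inpatient data accumulate.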
Development and Validation
We used 24 months of retrospective data, which included 120,396 discharges with 17,337 thirty‐day readmissions (14.4% 30‐day all‐cause readmission rate) to test the candidate prediction models. A single risk factor, 2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%) (Table 1).
|  | Sensitivity | Specificity | C Statistic | PPV | NPV | Screen Positive | F Score |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **Retrospective Evaluation of Prediction Rules: 6‐Month Lookback Period** |  |  |  |  |  |  |  |
| *Prior Admissions* |  |  |  |  |  |  |  |
| 1 | 53% | 74% | 0.640 | 26% | 91% | 30% | 0.350 |
| 2 | 32% | 90% | 0.610 | 35% | 89% | 13% | 0.333 |
| 3 | 20% | 96% | 0.578 | 44% | 88% | 7% | 0.274 |
| *Prior ED Visits* |  |  |  |  |  |  |  |
| 1 | 31% | 81% | 0.558 | 21% | 87% | 21% | 0.252 |
| 2 | 13% | 93% | 0.532 | 25% | 87% | 8% | 0.172 |
| 3 | 7% | 97% | 0.519 | 27% | 86% | 4% | 0.111 |
| *Prior 30‐Day Readmissions* |  |  |  |  |  |  |  |
| 1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.347 |
| 2 | 21% | 95% | 0.582 | 43% | 88% | 7% | 0.284 |
| 3 | 13% | 98% | 0.555 | 53% | 87% | 4% | 0.208 |
| *Combined Rules* |  |  |  |  |  |  |  |
| Admit1 & ED1 | 22% | 92% | 0.568 | 31% | 88% | 10% | 0.255 |
| Admit2 & ED1 | 15% | 96% | 0.556 | 40% | 87% | 5% | 0.217 |
| Admit1 & 30‐day1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.346 |
| Admit2 & 30‐day1 | 29% | 92% | 0.603 | 37% | 89% | 11% | 0.324 |
| 30‐day1 & ED1 | 17% | 95% | 0.559 | 37% | 87% | 6% | 0.229 |
| 30‐day1 & ED2 | 8% | 98% | 0.527 | 40% | 86% | 3% | 0.132 |
| **Retrospective Evaluation of Prediction Rules: 12‐Month Lookback Period** |  |  |  |  |  |  |  |
| *Prior Admissions* |  |  |  |  |  |  |  |
| 1 | 60% | 68% | 0.593 | 24% | 91% | 36% | 0.340 |
| 2a | 40% | 85% | 0.624 | 31% | 89% | 18% | 0.354 |
| 3 | 28% | 92% | 0.600 | 37% | 88% | 11% | 0.318 |
| *Prior ED Visits* |  |  |  |  |  |  |  |
| 1 | 38% | 74% | 0.560 | 20% | 88% | 28% | 0.260 |
| 2 | 20% | 88% | 0.544 | 23% | 87% | 13% | 0.215 |
| 3 | 8% | 96% | 0.523 | 27% | 86% | 4% | 0.126 |
| *Prior 30‐Day Readmissions* |  |  |  |  |  |  |  |
| 1 | 43% | 84% | 0.630 | 30% | 90% | 20% | 0.353 |
| 2 | 24% | 94% | 0.592 | 41% | 88% | 9% | 0.305 |
| 3 | 11% | 98% | 0.548 | 54% | 87% | 3% | 0.186 |
| *Combined Rules* |  |  |  |  |  |  |  |
| Admit1 & ED1 | 29% | 87% | 0.580 | 27% | 88% | 15% | 0.281 |
| Admit2 & ED1 | 22% | 93% | 0.574 | 34% | 88% | 9% | 0.266 |
| Admit1 & 30‐day1 | 42% | 84% | 0.630 | 30% | 90% | 14% | 0.353 |
| Admit2 & 30‐day1 | 34% | 89% | 0.615 | 34% | 89% | 14% | 0.341 |
| 30‐day1 & ED1 | 21% | 93% | 0.569 | 35% | 88% | 9% | 0.261 |
| 30‐day1 & ED2 | 13% | 96% | 0.545 | 37% | 87% | 5% | 0.187 |
| **Prospective Evaluation of Prediction Rule** |  |  |  |  |  |  |  |
| 30‐Day All‐Cause | 39% | 84% | 0.614 | 30% | 89% | 18% | 0.339 |

a Prediction rule selected for integration into the EHR. Abbreviations: ED, emergency department; NPV, negative predictive value; PPV, positive predictive value.
Prospective validation of the prediction model was performed using the 12‐month period directly following readmission risk flag implementation. During this period, the 30‐day all‐cause readmission rate was 15.1%. Sensitivity (39%), positive predictive value (30%), and proportion of patients flagged (18%) were consistent with the values derived from the retrospective data, supporting the reproducibility and predictive stability of the chosen risk prediction model (Table 1). The C statistic of the model was also consistent between the retrospective and prospective datasets (0.62 and 0.61, respectively).
Readmission Rates
The mean 30‐day all‐cause readmission rate for the 24‐month period prior to the intervention was 14.4%, whereas the mean for the 12‐month period after the implementation was 15.1%. Thirty‐day all‐cause and 7‐day unplanned monthly readmission rates do not appear to have been impacted by the intervention (Figure 2). There was no evidence for either an immediate or sustained effect (Table 2).
| Hospital | Preimplementation: Monthly % Change in Readmission Rates | Preimplementation: Trend | Preimplementation: P Value | Immediate % Change | Immediate Effect: P Value | Postimplementation: Monthly % Change in Readmission Rates | Postimplementation: Trend | Postimplementation: P Value | P Value, Change in Trend |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **30‐Day All‐Cause Readmission Rates** |  |  |  |  |  |  |  |  |  |
| Hosp A | 0.023 | Stable | 0.153 | 0.480 | 0.991 | 0.100 | Increasing | 0.044 | 0.134 |
| Hosp B | 0.061 | Increasing | 0.002 | 0.492 | 0.125 | 0.060 | Stable | 0.296 | 0.048 |
| Hosp C | 0.026 | Stable | 0.413 | 0.447 | 0.585 | 0.046 | Stable | 0.629 | 0.476 |
| Health System | 0.032 | Increasing | 0.014 | 0.344 | 0.302 | 0.026 | Stable | 0.499 | 0.881 |
| **7‐Day Unplanned Readmission Rates** |  |  |  |  |  |  |  |  |  |
| Hosp A | 0.004 | Stable | 0.642 | 0.271 | 0.417 | 0.005 | Stable | 0.891 | 0.967 |
| Hosp B | 0.012 | Stable | 0.201 | 0.298 | 0.489 | 0.038 | Stable | 0.429 | 0.602 |
| Hosp C | 0.008 | Stable | 0.213 | 0.353 | 0.204 | 0.004 | Stable | 0.895 | 0.899 |
| Health System | 0.005 | Stable | 0.358 | 0.003 | 0.990 | 0.010 | Stable | 0.712 | 0.583 |
DISCUSSION
In this proof‐of‐concept study, we demonstrated the feasibility of an automated readmission risk prediction model integrated into a health system's EHR for a mixed population of hospitalized medical and surgical patients. To our knowledge, this is the first study in a general population of hospitalized patients to examine the impact of providing readmission risk assessment on readmission rates. We used a simple prediction model potentially generalizable to EHRs and healthcare populations beyond our own.
Existing risk prediction models for hospital readmission have important limitations and are difficult to implement in clinical practice.[22] Prediction models for hospital readmission are often dependent on retrospective claims data, developed for specific patient populations, and not designed for use early in the course of hospitalization when transitional care interventions can be initiated.[22] In addition, the time required to gather the necessary data and calculate the risk score remains a barrier to the adoption of prediction models in practice. By automating the process of readmission risk prediction, we were able to help integrate risk assessment into the healthcare process across many providers in a large multihospital healthcare organization. This has allowed us to consistently share risk assessment in real time with all members of the inpatient team, facilitating a team‐based approach to discharge planning.[23]
Two prior studies have developed readmission risk prediction models designed to be implemented in the EHR. Amarasingham et al.[24] developed and implemented[25] a heart failure‐specific prediction model based on the 18‐item Tabak mortality score.[26] Bradley et al.[27] studied the predictive ability of a 26‐item score incorporating vital sign, cardiac rhythm, and nursing assessment data in a broader population of medicine and surgery patients. Although EHRs are developing rapidly, the majority currently do not support the use of many of the variables in these models. In addition, both models were complex, raising concerns about generalizability to other healthcare settings and populations.
A distinctive characteristic of our model is its simplicity. We were cognizant of the realities of running a prediction model in a high‐volume production environment and the diminishing returns of adding more variables. We thus favored simplicity at all stages of model development, with the associated belief that complexity could be added with future iterations once feasibility had been established. Finally, we were aware that we were constructing a medical decision support tool rather than a simple classifier.[26] As such, the optimal model was not purely driven by discriminative ability, but also by our subjective assessment of the optimal trade‐off between sensitivity and specificity (the test‐treatment threshold) for such a model.[26] To facilitate model assessment, we thus categorized the potential predictor variables and evaluated the test characteristics of each combination of categorized variables. Although the C statistic of a model using continuous variables will be higher than a model using categorical values, model performance at the chosen trade‐off point is unlikely to be different.
Although the overall predictive ability of our model was fair, we found that it was associated with clinically meaningful differences in readmission rates between those triggering and not triggering the flag. The 30‐day all‐cause readmission rate in the 12‐month prospective sample was 15.1%, yet among those flagged as being at high risk for readmission the readmission rate was 30.4%. Given resource constraints and the need to selectively apply potentially costly care transition interventions, this may in practice translate into a meaningful discriminative ability.
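The separation implied by these figures can be checked directly: given the overall rate, the flagged proportion, and the rate among flagged patients, the rate among unflagged patients follows from a weighted average (a worked check using the reported prospective figures):

```python
overall_rate = 0.151   # 30-day all-cause rate, 12-month prospective sample
flagged_frac = 0.18    # proportion of discharges flagged as high risk
flagged_rate = 0.304   # readmission rate among flagged patients

# overall = flagged_frac * flagged_rate + (1 - flagged_frac) * unflagged_rate
unflagged_rate = (overall_rate - flagged_frac * flagged_rate) / (1 - flagged_frac)
print(round(unflagged_rate, 3))  # roughly 0.117
```

That is, flagged patients were readmitted at roughly 2.5 times the rate of unflagged patients, which is the practical sense in which the flag concentrates transitional care resources despite a modest C statistic.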
Readmission rates did not change significantly during the study period. A number of plausible reasons for this exist, including: (1) the current model may not exhibit sufficient predictive ability to classify those at high risk or impact the behavior of providers appropriately, (2) those patients classified as high risk of readmission may not be at high risk of readmissions that are preventable, (3) information provided by the model may not yet routinely be used such that it can affect care, or (4) providing readmission risk assessment alone is not sufficient to influence readmission rates, and the other interventions or organizational changes necessary to impact care of those defined as high risk have not yet been implemented or are not yet being performed routinely. If the primary reasons for our results are those outlined in numbers 3 or 4, then readmission rates should improve over time as the risk flag becomes more routinely used, and those interventions necessary to impact readmission rates of those defined as high risk are implemented and performed.
Limitations
There are several limitations of this intervention. First, the prediction model was developed using 30‐day all‐cause readmissions, rather than attempting to identify potentially preventable readmissions. Thirty‐day readmission rates may not be a good proxy for preventable readmissions,[18] and as a consequence, the ability to predict 30‐day readmissions may not ensure that a prediction model is able to predict preventable readmissions. Nonetheless, 30‐day readmission rates remain the most commonly used quality metric.
Second, the impact of the risk flag on provider behavior is uncertain. We did not formally assess how the readmission risk flag was used by healthcare team members. Informal assessment has, however, revealed that the readmission risk flag is gradually being adopted by different members of the care team including unit‐based pharmacists who are using the flag to prioritize the delivery of medication education, social workers who are using the flag to prompt providers to consider higher level services for patients at high risk of readmission, and patient navigators who are using the flag to prioritize follow‐up phone calls. As a result, we hope that the flag will ultimately improve the processes of care for high‐risk patients.
Third, we did not capture readmissions to hospitals outside of our healthcare system and have therefore underestimated the readmission rate in our population. However, our assessment of the effect of the risk flag on readmissions focused on relative readmission rates over time, and the use of the interrupted time series methodology should protect against secular changes in outside hospital readmission rates that were not associated with the intervention.
Fourth, it is possible that the prediction model implemented could be significantly improved by including additional variables or data available during the hospital stay. However, simple classification models using a single variable have repeatedly been shown to have the ability to compete favorably with state‐of‐the‐art multivariable classification models.[28]
Fifth, our study was limited to a single academic health system, and our experience may not be generalizable to smaller healthcare systems with limited EHR systems. However, the simplicity of our prediction model and the integration into a commercial EHR may improve the generalizability of our experience to other healthcare settings. Additionally, partly due to recent policy initiatives, the adoption of integrated EHR systems by hospitals is expected to continue at a rapid rate and become the standard of care within the near future.[29]
CONCLUSION
An automated prediction model was effectively integrated into an existing EHR and was able to identify patients on admission who are at risk for readmission within 30 days of discharge. Future work will aim to further examine the impact of the flag on readmission rates, further refine the prediction model, and gather data on how providers and care teams use the information provided by the flag.
Disclosure
Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, Grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, Grant UL1TR000003. The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.
- Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
- Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074–1081.
- Do older rural and urban veterans experience different rates of unplanned readmission to VA and non‐VA hospitals? J Rural Health. 2009;25(1):62–69.
- Cost, causes and rates of rehospitalization of preterm infants. J Perinatol. 2007;27(10):614–619.
- Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54–60.
- Incidence and main factors associated with early unplanned hospital readmission among French medical inpatients aged 75 and over admitted through emergency units. Age Ageing. 2008;37(4):416–422.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Hospital Quality Alliance. Available at: http://www.hospitalqualityalliance.org/hospitalqualityalliance/qualitymeasures/qualitymeasures.html. Accessed March 6, 2013.
- Institute for Healthcare Improvement. Available at: http://www.ihi.org/explore/Readmissions/Pages/default.aspx. Accessed March 6, 2013.
- Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/HospitalQualityInits/OutcomeMeasures.html. Accessed March 6, 2013.
- Comprehensive discharge planning and home follow‐up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613–620.
- Preparing patients and caregivers to participate in care delivered across settings: the Care Transitions Intervention. J Am Geriatr Soc. 2004;52(11):1817–1825.
- The importance of transitional care in achieving health reform. Health Aff (Millwood). 2011;30(4):746–754.
- Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
- University of Pennsylvania Health System Center for Evidence‐based Practice. Available at: http://www.uphs.upenn.edu/cep/. Accessed March 6, 2013.
- Hospital‐based comparative effectiveness centers: translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):1352–1355.
- Writing Arden Syntax Medical Logic Modules. Comput Biol Med. 1994;24(5):331–363.
- Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
- Information Retrieval. 2nd ed. Oxford, UK: Butterworth‐Heinemann; 1979.
- Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27(4):299–309.
- Application of least squares regression to relationships containing auto‐correlated error terms. J Am Stat Assoc. 1949;44:32–61.
- Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
- Core principles and values of effective team‐based health care. Available at: https://www.nationalahec.org/pdfs/VSRT‐Team‐Based‐Care‐Principles‐Values.pdf. Accessed March 19, 2013.
- An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988.
- Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study [published online ahead of print July 31, 2013]. BMJ Qual Saf. doi:10.1136/bmjqs‐2013‐001901.
- The threshold approach to clinical decision making. N Engl J Med. 1980;302(20):1109–1117.
- Identifying patients at increased risk for unplanned readmission. Med Care. 2013;51(9):761–766.
- Very simple classification rules perform well on most commonly used datasets. Mach Learn. 1993;11(1):63–91.
- Stimulating the adoption of health information technology. N Engl J Med. 2009;360(15):1477–1479.
- Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428. , , .
- Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074–1081. , .
- Do older rural and urban veterans experience different rates of unplanned readmission to VA and non‐VA hospitals? J Rural Health. 2009;25(1):62–69. , , , , .
- Cost, causes and rates of rehospitalization of preterm infants. J Perinatol. 2007;27(10):614–619. , , .
- Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54–60. , , , .
- Incidence and main factors associated with early unplanned hospital readmission among French medical inpatients aged 75 and over admitted through emergency units. Age Ageing. 2008;37(4):416–422. , , , et al.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402. , , , , .
- Hospital Quality Alliance. Available at: http://www.hospitalqualityalliance.org/hospitalqualityalliance/qualitymeasures/qualitymeasures.html. Accessed March 6, 2013.
- Institute for Healthcare Improvement. Available at: http://www.ihi.org/explore/Readmissions/Pages/default.aspx. Accessed March 6, 2013.
- Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/HospitalQualityInits/OutcomeMeasures.html. Accessed March 6, 2013.
- Comprehensive discharge planning and home follow‐up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613–620. , , , et al.
- Preparing patients and caregivers to participate in care delivered across settings: the Care Transitions Intervention. J Am Geriatr Soc. 2004;52(11):1817–1825. , , , , , .
© 2013 Society of Hospital Medicine