Development, implementation, and impact of an automated early warning and response system for sepsis

Gordon Tait, BS
Department of Medicine, University of Pennsylvania
Center for Clinical Epidemiology and Biostatistics, University of Pennsylvania

There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]

Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.

Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.

METHODS

Setting and Data Sources

The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.

Development of the Intervention

The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.

To identify at‐risk patients, we used established criteria for severe sepsis: the systemic inflammatory response syndrome (SIRS) criteria (temperature <36°C or >38°C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4,000/mm3 or >12,000/mm3 or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction, based on a systolic blood pressure <100 mm Hg, and hypoperfusion, based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our laboratory]).[11, 12]
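
As an illustration, the 6-criteria count can be expressed as a small scoring function. This is a sketch only: the field names, units, and flat "snapshot" data model are assumptions for illustration, not the EHR's actual representation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Snapshot:
    """Most recent values for one patient at one moment; None means no
    value is available. Field names and units are illustrative."""
    temp_c: Optional[float] = None          # temperature, degrees C
    heart_rate: Optional[float] = None      # beats/min
    resp_rate: Optional[float] = None       # breaths/min
    paco2_mmhg: Optional[float] = None      # arterial CO2 tension
    wbc_k_per_ul: Optional[float] = None    # total WBC, thousands per uL
    bands_pct: Optional[float] = None       # band forms, percent
    sbp_mmhg: Optional[float] = None        # systolic blood pressure
    lactate_mmol_l: Optional[float] = None  # serum lactate

def ewrs_score(s: Snapshot) -> int:
    """Number of the 6 criteria met (4 SIRS + 2 organ dysfunction)."""
    met = 0
    # SIRS: temperature <36 or >38 degrees C
    if s.temp_c is not None and (s.temp_c < 36 or s.temp_c > 38):
        met += 1
    # SIRS: heart rate >90 bpm
    if s.heart_rate is not None and s.heart_rate > 90:
        met += 1
    # SIRS: respiratory rate >20 breaths/min or PaCO2 <32 mm Hg
    if (s.resp_rate is not None and s.resp_rate > 20) or \
       (s.paco2_mmhg is not None and s.paco2_mmhg < 32):
        met += 1
    # SIRS: WBC <4,000/mm3 or >12,000/mm3, or >10% bands
    if (s.wbc_k_per_ul is not None and
            (s.wbc_k_per_ul < 4 or s.wbc_k_per_ul > 12)) or \
       (s.bands_pct is not None and s.bands_pct > 10):
        met += 1
    # Organ dysfunction: systolic BP <100 mm Hg
    if s.sbp_mmhg is not None and s.sbp_mmhg < 100:
        met += 1
    # Hypoperfusion: lactate >2.2 mmol/L
    if s.lactate_mmol_l is not None and s.lactate_mmol_l > 2.2:
        met += 1
    return met
```

The score ranges from 0 (no values, or all normal) to 6 (every criterion met), matching the minimum and maximum described below.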

To establish a threshold for triggering the system, we used a derivation cohort defined as patients admitted between October 1, 2011, and October 31, 2011, to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criterion, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could meet at any single time were 0 and 6, respectively. We then categorized patients by the maximum number of criteria achieved and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.
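
The "most recent value within a look-back window" rule and the per-visit maximum can be sketched as follows. The tuple-based history format and function names are assumptions for illustration; only the windows (24 hours for vital signs, 48 hours for labs) and the max-over-visit logic come from the text.

```python
from datetime import datetime, timedelta

# Look-back windows from the text: 24 h for vital signs, 48 h for labs.
LOOKBACK = {"vital": timedelta(hours=24), "lab": timedelta(hours=48)}

def latest_within(history, param, now):
    """Most recent value of `param` recorded at or before `now` and still
    inside its look-back window, else None. `history` is a time-sorted
    list of (timestamp, param, kind, value) tuples."""
    value = None
    for ts, p, kind, v in history:
        if p == param and ts <= now and now - ts <= LOOKBACK[kind]:
            value = v  # sorted order: the last match is the most recent
    return value

def max_visit_score(history, score_fn):
    """Score the patient at every observation time using the most recent
    value of each criterion, and return the maximum score achieved at
    any single time during the visit."""
    params = {p for _, p, _, _ in history}
    best = 0
    for now, _, _, _ in history:
        current = {p: latest_within(history, p, now) for p in params}
        best = max(best, score_fn(current))
    return best
```

Note that two abnormal values only count together if their look-back windows overlap: a tachycardia recorded 36 hours before a fever has already aged out of its 24-hour window when the fever arrives.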

The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert, and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR asking whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.
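
The one-alert-per-stay censoring can be sketched as a simple filter over a stream of scored events. The stream format and function name are hypothetical; the threshold of 4 is the one derived in the Results.

```python
def alerts(events, threshold=4):
    """Yield (encounter_id, timestamp) the FIRST time each encounter's
    score reaches the threshold; later crossings during the same hospital
    stay are suppressed (censored), as described in the text.
    `events` is an iterable of (encounter_id, timestamp, score) tuples."""
    fired = set()
    for enc, ts, score in events:
        if score >= threshold and enc not in fired:
            fired.add(enc)
            yield enc, ts
```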

Implementation of the EWRS

All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).

The Preimplementation (Silent) Period and EWRS Validation

The EWRS was initially activated for a preimplementation silent period (June 6, 2012–September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.

The Postimplementation (Live) Period and Impact Analysis

The EWRS went live September 12, 2012, upon which new admissions triggering the alert would result in a notification and response. Unadjusted analyses using the χ2 test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012–September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013–September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The sepsis mortality index before and after implementation was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.
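
For the dichotomous comparisons, the Pearson χ2 test on a 2×2 table can be computed directly; a pure-Python sketch (1 degree of freedom, no continuity correction, so it may differ slightly from the exact procedure the authors used):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, two-sided P value)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 degree of freedom, P(X > x) = erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Discharge-to-home counts from Table 4: 347/595 (silent) vs 351/545 (live).
stat, p = chi2_2x2(347, 595 - 347, 351, 545 - 351)
```

Applied to the discharge-to-home counts, this uncorrected test gives a P value close to the 0.04 reported in Table 4.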

Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.

The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).

RESULTS

In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest is described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system is described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive rate, sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios using this threshold and our composite outcome were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, in the derivation cohort, and 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively, in our validation cohort.

In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services were similar (Table 1).

Table 1. Descriptive Statistics of the Study Population Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

| | Preimplementation | Postimplementation | P Value |
| --- | --- | --- | --- |
| No. of encounters | 15,567 | 15,526 | |
| No. of alerts | 595 (4%) | 545 (4%) | 0.14 |
| Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04 |
| Female | 298 (50%) | 274 (50%) | 0.95 |
| Race | | | |
| White | 343 (58%) | 312 (57%) | 0.14 |
| Black | 207 (35%) | 171 (31%) | |
| Other | 23 (4%) | 31 (6%) | |
| Unknown | 22 (4%) | 31 (6%) | |
| Admission type | | | |
| Elective | 201 (34%) | 167 (31%) | 0.40 |
| ED | 300 (50%) | 278 (51%) | |
| Transfer | 94 (16%) | 99 (18%) | |
| BMI, kg/m2, median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24 |
| Previous ICU admission | 137 (23%) | 127 (23%) | 0.91 |
| RRT before alert | 27 (5%) | 20 (4%) | 0.46 |
| Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04 |
| Admitting service | | | |
| Medicine | 398 (67%) | 364 (67%) | 0.18 |
| Surgery | 173 (29%) | 169 (31%) | |
| Other | 24 (4%) | 12 (2%) | |
| Service where alert fired | | | |
| Medicine | 391 (66%) | 365 (67%) | 0.18 |
| Surgery | 175 (29%) | 164 (30%) | |
| Other | 29 (5%) | 15 (3%) | |

NOTE: Abbreviations: BMI, body mass index; ED, emergency department; ICU, intensive care unit; IQR, interquartile range; RRT, rapid response team; Y, years.

In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)

In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).

Table 2. Clinical Process Measures Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

| Measure | Preimplementation | Postimplementation | P Value |
| --- | --- | --- | --- |
| No. of alerts | 595 | 545 | |
| 500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01 |
| IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01 |
| IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01 |
| Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01 |
| Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01 |
| Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28 |
| CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65 |
| Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35 |
| Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92 |
| RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01 |
| Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30 |
| AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70 |
| Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58 |
| CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02 |
| CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31 |
| Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02 |

NOTE: Abbreviations: ABD, abdomen; AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; ECG, electrocardiogram; H, hours; IV, intravenous; PO, oral; RBC, red blood cell.
Table 3. Adjusted Analysis for Clinical Process Measures for All Patients and Those Discharged With a Sepsis Diagnosis

| Measure | Unadjusted OR (All Alerted Patients) | Adjusted OR (All Alerted Patients) | Unadjusted OR (Sepsis Code*) | Adjusted OR (Sepsis Code*) |
| --- | --- | --- | --- | --- |
| 500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47) |
| IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09) |
| IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58) |
| Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34) |
| Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20) |
| Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92) |
| BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50) |
| Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25) |
| Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10) |
| Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10) |
| AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20) |
| Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00) |
| CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43) |
| CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79) |
| Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41) |

NOTE: Odds ratios compare the odds of the outcome after versus before implementation of the early warning system. Abbreviations: AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; H, hours; IV, intravenous; PO, oral. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for log‐transformed age, gender, log‐transformed Charlson index at admission, admitting service, hospital, and admission month.

Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).

Table 4. Clinical Outcome Measures Before and After Implementation of the Early Warning and Response System (Hospitals A–C)

| Measure | Preimplementation | Postimplementation | P Value |
| --- | --- | --- | --- |
| No. of alerts | 595 | 545 | |
| Hospital LOS, d, median (IQR) | 10.1 (5.1–19.1) | 9.4 (5.2–18.9) | 0.92 |
| ICU LOS after alert, d, median (IQR) | 3.4 (1.7–7.4) | 3.6 (1.9–6.8) | 0.72 |
| ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06 |
| ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20 |
| ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93 |
| Time to first ICU after alert, h, median (IQR) | 21.3 (4.4–63.9) | 11.0 (2.3–58.7) | 0.22 |
| RRT <6 h after alert | 13 (2%) | 9 (2%) | 0.51 |
| Mortality of all patients | 52 (9%) | 41 (8%) | 0.45 |
| Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19 |
| Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47 |
| Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22 |
| Discharge to home | 347 (58%) | 351 (64%) | 0.04 |
| Disposition location | | | |
| Home | 347 (58%) | 351 (64%) | 0.25 |
| SNF | 89 (15%) | 65 (12%) | |
| Rehab | 24 (4%) | 20 (4%) | |
| LTC | 8 (1%) | 9 (2%) | |
| Other hospital | 16 (3%) | 6 (1%) | |
| Expired | 52 (9%) | 41 (8%) | |
| Hospice IP | 42 (7%) | 31 (6%) | |
| Hospice other | 11 (2%) | 14 (3%) | |
| Other location | 6 (1%) | 8 (1%) | |
| Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02 |
| Sepsis O/E | 1.37 | 1.06 | 0.18 |

NOTE: Abbreviations: H, hours; ICU, intensive care unit; IP, inpatient; IQR, interquartile range; LOS, length of stay; LTC, long‐term care; O/E, observed to expected; Rehab, rehabilitation; RRT, rapid response team; SNF, skilled nursing facility.
Table 5. Adjusted Analysis for Clinical Outcome Measures for All Patients and Those Discharged With a Sepsis Diagnosis

| Measure | Unadjusted Estimate (All Alerted Patients) | Adjusted Estimate (All Alerted Patients) | Unadjusted Estimate (Sepsis Code*) | Adjusted Estimate (Sepsis Code*) |
| --- | --- | --- | --- | --- |
| Hospital LOS, d | 1.01 (0.92–1.11) | 1.02 (0.93–1.12) | 0.99 (0.85–1.15) | 1.00 (0.87–1.16) |
| ICU transfer | 1.49 (0.97–2.29) | 1.65 (1.07–2.55) | 1.61 (0.92–2.84) | 1.82 (1.02–3.25) |
| Time to first ICU transfer after alert, h | 1.17 (0.87–1.57) | 1.23 (0.92–1.66) | 1.21 (0.83–1.75) | 1.31 (0.90–1.90) |
| ICU LOS, d | 1.01 (0.77–1.31) | 0.99 (0.76–1.28) | 0.87 (0.62–1.21) | 0.88 (0.64–1.21) |
| RRT | 0.75 (0.32–1.77) | 0.84 (0.35–2.02) | 0.81 (0.29–2.27) | 0.82 (0.27–2.43) |
| Mortality | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) |
| Mortality within 30 days of alert | 0.73 (0.46–1.16) | 0.87 (0.54–1.40) | 0.59 (0.34–1.04) | 0.69 (0.38–1.26) |
| Mortality or inpatient hospice transfer | 0.82 (0.47–1.41) | 0.78 (0.44–1.41) | 0.67 (0.36–1.25) | 0.65 (0.33–1.29) |
| Discharge to home | 1.29 (1.02–1.64) | 1.18 (0.91–1.52) | 1.36 (0.95–1.95) | 1.22 (0.81–1.84) |
| Sepsis discharge diagnosis | 1.32 (1.04–1.67) | 1.43 (1.10–1.85) | NA | NA |

NOTE: Estimates compare the mean, odds, or hazard of the outcome after versus before implementation of the early warning system. Length-of-stay estimates are coefficients from linear models, time to first ICU transfer after alert is a hazard ratio, and all other estimates are odds ratios. Abbreviations: H, hours; ICU, intensive care unit; LOS, length of stay; NA, not applicable; RRT, rapid response team. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for gender, age, present on admission Charlson comorbidity score, admitting service, hospital, and admission month (June/July or August/September).

In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.

The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).

DISCUSSION

This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to electronically trigger a timely, robust, multidisciplinary bedside clinical evaluation. Compared to the control (silent) period, the EWRS resulted in a marked increase in early sepsis care, transfer to the ICU, and sepsis documentation, as well as suggestions of a decreased sepsis mortality index, decreased mortality, and increased discharge to home, although none of these latter 3 findings reached statistical significance.

Our study is unique in that it was implemented across a multihospital health system, which has identical EHRs, but diverse cultures, populations, staffing, and practice models. In addition, our study includes a preimplementation population similar to the postimplementation population (in terms of setting, month of admission, and adjustment for potential confounders).

Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]

Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.

CONCLUSION

By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.

Acknowledgements

The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer; Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing, and operationalizing the EWRS examined in this study; their assistance in data acquisition; and for advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 16–20, 2013, Washington, DC.

Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.

References
  1. Gaieski DF, Edwards JM, Kallan MJ, Carr BG. Benchmarking the incidence and mortality of severe sepsis in the United States. Crit Care Med. 2013;41(5):1167–1174.
  2. Dellinger RP, Levy MM, Rhodes A, et al. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
  3. Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: results of an international guideline‐based performance improvement program targeting severe sepsis. Crit Care Med. 2010;38(2):367–374.
  4. Otero RM, Nguyen HB, Huang DT, et al. Early goal‐directed therapy in severe sepsis and septic shock revisited: concepts, controversies, and contemporary findings. Chest. 2006;130(5):1579–1595.
  5. Rivers E, Nguyen B, Havstad S, et al. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368–1377.
  6. Whittaker SA, Mikkelsen ME, Gaieski DF, Koshy S, Kean C, Fuchs BD. Severe sepsis cohorts derived from claims‐based strategies appear to be biased toward a more severely ill patient population. Crit Care Med. 2013;41(4):945–953.
  7. Bailey TC, Chen Y, Mao Y, et al. A trial of a real‐time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013;8(5):236–242.
  8. Jones S, Mullally M, Ingleby S, Buist M, Bailey M, Eddleston JM. Bedside electronic capture of clinical observations and automated clinical alerts to improve compliance with an Early Warning Score protocol. Crit Care Resusc. 2011;13(2):83–88.
  9. Nelson JL, Smith BL, Jared JD, Younger JG. Prospective trial of real‐time electronic surveillance to expedite early care of severe sepsis. Ann Emerg Med. 2011;57(5):500–504.
  10. Sawyer AM, Deal EN, Labelle AJ, et al. Implementation of a real‐time computerized sepsis alert in nonintensive care unit patients. Crit Care Med. 2011;39(3):469–473.
  11. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644–1655.
  12. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Crit Care Med. 2003;31(4):1250–1256.
  13. Sinuff T, Kahnamoui K, Cook DJ, Luce JM, Levy MM. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588–1597.
  14. Bing‐Hua YU. Delayed admission to intensive care unit for critically surgical patients is associated with increased mortality. Am J Surg. 2014;208:268–274.
  15. Cardoso LT, Grion CM, Matsuo T, et al. Impact of delayed admission to intensive care units on mortality of critically ill patients: a cohort study. Crit Care. 2011;15(1):R28.
Journal of Hospital Medicine, 10(1), 26–31.

There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]

Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.

Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.

METHODS

Setting and Data Sources

The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.

Development of the Intervention

The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.

To identify at‐risk patients, we used established criteria for severe sepsis, including the systemic inflammatory response syndrome criteria (temperature <36°C or >38°C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4,000 or >12,000 or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction based on a systolic blood pressure <100 mm Hg, and hypoperfusion based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our lab]).[11, 12]
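
As a concrete illustration, the six criteria above can be expressed as a simple scoring function. This is a sketch for clarity only, not the authors' production logic; the parameter names and the handling of missing values are assumptions.

```python
# Illustrative sketch of the six EWRS criteria described above: four SIRS
# criteria plus two organ-dysfunction criteria. Thresholds come from the
# text; parameter names are hypothetical. A missing value (None) simply
# does not contribute to the score.

def ewrs_criteria_count(temp_c=None, heart_rate=None, resp_rate=None,
                        paco2=None, wbc=None, bands_pct=None,
                        sbp=None, lactate=None):
    """Return the number of EWRS criteria met (0-6) at one point in time."""
    score = 0
    # SIRS: temperature <36 C or >38 C
    if temp_c is not None and (temp_c < 36 or temp_c > 38):
        score += 1
    # SIRS: heart rate >90 bpm
    if heart_rate is not None and heart_rate > 90:
        score += 1
    # SIRS: respiratory rate >20 breaths/min or PaCO2 <32 mm Hg
    if ((resp_rate is not None and resp_rate > 20) or
            (paco2 is not None and paco2 < 32)):
        score += 1
    # SIRS: WBC <4,000 or >12,000, or >10% bands
    if ((wbc is not None and (wbc < 4000 or wbc > 12000)) or
            (bands_pct is not None and bands_pct > 10)):
        score += 1
    # Organ dysfunction: systolic blood pressure <100 mm Hg
    if sbp is not None and sbp < 100:
        score += 1
    # Hypoperfusion: serum lactate >2.2 mmol/L
    if lactate is not None and lactate > 2.2:
        score += 1
    return score
```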

To establish a threshold for triggering the system, we used a derivation cohort of patients admitted from October 1, 2011, through October 31, 2011, to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criterion, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could meet at any single time were 0 and 6, respectively. We then categorized patients by the maximum number of criteria achieved and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.
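
The "most recent value within a look-back window" rule can be sketched as follows. The data layout (lists of timestamped observations) is an assumption for illustration, not the study's actual data model.

```python
# Sketch of the look-back rule described above: at any time t, the score
# uses the most recent value of each parameter, looking back 24 hours for
# vital signs and 48 hours for laboratory values.

from datetime import datetime, timedelta

def most_recent_value(observations, now, window_hours):
    """observations: iterable of (timestamp, value) pairs, in any order.
    Return the most recent value recorded within the look-back window
    ending at `now`, or None if nothing falls inside the window."""
    window_start = now - timedelta(hours=window_hours)
    in_window = [(t, v) for t, v in observations if window_start <= t <= now]
    if not in_window:
        return None  # criterion contributes nothing to the score
    return max(in_window, key=lambda tv: tv[0])[1]

VITALS_LOOKBACK_H = 24  # vital signs: 24-hour look-back
LABS_LOOKBACK_H = 48    # laboratory values: 48-hour look-back

now = datetime(2011, 10, 15, 12, 0)
lactates = [(now - timedelta(hours=50), 3.1),  # too old for either window
            (now - timedelta(hours=10), 2.5)]  # most recent, in window
print(most_recent_value(lactates, now, LABS_LOOKBACK_H))  # 2.5
```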

The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert, and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR asking whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.

Implementation of the EWRS

All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).

The Preimplementation (Silent) Period and EWRS Validation

The EWRS was initially activated for a preimplementation silent period (June 6, 2012–September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.

The Postimplementation (Live) Period and Impact Analysis

The EWRS went live September 12, 2012, upon which new admissions triggering the alert would result in a notification and response. Unadjusted analyses using the χ2 test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012–September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013–September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The pre‐ and post‐sepsis mortality index was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.
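
The unadjusted comparison of dichotomous measures can be illustrated with a standard-library χ2 calculation; the study itself used SAS 9.3, so this is only a sketch of the method, shown here with the alert counts reported in Table 1.

```python
# Stdlib-only Pearson chi-square test (1 df, no continuity correction) for
# a 2x2 table [[a, b], [c, d]], as used for the unadjusted pre/post
# comparisons of dichotomous variables. Illustrative only; not the SAS
# code used in the study.

import math

def chi_square_2x2(a, b, c, d):
    """Return (statistic, two-sided p value) for the 2x2 table."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, row_tot, col_tot in ((a, row1, col1), (b, row1, col2),
                                  (c, row2, col1), (d, row2, col2)):
        expected = row_tot * col_tot / n
        stat += (obs - expected) ** 2 / expected
    # Survival function of a chi-square with 1 df via the error function:
    # P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Alert vs no-alert counts, preimplementation vs postimplementation
stat, p = chi_square_2x2(595, 15567 - 595, 545, 15526 - 545)
print(round(p, 2))  # ~0.14, consistent with the P value reported in Table 1
```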

Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.

The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).

RESULTS

In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest is described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score of ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system are described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive rate, sensitivity, specificity, positive and negative predictive values, and positive and negative likelihood ratios using this threshold and our composite outcome in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, and in our validation cohort were 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively.
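
These screening statistics follow from the standard 2×2 definitions. The sketch below shows the arithmetic; the cell counts are hypothetical (the full confusion-matrix cells are not given in the text), so its outputs are not the study's reported values.

```python
# Standard screening-test statistics from a 2x2 confusion matrix:
# tp/fp/fn/tn = true positive, false positive, false negative, true
# negative. The counts used below are hypothetical illustrations.

def screening_stats(tp, fp, fn, tn):
    """Return screen-positive rate, sensitivity, specificity, PPV, NPV,
    and positive/negative likelihood ratios."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "screen_positive_rate": (tp + fp) / n,  # fraction flagged
        "sensitivity": sens,                    # TP / (TP + FN)
        "specificity": spec,                    # TN / (TN + FP)
        "ppv": tp / (tp + fp),                  # precision of a trigger
        "npv": tn / (tn + fn),
        "lr_positive": sens / (1 - spec),
        "lr_negative": (1 - sens) / spec,
    }

# Hypothetical example: 180 of 4575 patients flagged, 46 of them true positives
stats = screening_stats(tp=46, fp=134, fn=240, tn=4155)
```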

In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services was similar (Table 1).

Descriptive Statistics of the Study Population Before and After Implementation of the Early Warning and Response System

Hospitals A–C | Preimplementation | Postimplementation | P Value
No. of encounters | 15,567 | 15,526 |
No. of alerts | 595 (4%) | 545 (4%) | 0.14
Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04
Female | 298 (50%) | 274 (50%) | 0.95
Race | | |
  White | 343 (58%) | 312 (57%) | 0.14
  Black | 207 (35%) | 171 (31%) |
  Other | 23 (4%) | 31 (6%) |
  Unknown | 22 (4%) | 31 (6%) |
Admission type | | |
  Elective | 201 (34%) | 167 (31%) | 0.40
  ED | 300 (50%) | 278 (51%) |
  Transfer | 94 (16%) | 99 (18%) |
BMI, kg/m2, median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24
Previous ICU admission | 137 (23%) | 127 (23%) | 0.91
RRT before alert | 27 (5%) | 20 (4%) | 0.46
Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04
Admitting service | | |
  Medicine | 398 (67%) | 364 (67%) | 0.18
  Surgery | 173 (29%) | 169 (31%) |
  Other | 24 (4%) | 12 (2%) |
Service where alert fired | | |
  Medicine | 391 (66%) | 365 (67%) | 0.18
  Surgery | 175 (29%) | 164 (30%) |
  Other | 29 (5%) | 15 (3%) |

NOTE: Abbreviations: BMI, body mass index; ED, emergency department; ICU, intensive care unit; IQR, interquartile range; RRT, rapid response team; Y, years.

In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)

In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).

Clinical Process Measures Before and After Implementation of the Early Warning and Response System

Hospitals A–C | Preimplementation | Postimplementation | P Value
No. of alerts | 595 | 545 |
500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01
IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01
IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01
Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01
Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01
Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28
CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65
Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35
Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92
RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01
Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30
AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70
Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58
CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02
CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31
Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02

NOTE: Abbreviations: ABD, abdomen; AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; ECG, electrocardiogram; H, hours; IV, intravenous; PO, oral; RBC, red blood cell.
Adjusted Analysis for Clinical Process Measures for All Patients and Those Discharged With a Sepsis Diagnosis

Measure | All Alerted Patients: Unadjusted Odds Ratio | Adjusted Odds Ratio | Discharged With Sepsis Code*: Unadjusted Odds Ratio | Adjusted Odds Ratio
500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47)
IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09)
IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58)
Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34)
Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20)
Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92)
BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50)
Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25)
Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10)
Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10)
AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20)
Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00)
CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43)
CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79)
Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41)

NOTE: Odds ratios compare the odds of the outcome after versus before implementation of the early warning system. Abbreviations: AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; H, hours; IV, intravenous; PO, oral. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for log‐transformed age, gender, log‐transformed Charlson index at admission, admitting service, hospital, and admission month.

Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).

Clinical Outcome Measures Before and After Implementation of the Early Warning and Response System

Hospitals A–C | Preimplementation | Postimplementation | P Value
No. of alerts | 595 | 545 |
Hospital LOS, d, median (IQR) | 10.1 (5.1–19.1) | 9.4 (5.2–18.9) | 0.92
ICU LOS after alert, d, median (IQR) | 3.4 (1.7–7.4) | 3.6 (1.9–6.8) | 0.72
ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06
ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20
ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93
Time to first ICU after alert, h, median (IQR) | 21.3 (4.4–63.9) | 11.0 (2.3–58.7) | 0.22
RRT <6 h after alert | 13 (2%) | 9 (2%) | 0.51
Mortality of all patients | 52 (9%) | 41 (8%) | 0.45
Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19
Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47
Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22
Discharge to home | 347 (58%) | 351 (64%) | 0.04
Disposition location | | |
  Home | 347 (58%) | 351 (64%) | 0.25
  SNF | 89 (15%) | 65 (12%) |
  Rehab | 24 (4%) | 20 (4%) |
  LTC | 8 (1%) | 9 (2%) |
  Other hospital | 16 (3%) | 6 (1%) |
  Expired | 52 (9%) | 41 (8%) |
  Hospice IP | 42 (7%) | 31 (6%) |
  Hospice other | 11 (2%) | 14 (3%) |
  Other location | 6 (1%) | 8 (1%) |
Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02
Sepsis O/E | 1.37 | 1.06 | 0.18

NOTE: Abbreviations: H, hours; ICU, intensive care unit; IP, inpatient; IQR, interquartile range; LOS, length of stay; LTC, long‐term care; O/E, observed to expected; Rehab, rehabilitation; RRT, rapid response team; SNF, skilled nursing facility.
Adjusted Analysis for Clinical Outcome Measures for All Patients and Those Discharged With a Sepsis Diagnosis

Measure | All Alerted Patients: Unadjusted Estimate | Adjusted Estimate | Discharged With Sepsis Code*: Unadjusted Estimate | Adjusted Estimate
Hospital LOS, d | 1.01 (0.92–1.11) | 1.02 (0.93–1.12) | 0.99 (0.85–1.15) | 1.00 (0.87–1.16)
ICU transfer | 1.49 (0.97–2.29) | 1.65 (1.07–2.55) | 1.61 (0.92–2.84) | 1.82 (1.02–3.25)
Time to first ICU transfer after alert, h | 1.17 (0.87–1.57) | 1.23 (0.92–1.66) | 1.21 (0.83–1.75) | 1.31 (0.90–1.90)
ICU LOS, d | 1.01 (0.77–1.31) | 0.99 (0.76–1.28) | 0.87 (0.62–1.21) | 0.88 (0.64–1.21)
RRT | 0.75 (0.32–1.77) | 0.84 (0.35–2.02) | 0.81 (0.29–2.27) | 0.82 (0.27–2.43)
Mortality | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) | 0.85 (0.55–1.30) | 0.98 (0.63–1.53)
Mortality within 30 days of alert | 0.73 (0.46–1.16) | 0.87 (0.54–1.40) | 0.59 (0.34–1.04) | 0.69 (0.38–1.26)
Mortality or inpatient hospice transfer | 0.82 (0.47–1.41) | 0.78 (0.44–1.41) | 0.67 (0.36–1.25) | 0.65 (0.33–1.29)
Discharge to home | 1.29 (1.02–1.64) | 1.18 (0.91–1.52) | 1.36 (0.95–1.95) | 1.22 (0.81–1.84)
Sepsis discharge diagnosis | 1.32 (1.04–1.67) | 1.43 (1.10–1.85) | NA | NA

NOTE: Estimates compare the mean, odds, or hazard of the outcome after versus before implementation of the early warning system. Abbreviations: H, hours; ICU, intensive care unit; LOS, length of stay; NA, not applicable; RRT, rapid response team. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for gender, age, present on admission Charlson comorbidity score, admit service, hospital, and admission month (June+July or August+Sep). Coefficient. Odds ratio. Hazard ratio.

In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.

The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).

DISCUSSION

This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to trigger electronically a timely, robust, multidisciplinary bedside clinical evaluation. Compared to the control (silent) period, the EWRS resulted in a marked increase in early sepsis care, transfer to the ICU, and sepsis documentation, and an indication of a decreased sepsis mortality index and mortality, and increased discharge to home, although none of these latter 3 findings reached statistical significance.

Our study is unique in that it was implemented across a multihospital health system, which has identical EHRs, but diverse cultures, populations, staffing, and practice models. In addition, our study includes a preimplementation population similar to the postimplementation population (in terms of setting, month of admission, and adjustment for potential confounders).

Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]

Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.

CONCLUSION

By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.

Acknowledgements

The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer; Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing and operationalizing the EWRS examined in this study; their assistance in data acquisition; and for advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 16–20, 2013, Washington, DC.

Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.

There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]

Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.

Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.

METHODS

Setting and Data Sources

The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.

Development of the Intervention

The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.

To identify at‐risk patients, we used established criteria for severe sepsis, including the systemic inflammatory response syndrome criteria (temperature <36C or >38C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4000 or >12,000 or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction based on a systolic blood pressure <100 mm Hg, and hypoperfusion based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our lab]).[11, 12]

To establish a threshold for triggering the system, a derivation cohort was used and defined as patients admitted between October 1, 2011 to October 31, 2011 1 to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criteria, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could achieve at any single time was 0 and 6, respectively. We then categorized patients by the maximum number of criteria achieved and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.

The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert, and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR asking whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.

Implementation of the EWRS

All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).

The Preimplementation (Silent) Period and EWRS Validation

The EWRS was initially activated for a preimplementation silent period (June 6, 2012September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.

The Postimplementation (Live) Period and Impact Analysis

The EWRS went live September 12, 2012, upon which new admissions triggering the alert would result in a notification and response. Unadjusted analyses using the [2] test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The pre‐ and post‐sepsis mortality index was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.

Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.

The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).

RESULTS

In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest are described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system is described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive, sensitivity, specificity, and positive and negative predictive values and likelihood ratios using this threshold and our composite outcome in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, and in our validation cohort were 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively.

In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services were similar (Table 1).

Descriptive Statistics of the Study Population Before and After Implementation of the Early Warning and Response System
Hospitals A–C
 | Preimplementation | Postimplementation | P Value
  • NOTE: Abbreviations: BMI, body mass index; ED, emergency department; ICU, intensive care unit; IQR, interquartile range; RRT, rapid response team; Y, years.

No. of encounters | 15,567 | 15,526 |
No. of alerts | 595 (4%) | 545 (4%) | 0.14
Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04
Female | 298 (50%) | 274 (50%) | 0.95
Race |  |  | 0.14
  White | 343 (58%) | 312 (57%) |
  Black | 207 (35%) | 171 (31%) |
  Other | 23 (4%) | 31 (6%) |
  Unknown | 22 (4%) | 31 (6%) |
Admission type |  |  | 0.40
  Elective | 201 (34%) | 167 (31%) |
  ED | 300 (50%) | 278 (51%) |
  Transfer | 94 (16%) | 99 (18%) |
BMI, kg/m2, median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24
Previous ICU admission | 137 (23%) | 127 (23%) | 0.91
RRT before alert | 27 (5%) | 20 (4%) | 0.46
Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04
Admitting service |  |  | 0.18
  Medicine | 398 (67%) | 364 (67%) |
  Surgery | 173 (29%) | 169 (31%) |
  Other | 24 (4%) | 12 (2%) |
Service where alert fired |  |  | 0.18
  Medicine | 391 (66%) | 365 (67%) |
  Surgery | 175 (29%) | 164 (30%) |
  Other | 29 (5%) | 15 (3%) |

In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)

In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).

Clinical Process Measures Before and After Implementation of the Early Warning and Response System
Hospitals A–C
 | Preimplementation | Postimplementation | P Value
  • NOTE: Abbreviations: ABD, abdomen; AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; ECG, electrocardiogram; H, hours; IV, intravenous; PO, oral; RBC, red blood cell.

No. of alerts | 595 | 545 |
500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01
IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01
IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01
Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01
Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01
Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28
CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65
Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35
Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92
RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01
Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30
AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70
Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58
CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02
CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31
Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02
Adjusted Analysis for Clinical Process Measures for All Patients and Those Discharged With a Sepsis Diagnosis
 | All Alerted Patients | Discharged With Sepsis Code*
 | Unadjusted Odds Ratio | Adjusted Odds Ratio | Unadjusted Odds Ratio | Adjusted Odds Ratio
  • NOTE: Odds ratios compare the odds of the outcome after versus before implementation of the early warning system. Abbreviations: AV, atrioventricular; BMP, basic metabolic panel; CBC, complete blood count; CT, computed tomography; CXR, chest radiograph; H, hours; IV, intravenous; PO, oral. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for log‐transformed age, gender, log‐transformed Charlson index at admission, admitting service, hospital, and admission month.

500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47)
IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09)
IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58)
Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34)
Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20)
Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92)
BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50)
Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25)
Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10)
Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10)
AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20)
Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00)
CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43)
CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79)
Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41)

Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).

Clinical Outcome Measures Before and After Implementation of the Early Warning and Response System
Hospitals A–C
 | Preimplementation | Postimplementation | P Value
  • NOTE: Abbreviations: H, hours; ICU, intensive care unit; IP, inpatient; IQR, interquartile range; LOS, length of stay; LTC, long‐term care; O/E, observed to expected; Rehab, rehabilitation; RRT, rapid response team; SNF, skilled nursing facility.

No. of alerts | 595 | 545 |
Hospital LOS, d, median (IQR) | 10.1 (5.1–19.1) | 9.4 (5.2–18.9) | 0.92
ICU LOS after alert, d, median (IQR) | 3.4 (1.7–7.4) | 3.6 (1.9–6.8) | 0.72
ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06
ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20
ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93
Time to first ICU after alert, h, median (IQR) | 21.3 (4.4–63.9) | 11.0 (2.3–58.7) | 0.22
RRT <6 h after alert | 13 (2%) | 9 (2%) | 0.51
Mortality of all patients | 52 (9%) | 41 (8%) | 0.45
Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19
Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47
Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22
Discharge to home | 347 (58%) | 351 (64%) | 0.04
Disposition location |  |  | 0.25
  Home | 347 (58%) | 351 (64%) |
  SNF | 89 (15%) | 65 (12%) |
  Rehab | 24 (4%) | 20 (4%) |
  LTC | 8 (1%) | 9 (2%) |
  Other hospital | 16 (3%) | 6 (1%) |
  Expired | 52 (9%) | 41 (8%) |
  Hospice IP | 42 (7%) | 31 (6%) |
  Hospice other | 11 (2%) | 14 (3%) |
  Other location | 6 (1%) | 8 (1%) |
Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02
Sepsis O/E | 1.37 | 1.06 | 0.18
Adjusted Analysis for Clinical Outcome Measures for All Patients and Those Discharged With a Sepsis Diagnosis
 | All Alerted Patients | Discharged With Sepsis Code*
 | Unadjusted Estimate | Adjusted Estimate | Unadjusted Estimate | Adjusted Estimate
  • NOTE: Estimates compare the mean, odds, or hazard of the outcome after versus before implementation of the early warning system: length‐of‐stay estimates are coefficients from log‐linear models, dichotomous outcomes are odds ratios, and time to first ICU transfer is a hazard ratio. Abbreviations: H, hours; ICU, intensive care unit; LOS, length of stay; NA, not applicable; RRT, rapid response team. *Sepsis definition based on International Classification of Diseases, 9th Revision diagnosis at discharge (790.7, 995.94, 995.92, 995.90, 995.91, 995.93, 785.52). Adjusted for gender, age, present on admission Charlson comorbidity score, admit service, hospital, and admission month (June+July or August+September).

Hospital LOS, d | 1.01 (0.92–1.11) | 1.02 (0.93–1.12) | 0.99 (0.85–1.15) | 1.00 (0.87–1.16)
ICU transfer | 1.49 (0.97–2.29) | 1.65 (1.07–2.55) | 1.61 (0.92–2.84) | 1.82 (1.02–3.25)
Time to first ICU transfer after alert, h | 1.17 (0.87–1.57) | 1.23 (0.92–1.66) | 1.21 (0.83–1.75) | 1.31 (0.90–1.90)
ICU LOS, d | 1.01 (0.77–1.31) | 0.99 (0.76–1.28) | 0.87 (0.62–1.21) | 0.88 (0.64–1.21)
RRT | 0.75 (0.32–1.77) | 0.84 (0.35–2.02) | 0.81 (0.29–2.27) | 0.82 (0.27–2.43)
Mortality | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) | 0.85 (0.55–1.30) | 0.98 (0.63–1.53)
Mortality within 30 days of alert | 0.73 (0.46–1.16) | 0.87 (0.54–1.40) | 0.59 (0.34–1.04) | 0.69 (0.38–1.26)
Mortality or inpatient hospice transfer | 0.82 (0.47–1.41) | 0.78 (0.44–1.41) | 0.67 (0.36–1.25) | 0.65 (0.33–1.29)
Discharge to home | 1.29 (1.02–1.64) | 1.18 (0.91–1.52) | 1.36 (0.95–1.95) | 1.22 (0.81–1.84)
Sepsis discharge diagnosis | 1.32 (1.04–1.67) | 1.43 (1.10–1.85) | NA | NA

In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.

The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).

DISCUSSION

This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to electronically trigger a timely, robust, multidisciplinary bedside clinical evaluation. Compared to the control (silent) period, the EWRS resulted in a marked increase in early sepsis care, transfer to the ICU, and sepsis documentation. It was also associated with a lower sepsis mortality index, lower mortality, and more frequent discharge to home, although none of these latter 3 findings reached statistical significance.

Our study is unique in that it was implemented across a multihospital health system, which has identical EHRs, but diverse cultures, populations, staffing, and practice models. In addition, our study includes a preimplementation population similar to the postimplementation population (in terms of setting, month of admission, and adjustment for potential confounders).

Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]

Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.

CONCLUSION

By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.

Acknowledgements

The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer; Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing, and operationalizing the EWRS examined in this study; their assistance in data acquisition; and for advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 16–20, 2013, Washington, DC.

Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.

References
  1. Gaieski DF, Edwards JM, Kallan MJ, Carr BG. Benchmarking the incidence and mortality of severe sepsis in the United States. Crit Care Med. 2013;41(5):1167–1174.
  2. Dellinger RP, Levy MM, Rhodes A, et al. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
  3. Levy MM, Dellinger RP, Townsend SR, et al. The Surviving Sepsis Campaign: results of an international guideline‐based performance improvement program targeting severe sepsis. Crit Care Med. 2010;38(2):367–374.
  4. Otero RM, Nguyen HB, Huang DT, et al. Early goal‐directed therapy in severe sepsis and septic shock revisited: concepts, controversies, and contemporary findings. Chest. 2006;130(5):1579–1595.
  5. Rivers E, Nguyen B, Havstad S, et al. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368–1377.
  6. Whittaker SA, Mikkelsen ME, Gaieski DF, Koshy S, Kean C, Fuchs BD. Severe sepsis cohorts derived from claims‐based strategies appear to be biased toward a more severely ill patient population. Crit Care Med. 2013;41(4):945–953.
  7. Bailey TC, Chen Y, Mao Y, et al. A trial of a real‐time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013;8(5):236–242.
  8. Jones S, Mullally M, Ingleby S, Buist M, Bailey M, Eddleston JM. Bedside electronic capture of clinical observations and automated clinical alerts to improve compliance with an Early Warning Score protocol. Crit Care Resusc. 2011;13(2):83–88.
  9. Nelson JL, Smith BL, Jared JD, Younger JG. Prospective trial of real‐time electronic surveillance to expedite early care of severe sepsis. Ann Emerg Med. 2011;57(5):500–504.
  10. Sawyer AM, Deal EN, Labelle AJ, et al. Implementation of a real‐time computerized sepsis alert in nonintensive care unit patients. Crit Care Med. 2011;39(3):469–473.
  11. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644–1655.
  12. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Crit Care Med. 2003;31(4):1250–1256.
  13. Sinuff T, Kahnamoui K, Cook DJ, Luce JM, Levy MM. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588–1597.
  14. Bing‐Hua YU. Delayed admission to intensive care unit for critically surgical patients is associated with increased mortality. Am J Surg. 2014;208:268–274.
  15. Cardoso LT, Grion CM, Matsuo T, et al. Impact of delayed admission to intensive care units on mortality of critically ill patients: a cohort study. Crit Care. 2011;15(1):R28.
Issue
Journal of Hospital Medicine - 10(1)
Page Number
26-31
Display Headline
Development, implementation, and impact of an automated early warning and response system for sepsis
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Craig A Umscheid, MD, Assistant Professor of Medicine and Epidemiology, Director, Center for Evidence‐based Practice, Medical Director, Clinical Decision Support, Penn Medicine, 3535 Market Street, Mezzanine, Suite 50, Philadelphia, PA 19104; Telephone: 215‐349‐8098; Fax: 215‐349‐5829; E‐mail: craig.umscheid@uphs.upenn.edu

Patients at Risk for Readmission

The readmission risk flag: Using the electronic health record to automatically identify patients at risk for 30‐day readmission

Unplanned hospital readmissions are common, costly, and potentially avoidable. Approximately 20% of Medicare patients are readmitted within 30 days of discharge.[1] Readmission rates are estimated to be similarly high in other population subgroups,[2, 3, 4] with approximately 80% of patients[1, 5, 6] readmitted to the original discharging hospital. A recent systematic review suggested that 27% of readmissions may be preventable.[7]

Hospital readmissions have increasingly been viewed as a correctable marker of poor quality care and have been adopted by a number of organizations as quality indicators.[8, 9, 10] As a result, hospitals have important internal and external motivations to address readmissions. Identification of patients at high risk for readmissions may be an important first step toward preventing them. In particular, readmission risk assessment could be used to help providers target the delivery of resource‐intensive transitional care interventions[11, 12, 13, 14] to patients with the greatest needs. Such an approach is appealing because it allows hospitals to focus scarce resources where the impact may be greatest and provides a starting point for organizations struggling to develop robust models of transitional care delivery.

Electronic health records (EHRs) may prove to be an important component of strategies designed to risk stratify patients at the point of care. Algorithms integrated into the EHR that automatically generate risk predictions have the potential to (1) improve provider time efficiency by automating the prediction process, (2) improve consistency of data collection and risk score calculation, (3) increase adoption through improved usability, and (4) provide clinically important information in real‐time to all healthcare team members caring for a hospitalized patient.

We thus sought to derive a predictive model for 30‐day readmissions using data reliably present in our EHR at the time of admission, and integrate this predictive model into our hospital's EHR to create an automated prediction tool that identifies on admission patients at high risk for readmission within 30 days of discharge. In addition, we prospectively validated this model using the 12‐month period after implementation and examined the impact on readmissions.

METHODS

Setting

The University of Pennsylvania Health System (UPHS) includes 3 hospitals, with a combined capacity of over 1500 beds and 70,000 annual admissions. All hospitals currently utilize Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL) as their EHR. The study sample included all adult admissions to any of the 3 UPHS hospitals during the study period. Admissions to short procedure, rehabilitation, and hospice units were excluded. The study received expedited approval and a HIPAA waiver from the University of Pennsylvania institutional review board.

Development of Predictive Model

The UPHS Center for Evidence‐based Practice[15, 16] performed a systematic review to identify factors associated with hospital readmission within 30 days of discharge. We then examined the data available from our hospital EHR at the time of admission for those factors identified in the review. Using different threshold values and look‐back periods, we developed and tested 30 candidate prediction models using these variables alone and in combination (Table 1). Prediction models were evaluated using 24 months of historical data between August 1, 2009 and August 1, 2011.

Implementation

An automated readmission risk flag was then integrated into the EHR. Patients classified as being at high risk for readmission with the automated prediction model were flagged in the EHR on admission (Figure 1A). The flag can be double‐clicked to display a separate screen with information relevant to discharge planning including inpatient and emergency department (ED) visits in the prior 12 months, as well as information about the primary team, length of stay, and admitting problem associated with those admissions (Figure 1B). The prediction model was integrated into our EHR using Arden Syntax for Medical Logic Modules.[17] The readmission risk screen involved presenting the provider with a new screen and was thus developed in Microsoft .NET using C# and Windows Forms (Microsoft Corp., Redmond, WA).

Figure 1
(A) Screenshot of the electronic health record (EHR) with the readmission risk flag implemented and visible in the ninth column of the patient list. (B) A new screen with patient‐specific information relevant to discharge planning can be accessed within the EHR by double‐clicking a patient's risk flag.

The flag was visible on the patient lists of all providers who utilized the EHR. This included but was not limited to nurses, social workers, unit pharmacists, and physicians. At the time of implementation, educational events regarding the readmission risk flag were provided in forums targeting administrators, pharmacists, social workers, and housestaff. Information about the flag and recommendations for use were distributed through emails and broadcast screensaver messages disseminated throughout the inpatient units of the health system. Providers were asked to pay special attention to discharge planning for patients triggering the readmission risk flag, including medication reconciliation by pharmacists for these patients prior to discharge, and arrangement of available home services by social workers.

The risk flag was 1 of 4 classes of interventions developed and endorsed by the health system in its efforts to reduce readmissions. Besides risk stratification, the other classes were: interdisciplinary rounding, patient education, and discharge communication. None of the interventions alone were expected to decrease readmissions, but as all 4 classes of interventions were implemented and performed routinely, the expectation was that they would work in concert to reduce readmissions.

Analysis

The primary outcome was all‐cause hospital readmissions in the healthcare system within 30 days of discharge. Although this outcome is commonly used both in the literature and as a quality metric, significant debate persists as to the appropriateness of this metric.[18] Many of the factors driving 30‐day readmissions may be dependent on factors outside of the discharging hospital's control and it has been argued that nearer‐term, nonelective readmission rates may provide a more meaningful quality metric.[18] Seven‐day unplanned readmissions were thus used as a secondary outcome measure for this study.

Sensitivity, specificity, predictive value, C statistic, F score (the harmonic mean of positive predictive value and sensitivity),[19] and screen‐positive rate were calculated for each of the 30 prediction models evaluated using the historical data. The prediction model with the best balance of F score and screen‐positive rate was selected as the prediction model to be integrated into the EHR. Prospective validation of the selected prediction model was performed using the 12‐month period following implementation of the risk flag (September 2011–September 2012).
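The F score used to rank candidate models is the harmonic mean of positive predictive value and sensitivity; for the values reported below for the selected model (sensitivity 40%, PPV 31%), it works out to roughly 0.35. A one-line sketch:

```python
def f_score(ppv, sensitivity):
    """F score as defined in the text: the harmonic mean of positive
    predictive value and sensitivity."""
    return 2 * ppv * sensitivity / (ppv + sensitivity)

# Selected single-risk-factor model: PPV 0.31, sensitivity 0.40.
selected = f_score(0.31, 0.40)  # ~0.35
```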

To assess the impact of the automated prediction model on monthly readmission rate, we used the 24‐month period immediately before and the 12‐month period immediately after implementation of the readmission risk flag. Segmented regression analysis was performed testing for changes in level and slope of readmission rates between preimplementation and postimplementation time periods. This quasiexperimental interrupted time series methodology[20] allows us to control for secular trends in readmission rates and to assess the preimplementation trend (secular trend), the difference in rates immediately before and after the implementation (immediate effect), and the postimplementation change over time (sustained effect). We used Cochrane‐Orcutt estimation[21] to correct for serial autocorrelation.
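Written out, the segmented regression described above takes the standard interrupted time series form (the variable names here are illustrative, not taken from the authors' Stata code):

```latex
Y_t = \beta_0 + \beta_1\, t + \beta_2\, \mathrm{post}_t + \beta_3\,(t - t_0)\,\mathrm{post}_t + \varepsilon_t
```

where \(Y_t\) is the monthly readmission rate, \(t\) the month index, \(t_0\) the implementation month, and \(\mathrm{post}_t\) an indicator equal to 1 from implementation onward; \(\beta_1\) estimates the secular (preimplementation) trend, \(\beta_2\) the immediate level change at implementation, and \(\beta_3\) the sustained change in slope afterward. Cochrane‐Orcutt estimation corrects for serial autocorrelation by modeling the errors as first‐order autoregressive, \(\varepsilon_t = \rho\,\varepsilon_{t-1} + u_t\).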

All analyses were performed using Stata 12.1 software (Stata Corp, College Station, TX).

RESULTS

Predictors of Readmission

Our systematic review of the literature identified several patient and healthcare utilization patterns predictive of 30‐day readmission risk. Utilization factors included length of stay, number of prior admissions, previous 30‐day readmissions, and previous ED visits. Patient characteristics included number of comorbidities, living alone, and payor. Evidence was inconsistent regarding threshold values for these variables.

Many variables readily available in our EHR were either found by the systematic review not to be reliably predictive of 30‐day readmission (including age and gender) or were not readily or reliably available on admission (including length of stay and payor). At the time of implementation, our EHR did not include vital sign or nursing assessment variables, so these were not considered for inclusion in our model.

Of the available variables, 3 were consistently predictive and available in the EHR at the time of patient admission: prior hospital admission, emergency department visit, and 30‐day readmission within UPHS. We then developed 30 candidate prediction models using these variables alone and in combination, at thresholds of ≥1, ≥2, and ≥3 prior admissions, ED visits, and 30‐day readmissions in the 6 and 12 months preceding the index visit (Table 1).
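Each candidate rule reduces to a count of qualifying events in a lookback window before the index admission. A minimal sketch of the triggering logic (function and argument names are ours, not the production Medical Logic Module):

```python
# Illustrative triggering logic for a candidate rule such as
# ">=2 inpatient admissions in the prior 12 months". Names and defaults
# are our assumptions, not the health system's production code.
from datetime import datetime, timedelta

def triggers_rule(prior_admissions, index_admission, threshold=2,
                  lookback_days=365):
    """Return True if enough prior admissions fall in the lookback window."""
    window_start = index_admission - timedelta(days=lookback_days)
    recent = [a for a in prior_admissions
              if window_start <= a < index_admission]
    return len(recent) >= threshold
```

A patient admitted September 1, 2011 with prior admissions in January and June 2011 would trigger the ≥2-in-12-months rule, while an admission from 2009 would fall outside the window and not count.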

Development and Validation

We used 24 months of retrospective data, which included 120,396 discharges with 17,337 thirty‐day readmissions (14.4% 30‐day all‐cause readmission rate), to test the candidate prediction models. A single risk factor, ≥2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%) (Table 1).
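The selection criterion balanced F score against the screen-positive rate. One mechanical proxy for that balancing (our illustration; the authors describe a subjective trade-off), using values from Table 1's 12-month lookback, is to cap the flagged fraction at a workable level and maximize F score among the remaining rules:

```python
# Candidate rules with F score and screen-positive rate taken from
# Table 1 (12-month lookback). The cap-then-maximize criterion below is
# our illustrative proxy, not the authors' actual selection procedure.
CANDIDATES = {
    "admissions >= 1": {"f_score": 0.340, "screen_positive": 0.36},
    "admissions >= 2": {"f_score": 0.354, "screen_positive": 0.18},
    "30-day readmission >= 1": {"f_score": 0.353, "screen_positive": 0.20},
    "admissions >= 2 & 30-day >= 1": {"f_score": 0.341, "screen_positive": 0.14},
}

def pick_rule(candidates, max_screen_positive=0.20):
    """Among rules flagging a workable fraction, take the best F score."""
    feasible = {name: stats for name, stats in candidates.items()
                if stats["screen_positive"] <= max_screen_positive}
    return max(feasible, key=lambda name: feasible[name]["f_score"])
```

Under these assumptions, `pick_rule(CANDIDATES)` returns "admissions >= 2", the rule adopted in the study.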

Retrospective and Prospective Evaluation of Prediction Models for 30‐Day All‐Cause Readmissions
Rule | Sensitivity | Specificity | C Statistic | PPV | NPV | Screen Positive | F Score

NOTE: Abbreviations: 30‐day, prior 30‐day readmission; Admit, inpatient hospital admission; ED, emergency room visit; NPV, negative predictive value; PPV, positive predictive value. a Optimum prediction model.

Retrospective Evaluation of Prediction Rules

Lookback period: 6 months
Prior Admissions
≥1 | 53% | 74% | 0.640 | 26% | 91% | 30% | 0.350
≥2 | 32% | 90% | 0.610 | 35% | 89% | 13% | 0.333
≥3 | 20% | 96% | 0.578 | 44% | 88% | 7% | 0.274
Prior ED Visits
≥1 | 31% | 81% | 0.558 | 21% | 87% | 21% | 0.252
≥2 | 13% | 93% | 0.532 | 25% | 87% | 8% | 0.172
≥3 | 7% | 97% | 0.519 | 27% | 86% | 4% | 0.111
Prior 30‐day Readmissions
≥1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.347
≥2 | 21% | 95% | 0.582 | 43% | 88% | 7% | 0.284
≥3 | 13% | 98% | 0.555 | 53% | 87% | 4% | 0.208
Combined Rules
Admit≥1 & ED≥1 | 22% | 92% | 0.568 | 31% | 88% | 10% | 0.255
Admit≥2 & ED≥1 | 15% | 96% | 0.556 | 40% | 87% | 5% | 0.217
Admit≥1 & 30‐day≥1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.346
Admit≥2 & 30‐day≥1 | 29% | 92% | 0.603 | 37% | 89% | 11% | 0.324
30‐day≥1 & ED≥1 | 17% | 95% | 0.559 | 37% | 87% | 6% | 0.229
30‐day≥1 & ED≥2 | 8% | 98% | 0.527 | 40% | 86% | 3% | 0.132

Lookback period: 12 months
Prior Admissions
≥1 | 60% | 68% | 0.593 | 24% | 91% | 36% | 0.340
≥2a | 40% | 85% | 0.624 | 31% | 89% | 18% | 0.354
≥3 | 28% | 92% | 0.600 | 37% | 88% | 11% | 0.318
Prior ED Visits
≥1 | 38% | 74% | 0.560 | 20% | 88% | 28% | 0.260
≥2 | 20% | 88% | 0.544 | 23% | 87% | 13% | 0.215
≥3 | 8% | 96% | 0.523 | 27% | 86% | 4% | 0.126
Prior 30‐day Readmissions
≥1 | 43% | 84% | 0.630 | 30% | 90% | 20% | 0.353
≥2 | 24% | 94% | 0.592 | 41% | 88% | 9% | 0.305
≥3 | 11% | 98% | 0.548 | 54% | 87% | 3% | 0.186
Combined Rules
Admit≥1 & ED≥1 | 29% | 87% | 0.580 | 27% | 88% | 15% | 0.281
Admit≥2 & ED≥1 | 22% | 93% | 0.574 | 34% | 88% | 9% | 0.266
Admit≥1 & 30‐day≥1 | 42% | 84% | 0.630 | 30% | 90% | 14% | 0.353
Admit≥2 & 30‐day≥1 | 34% | 89% | 0.615 | 34% | 89% | 14% | 0.341
30‐day≥1 & ED≥1 | 21% | 93% | 0.569 | 35% | 88% | 9% | 0.261
30‐day≥1 & ED≥2 | 13% | 96% | 0.545 | 37% | 87% | 5% | 0.187

Prospective Evaluation of Prediction Rule
30‐Day All‐Cause | 39% | 84% | 0.614 | 30% | 89% | 18% | 0.339

Prospective validation of the prediction model was performed using the 12‐month period directly following readmission risk flag implementation. During this period, the 30‐day all‐cause readmission rate was 15.1%. Sensitivity (39%), positive predictive value (30%), and proportion of patients flagged (18%) were consistent with the values derived from the retrospective data, supporting the reproducibility and predictive stability of the chosen risk prediction model (Table 1). The C statistic of the model was also consistent between the retrospective and prospective datasets (0.62 and 0.61, respectively).

Readmission Rates

The mean 30‐day all‐cause readmission rate for the 24‐month period prior to the intervention was 14.4%, whereas the mean for the 12‐month period after the implementation was 15.1%. Thirty‐day all‐cause and 7‐day unplanned monthly readmission rates do not appear to have been impacted by the intervention (Figure 2). There was no evidence for either an immediate or sustained effect (Table 2).

Figure 2
(A) Thirty‐day all‐cause readmission rates over time. (B) Seven‐day unplanned readmission rates over time.
Interrupted Time Series of Readmission Rates
Hospital | Preimplementation Monthly Change (P Value) | Immediate Change (P Value) | Postimplementation Monthly Change (P Value) | P Value, Change in Trend(a)

NOTE: Regression coefficients represent the absolute change in the monthly readmission rate (percentage) per unit time (month). Models are adjusted for autocorrelation using the Cochrane‐Orcutt estimator. a P value compares the pre‐ and postimplementation trends in readmission rates.

30‐Day All‐Cause Readmission Rates
Hosp A | 0.023, stable (P = 0.153) | 0.480 (P = 0.991) | 0.100, increasing (P = 0.044) | 0.134
Hosp B | 0.061, increasing (P = 0.002) | 0.492 (P = 0.125) | 0.060, stable (P = 0.296) | 0.048
Hosp C | 0.026, stable (P = 0.413) | 0.447 (P = 0.585) | 0.046, stable (P = 0.629) | 0.476
Health System | 0.032, increasing (P = 0.014) | 0.344 (P = 0.302) | 0.026, stable (P = 0.499) | 0.881
7‐Day Unplanned Readmission Rates
Hosp A | 0.004, stable (P = 0.642) | 0.271 (P = 0.417) | 0.005, stable (P = 0.891) | 0.967
Hosp B | 0.012, stable (P = 0.201) | 0.298 (P = 0.489) | 0.038, stable (P = 0.429) | 0.602
Hosp C | 0.008, stable (P = 0.213) | 0.353 (P = 0.204) | 0.004, stable (P = 0.895) | 0.899
Health System | 0.005, stable (P = 0.358) | 0.003 (P = 0.990) | 0.010, stable (P = 0.712) | 0.583

DISCUSSION

In this proof‐of‐concept study, we demonstrated the feasibility of an automated readmission risk prediction model integrated into a health system's EHR for a mixed population of hospitalized medical and surgical patients. To our knowledge, this is the first study in a general population of hospitalized patients to examine the impact of providing readmission risk assessment on readmission rates. We used a simple prediction model potentially generalizable to EHRs and healthcare populations beyond our own.

Existing risk prediction models for hospital readmission have important limitations and are difficult to implement in clinical practice.[22] Prediction models for hospital readmission are often dependent on retrospective claims data, developed for specific patient populations, and not designed for use early in the course of hospitalization when transitional care interventions can be initiated.[22] In addition, the time required to gather the necessary data and calculate the risk score remains a barrier to the adoption of prediction models in practice. By automating the process of readmission risk prediction, we were able to help integrate risk assessment into the healthcare process across many providers in a large multihospital healthcare organization. This has allowed us to consistently share risk assessment in real time with all members of the inpatient team, facilitating a team‐based approach to discharge planning.[23]

Two prior studies have developed readmission risk prediction models designed to be implemented in the EHR. Amarasingham et al.[24] developed and implemented[25] a heart failure‐specific prediction model based on the 18‐item Tabak mortality score.[26] Bradley et al.[27] studied the predictive ability of a 26‐item score utilizing vital sign, cardiac rhythm, and nursing assessment data in a broader population of medicine and surgery patients. Although EHRs are developing rapidly, the majority currently do not support the use of many of the variables in these models. In addition, both models were complex, raising concerns about generalizability to other healthcare settings and populations.

A distinctive characteristic of our model is its simplicity. We were cognizant of the realities of running a prediction model in a high‐volume production environment and the diminishing returns of adding more variables. We thus favored simplicity at all stages of model development, with the associated belief that complexity could be added with future iterations once feasibility had been established. Finally, we were aware that we were constructing a medical decision support tool rather than a simple classifier.[26] As such, the optimal model was not purely driven by discriminative ability, but also by our subjective assessment of the optimal trade‐off between sensitivity and specificity (the test‐treatment threshold) for such a model.[26] To facilitate model assessment, we thus categorized the potential predictor variables and evaluated the test characteristics of each combination of categorized variables. Although the C statistic of a model using continuous variables will be higher than a model using categorical values, model performance at the chosen trade‐off point is unlikely to be different.

Although the overall predictive ability of our model was fair, we found that it was associated with clinically meaningful differences in readmission rates between those triggering and not triggering the flag. The 30‐day all‐cause readmission rate in the 12‐month prospective sample was 15.1%, yet among those flagged as being at high risk for readmission the readmission rate was 30.4%. Given resource constraints and the need to selectively apply potentially costly care transition interventions, this may in practice translate into a meaningful discriminative ability.

Readmission rates did not change significantly during the study period. A number of plausible reasons for this exist, including: (1) the current model may not exhibit sufficient predictive ability to classify those at high risk or impact the behavior of providers appropriately, (2) those patients classified as high risk of readmission may not be at high risk of readmissions that are preventable, (3) information provided by the model may not yet routinely be used such that it can affect care, or (4) providing readmission risk assessment alone is not sufficient to influence readmission rates, and the other interventions or organizational changes necessary to impact care of those defined as high risk have not yet been implemented or are not yet being performed routinely. If the primary reasons for our results are those outlined in numbers 3 or 4, then readmission rates should improve over time as the risk flag becomes more routinely used, and those interventions necessary to impact readmission rates of those defined as high risk are implemented and performed.

Limitations

There are several limitations of this intervention. First, the prediction model was developed using 30‐day all‐cause readmissions, rather than attempting to identify potentially preventable readmissions. Thirty‐day readmission rates may not be a good proxy for preventable readmissions,[18] and as a consequence, the ability to predict 30‐day readmissions may not ensure that a prediction model is able to predict preventable readmissions. Nonetheless, 30‐day readmission rates remain the most commonly used quality metric.

Second, the impact of the risk flag on provider behavior is uncertain. We did not formally assess how the readmission risk flag was used by healthcare team members. Informal assessment has, however, revealed that the readmission risk flag is gradually being adopted by different members of the care team including unit‐based pharmacists who are using the flag to prioritize the delivery of medication education, social workers who are using the flag to prompt providers to consider higher level services for patients at high risk of readmission, and patient navigators who are using the flag to prioritize follow‐up phone calls. As a result, we hope that the flag will ultimately improve the processes of care for high‐risk patients.

Third, we did not capture readmissions to hospitals outside of our healthcare system and have therefore underestimated the readmission rate in our population. However, our assessment of the effect of the risk flag on readmissions focused on relative readmission rates over time, and the use of the interrupted time series methodology should protect against secular changes in outside hospital readmission rates that were not associated with the intervention.

Fourth, it is possible that the prediction model implemented could be significantly improved by including additional variables or data available during the hospital stay. However, simple classification models using a single variable have repeatedly been shown to have the ability to compete favorably with state‐of‐the‐art multivariable classification models.[28]

Fifth, our study was limited to a single academic health system, and our experience may not be generalizable to smaller healthcare systems with limited EHR systems. However, the simplicity of our prediction model and the integration into a commercial EHR may improve the generalizability of our experience to other healthcare settings. Additionally, partly due to recent policy initiatives, the adoption of integrated EHR systems by hospitals is expected to continue at a rapid rate and become the standard of care within the near future.[29]

CONCLUSION

An automated prediction model was effectively integrated into an existing EHR and was able to identify patients on admission who are at risk for readmission within 30 days of discharge. Future work will aim to further examine the impact of the flag on readmission rates, further refine the prediction model, and gather data on how providers and care teams use the information provided by the flag.

Disclosure

Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, Grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, Grant UL1TR000003. The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
  2. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074–1081.
  3. Weeks WB, Lee RE, Wallace AE, West AN, Bagian JP. Do older rural and urban veterans experience different rates of unplanned readmission to VA and non‐VA hospitals? J Rural Health. 2009;25(1):62–69.
  4. Underwood MA, Danielsen B, Gilbert WM. Cost, causes and rates of rehospitalization of preterm infants. J Perinatol. 2007;27(10):614–619.
  5. Allaudeen N, Vidyarthi A, Maselli J, Auerbach A. Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54–60.
  6. Lanièce I, Couturier P, Dramé M, et al. Incidence and main factors associated with early unplanned hospital readmission among French medical inpatients aged 75 and over admitted through emergency units. Age Ageing. 2008;37(4):416–422.
  7. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
  8. Hospital Quality Alliance. Available at: http://www.hospitalqualityalliance.org/hospitalqualityalliance/qualitymeasures/qualitymeasures.html. Accessed March 6, 2013.
  9. Institute for Healthcare Improvement. Available at: http://www.ihi.org/explore/Readmissions/Pages/default.aspx. Accessed March 6, 2013.
  10. Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/HospitalQualityInits/OutcomeMeasures.html. Accessed March 6, 2013.
  11. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow‐up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613–620.
  12. Coleman EA, Smith JD, Frank JC, Min S‐J, Parry C, Kramer AM. Preparing patients and caregivers to participate in care delivered across settings: the Care Transitions Intervention. J Am Geriatr Soc. 2004;52(11):1817–1825.
  13. Naylor MD, Aiken LH, Kurtzman ET, Olds DM, Hirschman KB. The importance of transitional care in achieving health reform. Health Aff (Millwood). 2011;30(4):746–754.
  14. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
  15. University of Pennsylvania Health System Center for Evidence‐based Practice. Available at: http://www.uphs.upenn.edu/cep/. Accessed March 6, 2013.
  16. Umscheid CA, Williams K, Brennan PJ. Hospital‐based comparative effectiveness centers: translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):1352–1355.
  17. Hripcsak G. Writing Arden Syntax Medical Logic Modules. Comput Biol Med. 1994;24(5):331–363.
  18. Joynt KE, Jha AK. Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
  19. Rijsbergen CJ. Information Retrieval. 2nd ed. Oxford, UK: Butterworth‐Heinemann; 1979.
  20. Wagner AK, Soumerai SB, Zhang F, Ross‐Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27(4):299–309.
  21. Cochrane D, Orcutt GH. Application of least squares regression to relationships containing auto‐correlated error terms. J Am Stat Assoc. 1949;44:32–61.
  22. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
  23. Mitchell P, Wynia M, Golden R, et al. Core principles and values of effective team‐based health care. Available at: https://www.nationalahec.org/pdfs/VSRT‐Team‐Based‐Care‐Principles‐Values.pdf. Accessed March 19, 2013.
  24. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988.
  25. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study [published online ahead of print July 31, 2013]. BMJ Qual Saf. doi:10.1136/bmjqs‐2013‐001901.
  26. Pauker SG, Kassirer JP. The threshold approach to clinical decision making. N Engl J Med. 1980;302(20):1109–1117.
  27. Bradley EH, Yakusheva O, Horwitz LI, Sipsma H, Fletcher J. Identifying patients at increased risk for unplanned readmission. Med Care. 2013;51(9):761–766.
  28. Holte RC. Very simple classification rules perform well on most commonly used datasets. Mach Learn. 1993;11(1):63–91.
  29. Blumenthal D. Stimulating the adoption of health information technology. N Engl J Med. 2009;360(15):1477–1479.
Journal of Hospital Medicine - 8(12):689-695

Unplanned hospital readmissions are common, costly, and potentially avoidable. Approximately 20% of Medicare patients are readmitted within 30 days of discharge.[1] Readmission rates are estimated to be similarly high in other population subgroups,[2, 3, 4] with approximately 80% of patients[1, 5, 6] readmitted to the original discharging hospital. A recent systematic review suggested that 27% of readmissions may be preventable.[7]

Hospital readmissions have increasingly been viewed as a correctable marker of poor quality care and have been adopted by a number of organizations as quality indicators.[8, 9, 10] As a result, hospitals have important internal and external motivations to address readmissions. Identification of patients at high risk for readmissions may be an important first step toward preventing them. In particular, readmission risk assessment could be used to help providers target the delivery of resource‐intensive transitional care interventions[11, 12, 13, 14] to patients with the greatest needs. Such an approach is appealing because it allows hospitals to focus scarce resources where the impact may be greatest and provides a starting point for organizations struggling to develop robust models of transitional care delivery.

Electronic health records (EHRs) may prove to be an important component of strategies designed to risk stratify patients at the point of care. Algorithms integrated into the EHR that automatically generate risk predictions have the potential to (1) improve provider time efficiency by automating the prediction process, (2) improve consistency of data collection and risk score calculation, (3) increase adoption through improved usability, and (4) provide clinically important information in real‐time to all healthcare team members caring for a hospitalized patient.

We thus sought to derive a predictive model for 30‐day readmissions using data reliably present in our EHR at the time of admission, and integrate this predictive model into our hospital's EHR to create an automated prediction tool that identifies on admission patients at high risk for readmission within 30 days of discharge. In addition, we prospectively validated this model using the 12‐month period after implementation and examined the impact on readmissions.

METHODS

Setting

The University of Pennsylvania Health System (UPHS) includes 3 hospitals, with a combined capacity of over 1500 beds and 70,000 annual admissions. All hospitals currently utilize Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL) as their EHR. The study sample included all adult admissions to any of the 3 UPHS hospitals during the study period. Admissions to short procedure, rehabilitation, and hospice units were excluded. The study received expedited approval and a HIPAA waiver from the University of Pennsylvania institutional review board.

Development of Predictive Model

The UPHS Center for Evidence‐based Practice[15, 16] performed a systematic review to identify factors associated with hospital readmission within 30 days of discharge. We then examined the data available from our hospital EHR at the time of admission for those factors identified in the review. Using different threshold values and look‐back periods, we developed and tested 30 candidate prediction models using these variables alone and in combination (Table 1). Prediction models were evaluated using 24 months of historical data between August 1, 2009 and August 1, 2011.

Implementation

An automated readmission risk flag was then integrated into the EHR. Patients classified as being at high risk for readmission by the automated prediction model were flagged in the EHR on admission (Figure 1A). The flag can be double‐clicked to display a separate screen with information relevant to discharge planning, including inpatient and emergency department (ED) visits in the prior 12 months, as well as information about the primary team, length of stay, and admitting problem associated with those admissions (Figure 1B). The prediction model was integrated into our EHR using Arden Syntax for Medical Logic Modules.[17] Because the readmission risk screen presented the provider with a new window, it was developed in Microsoft .NET using C# and Windows Forms (Microsoft Corp., Redmond, WA).

Figure 1
(A) Screenshot of the electronic health record (EHR) with the readmission risk flag implemented and visible in the ninth column of the patient list. (B) A new screen with patient‐specific information relevant to discharge planning can be accessed within the EHR by double‐clicking a patient's risk flag.

The flag was visible on the patient lists of all providers who utilized the EHR. This included but was not limited to nurses, social workers, unit pharmacists, and physicians. At the time of implementation, educational events regarding the readmission risk flag were provided in forums targeting administrators, pharmacists, social workers, and housestaff. Information about the flag and recommendations for use were distributed through emails and broadcast screensaver messages disseminated throughout the inpatient units of the health system. Providers were asked to pay special attention to discharge planning for patients triggering the readmission risk flag, including medication reconciliation by pharmacists for these patients prior to discharge, and arrangement of available home services by social workers.


Two prior studies have developed readmission risk prediction models designed to be implemented into the EHR. Amarasingham et al.[24] developed and implemented[25] a heart failure‐specific prediction model based on the 18‐item Tabak mortality score.[26] Bradley et al.[27] studied in a broader population of medicine and surgery patients the predictive ability of a 26‐item score that utilized vital sign, cardiac rhythm, and nursing assessment data. Although EHRs are developing rapidly, currently the majority of EHRs do not support the use of many of the variables used in these models. In addition, both models were complex, raising concerns about generalizability to other healthcare settings and populations.

A distinctive characteristic of our model is its simplicity. We were cognizant of the realities of running a prediction model in a high‐volume production environment and the diminishing returns of adding more variables. We thus favored simplicity at all stages of model development, with the associated belief that complexity could be added with future iterations once feasibility had been established. Finally, we were aware that we were constructing a medical decision support tool rather than a simple classifier.[26] As such, the optimal model was not purely driven by discriminative ability, but also by our subjective assessment of the optimal trade‐off between sensitivity and specificity (the test‐treatment threshold) for such a model.[26] To facilitate model assessment, we thus categorized the potential predictor variables and evaluated the test characteristics of each combination of categorized variables. Although the C statistic of a model using continuous variables will be higher than a model using categorical values, model performance at the chosen trade‐off point is unlikely to be different.

Although the overall predictive ability of our model was fair, we found that it was associated with clinically meaningful differences in readmission rates between those triggering and not triggering the flag. The 30‐day all‐cause readmission rate in the 12‐month prospective sample was 15.1%, yet among those flagged as being at high risk for readmission the readmission rate was 30.4%. Given resource constraints and the need to selectively apply potentially costly care transition interventions, this may in practice translate into a meaningful discriminative ability.

Readmission rates did not change significantly during the study period. A number of plausible reasons for this exist, including: (1) the current model may not exhibit sufficient predictive ability to classify those at high risk or impact the behavior of providers appropriately, (2) those patients classified as high risk of readmission may not be at high risk of readmissions that are preventable, (3) information provided by the model may not yet routinely be used such that it can affect care, or (4) providing readmission risk assessment alone is not sufficient to influence readmission rates, and the other interventions or organizational changes necessary to impact care of those defined as high risk have not yet been implemented or are not yet being performed routinely. If the primary reasons for our results are those outlined in numbers 3 or 4, then readmission rates should improve over time as the risk flag becomes more routinely used, and those interventions necessary to impact readmission rates of those defined as high risk are implemented and performed.

Limitations

There are several limitations of this intervention. First, the prediction model was developed using 30‐day all‐cause readmissions, rather than attempting to identify potentially preventable readmissions. Thirty‐day readmission rates may not be a good proxy for preventable readmissions,[18] and as a consequence, the ability to predict 30‐day readmissions may not ensure that a prediction model is able to predict preventable readmissions. Nonetheless, 30‐day readmission rates remain the most commonly used quality metric.

Second, the impact of the risk flag on provider behavior is uncertain. We did not formally assess how the readmission risk flag was used by healthcare team members. Informal assessment has, however, revealed that the readmission risk flag is gradually being adopted by different members of the care team including unit‐based pharmacists who are using the flag to prioritize the delivery of medication education, social workers who are using the flag to prompt providers to consider higher level services for patients at high risk of readmission, and patient navigators who are using the flag to prioritize follow‐up phone calls. As a result, we hope that the flag will ultimately improve the processes of care for high‐risk patients.

Third, we did not capture readmissions to hospitals outside of our healthcare system and have therefore underestimated the readmission rate in our population. However, our assessment of the effect of the risk flag on readmissions focused on relative readmission rates over time, and the use of the interrupted time series methodology should protect against secular changes in outside hospital readmission rates that were not associated with the intervention.

Fourth, it is possible that the prediction model implemented could be significantly improved by including additional variables or data available during the hospital stay. However, simple classification models using a single variable have repeatedly been shown to have the ability to compete favorably with state‐of‐the‐art multivariable classification models.[28]

Fifth, our study was limited to a single academic health system, and our experience may not be generalizable to smaller healthcare systems with limited EHR systems. However, the simplicity of our prediction model and the integration into a commercial EHR may improve the generalizability of our experience to other healthcare settings. Additionally, partly due to recent policy initiatives, the adoption of integrated EHR systems by hospitals is expected to continue at a rapid rate and become the standard of care within the near future.[29]

CONCLUSION

An automated prediction model was effectively integrated into an existing EHR and was able to identify patients on admission who are at risk for readmission within 30 days of discharge. Future work will aim to further examine the impact of the flag on readmission rates, further refine the prediction model, and gather data on how providers and care teams use the information provided by the flag.

Disclosure

Dr. Umscheid‐s contribution to this project was supported in part by the National Center for Research Resources, Grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, Grant UL1TR000003. The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

Unplanned hospital readmissions are common, costly, and potentially avoidable. Approximately 20% of Medicare patients are readmitted within 30 days of discharge.[1] Readmission rates are estimated to be similarly high in other population subgroups,[2, 3, 4] with approximately 80% of patients[1, 5, 6] readmitted to the original discharging hospital. A recent systematic review suggested that 27% of readmissions may be preventable.[7]

Hospital readmissions have increasingly been viewed as a correctable marker of poor quality care and have been adopted by a number of organizations as quality indicators.[8, 9, 10] As a result, hospitals have important internal and external motivations to address readmissions. Identification of patients at high risk for readmissions may be an important first step toward preventing them. In particular, readmission risk assessment could be used to help providers target the delivery of resource‐intensive transitional care interventions[11, 12, 13, 14] to patients with the greatest needs. Such an approach is appealing because it allows hospitals to focus scarce resources where the impact may be greatest and provides a starting point for organizations struggling to develop robust models of transitional care delivery.

Electronic health records (EHRs) may prove to be an important component of strategies designed to risk stratify patients at the point of care. Algorithms integrated into the EHR that automatically generate risk predictions have the potential to (1) improve provider time efficiency by automating the prediction process, (2) improve consistency of data collection and risk score calculation, (3) increase adoption through improved usability, and (4) provide clinically important information in real‐time to all healthcare team members caring for a hospitalized patient.

We thus sought to derive a predictive model for 30‐day readmissions using data reliably present in our EHR at the time of admission, and integrate this predictive model into our hospital's EHR to create an automated prediction tool that identifies on admission patients at high risk for readmission within 30 days of discharge. In addition, we prospectively validated this model using the 12‐month period after implementation and examined the impact on readmissions.

METHODS

Setting

The University of Pennsylvania Health System (UPHS) includes 3 hospitals, with a combined capacity of over 1500 beds and 70,000 annual admissions. All hospitals currently utilize Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL) as their EHR. The study sample included all adult admissions to any of the 3 UPHS hospitals during the study period. Admissions to short procedure, rehabilitation, and hospice units were excluded. The study received expedited approval and a HIPAA waiver from the University of Pennsylvania institutional review board.

Development of Predictive Model

The UPHS Center for Evidence‐based Practice[15, 16] performed a systematic review to identify factors associated with hospital readmission within 30 days of discharge. We then examined the data available from our hospital EHR at the time of admission for those factors identified in the review. Using different threshold values and look‐back periods, we developed and tested 30 candidate prediction models using these variables alone and in combination (Table 1). Prediction models were evaluated using 24 months of historical data between August 1, 2009 and August 1, 2011.

Implementation

An automated readmission risk flag was then integrated into the EHR. Patients classified as being at high risk for readmission with the automated prediction model were flagged in the EHR on admission (Figure 1A). The flag can be double‐clicked to display a separate screen with information relevant to discharge planning including inpatient and emergency department (ED) visits in the prior 12 months, as well as information about the primary team, length of stay, and admitting problem associated with those admissions (Figure 1B). The prediction model was integrated into our EHR using Arden Syntax for Medical Logic Modules.[17] The readmission risk screen involved presenting the provider with a new screen and was thus developed in Microsoft .NET using C# and Windows Forms (Microsoft Corp., Redmond, WA).
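The core decision logic that the Medical Logic Module encodes can be illustrated with a brief Python sketch. This is a hypothetical rendering, not the production Arden Syntax code: function and field names are illustrative, and the threshold shown (2 prior inpatient admissions in a 12‐month lookback) is the single‐risk‐factor rule reported in the Results.

```python
from datetime import datetime, timedelta

LOOKBACK = timedelta(days=365)   # 12-month lookback window
THRESHOLD = 2                    # prior inpatient admissions needed to trigger the flag

def readmission_risk_flag(admit_time, prior_admission_times,
                          lookback=LOOKBACK, threshold=THRESHOLD):
    """Return True when the patient should be flagged as high risk on admission."""
    window_start = admit_time - lookback
    # Count only admissions that fall inside the lookback window.
    recent = [t for t in prior_admission_times if window_start <= t < admit_time]
    return len(recent) >= threshold

# A patient with two inpatient admissions in the past year triggers the flag.
admit = datetime(2012, 3, 1)
history = [datetime(2011, 6, 10), datetime(2011, 11, 2)]
print(readmission_risk_flag(admit, history))  # True
```

Evaluating the rule at admission time, rather than at discharge, is what allows the flag to appear on the patient list early enough to inform discharge planning.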

Figure 1
(A) Screenshot of the electronic health record (EHR) with the readmission risk flag implemented and visible in the ninth column of the patient list. (B) A new screen with patient‐specific information relevant to discharge planning can be accessed within the EHR by double‐clicking a patient's risk flag.

The flag was visible on the patient lists of all providers who utilized the EHR. This included but was not limited to nurses, social workers, unit pharmacists, and physicians. At the time of implementation, educational events regarding the readmission risk flag were provided in forums targeting administrators, pharmacists, social workers, and housestaff. Information about the flag and recommendations for use were distributed through emails and broadcast screensaver messages disseminated throughout the inpatient units of the health system. Providers were asked to pay special attention to discharge planning for patients triggering the readmission risk flag, including medication reconciliation by pharmacists for these patients prior to discharge, and arrangement of available home services by social workers.

The risk flag was 1 of 4 classes of interventions developed and endorsed by the health system in its efforts to reduce readmissions. Besides risk stratification, the other classes were: interdisciplinary rounding, patient education, and discharge communication. None of the interventions alone were expected to decrease readmissions, but as all 4 classes of interventions were implemented and performed routinely, the expectation was that they would work in concert to reduce readmissions.

Analysis

The primary outcome was all‐cause hospital readmission within the healthcare system within 30 days of discharge. Although this outcome is commonly used both in the literature and as a quality metric, significant debate persists as to its appropriateness.[18] Many of the factors driving 30‐day readmissions may be outside of the discharging hospital's control, and it has been argued that nearer‐term, nonelective readmission rates may provide a more meaningful quality metric.[18] Seven‐day unplanned readmissions were thus used as a secondary outcome measure for this study.

Sensitivity, specificity, positive and negative predictive values, C statistic, F score (the harmonic mean of positive predictive value and sensitivity),[19] and screen‐positive rate were calculated for each of the 30 prediction models evaluated using the historical data. The prediction model with the best balance of F score and screen‐positive rate was selected for integration into the EHR. Prospective validation of the selected prediction model was performed using the 12‐month period following implementation of the risk flag (September 2011 to September 2012).
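The test characteristics above all derive from a 2x2 confusion table. A minimal Python sketch, using illustrative counts rather than the study's actual data, shows how each reported quantity is computed:

```python
def rule_metrics(tp, fp, tn, fn):
    """Test characteristics used to compare the candidate prediction rules."""
    total = tp + fp + tn + fn
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),                 # positive predictive value
        "npv": tn / (tn + fn),                 # negative predictive value
        "screen_positive": (tp + fp) / total,  # proportion of patients flagged
        # F score: harmonic mean of PPV and sensitivity, which simplifies to
        # 2*PPV*sens / (PPV + sens) == 2*tp / (2*tp + fp + fn).
        "f_score": 2 * tp / (2 * tp + fp + fn),
    }

# Illustrative counts chosen to roughly mirror the selected rule's profile.
m = rule_metrics(tp=40, fp=89, tn=811, fn=60)
print(round(m["sensitivity"], 2), round(m["ppv"], 2), round(m["f_score"], 2))  # 0.4 0.31 0.35
```

Screening a candidate rule on both F score and screen‐positive rate, as done here, balances discriminative performance against the workload a flag imposes on the care team.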

To assess the impact of the automated prediction model on monthly readmission rate, we used the 24‐month period immediately before and the 12‐month period immediately after implementation of the readmission risk flag. Segmented regression analysis was performed testing for changes in level and slope of readmission rates between preimplementation and postimplementation time periods. This quasiexperimental interrupted time series methodology[20] allows us to control for secular trends in readmission rates and to assess the preimplementation trend (secular trend), the difference in rates immediately before and after the implementation (immediate effect), and the postimplementation change over time (sustained effect). We used Cochrane‐Orcutt estimation[21] to correct for serial autocorrelation.
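The segmented regression design maps the three quantities of interest (secular trend, immediate effect, sustained effect) onto three coefficients. The sketch below illustrates this in Python rather than Stata, on synthetic noise-free data so the coefficients are recovered exactly; it omits the Cochrane-Orcutt autocorrelation correction used in the actual analysis.

```python
import numpy as np

n_pre, n_post = 24, 12                      # months before/after the risk flag
month = np.arange(n_pre + n_post)
post = (month >= n_pre).astype(float)       # indicator for the postimplementation period
month_after = np.where(post == 1.0, month - n_pre, 0.0)  # months since implementation

# Synthetic monthly readmission rates built from known coefficients:
# baseline 14.4%, secular trend +0.02/month, immediate drop of 0.5,
# and a trend change of +0.03/month after implementation.
rate = 14.4 + 0.02 * month - 0.5 * post + 0.03 * month_after

# rate = b0 + b1*month + b2*post + b3*month_after
X = np.column_stack([np.ones(month.size), month, post, month_after])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
b0, b1, b2, b3 = coef
print(f"secular trend={b1:.3f}, immediate effect={b2:.3f}, trend change={b3:.3f}")
```

Because the design matrix separates level and slope changes, a flat series with no break would yield b2 and b3 near zero, which is the pattern reported in Table 2.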

All analyses were performed using Stata 12.1 software (Stata Corp, College Station, TX).

RESULTS

Predictors of Readmission

Our systematic review of the literature identified several patient characteristics and healthcare utilization factors predictive of 30‐day readmission risk. Utilization factors included length of stay, number of prior admissions, previous 30‐day readmissions, and previous ED visits. Patient characteristics included number of comorbidities, living alone, and payor. Evidence was inconsistent regarding threshold values for these variables.

Many variables readily available in our EHR were either found by the systematic review not to be reliably predictive of 30‐day readmission (including age and gender) or were not readily or reliably available on admission (including length of stay and payor). At the time of implementation, our EHR did not include vital sign or nursing assessment variables, so these were not considered for inclusion in our model.

Of the available variables, 3 were consistently accurate and available in the EHR at the time of patient admission: prior hospital admission, emergency department visit, and 30‐day readmission within UPHS. We then developed 30 candidate prediction models using a combination of these variables, including 1 and 2 prior admissions, ED visits, and 30‐day readmissions in the 6 and 12 months preceding the index visit (Table 1).
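One way to picture the candidate space: 9 single‐variable thresholds plus 6 combined rules, each evaluated over the 2 lookback windows, yields the 30 models of Table 1. A hedged Python sketch follows; the variable names and the at‐least‐threshold interpretation of the counts are illustrative assumptions, not the study's code.

```python
from itertools import product

# Single-variable rules: each of 3 variables at thresholds 1, 2, and 3.
SINGLE = [(var, k) for var, k in product(["admit", "ed", "readmit30"], [1, 2, 3])]

# Combined rules, mirroring the pairings evaluated in Table 1.
COMBINED = [
    [("admit", 1), ("ed", 1)], [("admit", 2), ("ed", 1)],
    [("admit", 1), ("readmit30", 1)], [("admit", 2), ("readmit30", 1)],
    [("readmit30", 1), ("ed", 1)], [("readmit30", 1), ("ed", 2)],
]
RULES = [[(v, k)] for v, k in SINGLE] + COMBINED   # 15 rules per lookback window

def rule_fires(rule, counts):
    """A rule fires when every (variable, threshold) condition is met."""
    return all(counts.get(var, 0) >= k for var, k in rule)

# 15 rules x 2 lookback windows (6 and 12 months) = 30 candidate models.
print(len(RULES) * 2)  # 30
```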

Development and Validation

We used 24 months of retrospective data, which included 120,396 discharges with 17,337 thirty‐day readmissions (14.4% 30‐day all‐cause readmission rate) to test the candidate prediction models. A single risk factor, 2 inpatient admissions in the past 12 months, was found to have the best balance of sensitivity (40%), positive predictive value (31%), and proportion of patients flagged (18%) (Table 1).

Retrospective and Prospective Evaluation of Prediction Models for 30‐Day All‐Cause Readmissions
Model | Sensitivity | Specificity | C Statistic | PPV | NPV | Screen Positive | F Score

NOTE: Abbreviations: 30‐day, prior 30‐day readmission; Admit, inpatient hospital admission; ED, emergency room visit; NPV, negative predictive value; PPV, positive predictive value. (a) Optimum prediction model.

Retrospective Evaluation of Prediction Rules

Lookback period: 6 months
Prior Admissions
1 | 53% | 74% | 0.640 | 26% | 91% | 30% | 0.350
2 | 32% | 90% | 0.610 | 35% | 89% | 13% | 0.333
3 | 20% | 96% | 0.578 | 44% | 88% | 7% | 0.274
Prior ED Visits
1 | 31% | 81% | 0.558 | 21% | 87% | 21% | 0.252
2 | 13% | 93% | 0.532 | 25% | 87% | 8% | 0.172
3 | 7% | 97% | 0.519 | 27% | 86% | 4% | 0.111
Prior 30‐Day Readmissions
1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.347
2 | 21% | 95% | 0.582 | 43% | 88% | 7% | 0.284
3 | 13% | 98% | 0.555 | 53% | 87% | 4% | 0.208
Combined Rules
Admit 1 & ED 1 | 22% | 92% | 0.568 | 31% | 88% | 10% | 0.255
Admit 2 & ED 1 | 15% | 96% | 0.556 | 40% | 87% | 5% | 0.217
Admit 1 & 30‐day 1 | 39% | 85% | 0.623 | 31% | 89% | 18% | 0.346
Admit 2 & 30‐day 1 | 29% | 92% | 0.603 | 37% | 89% | 11% | 0.324
30‐day 1 & ED 1 | 17% | 95% | 0.559 | 37% | 87% | 6% | 0.229
30‐day 1 & ED 2 | 8% | 98% | 0.527 | 40% | 86% | 3% | 0.132

Lookback period: 12 months
Prior Admission
1 | 60% | 68% | 0.593 | 24% | 91% | 36% | 0.340
2 (a) | 40% | 85% | 0.624 | 31% | 89% | 18% | 0.354
3 | 28% | 92% | 0.600 | 37% | 88% | 11% | 0.318
Prior ED Visit
1 | 38% | 74% | 0.560 | 20% | 88% | 28% | 0.260
2 | 20% | 88% | 0.544 | 23% | 87% | 13% | 0.215
3 | 8% | 96% | 0.523 | 27% | 86% | 4% | 0.126
Prior 30‐Day Readmission
1 | 43% | 84% | 0.630 | 30% | 90% | 20% | 0.353
2 | 24% | 94% | 0.592 | 41% | 88% | 9% | 0.305
3 | 11% | 98% | 0.548 | 54% | 87% | 3% | 0.186
Combined Rules
Admit 1 & ED 1 | 29% | 87% | 0.580 | 27% | 88% | 15% | 0.281
Admit 2 & ED 1 | 22% | 93% | 0.574 | 34% | 88% | 9% | 0.266
Admit 1 & 30‐day 1 | 42% | 84% | 0.630 | 30% | 90% | 14% | 0.353
Admit 2 & 30‐day 1 | 34% | 89% | 0.615 | 34% | 89% | 14% | 0.341
30‐day 1 & ED 1 | 21% | 93% | 0.569 | 35% | 88% | 9% | 0.261
30‐day 1 & ED 2 | 13% | 96% | 0.545 | 37% | 87% | 5% | 0.187

Prospective Evaluation of Prediction Rule
30‐Day All‐Cause | 39% | 84% | 0.614 | 30% | 89% | 18% | 0.339

Prospective validation of the prediction model was performed using the 12‐month period directly following readmission risk flag implementation. During this period, the 30‐day all‐cause readmission rate was 15.1%. Sensitivity (39%), positive predictive value (30%), and proportion of patients flagged (18%) were consistent with the values derived from the retrospective data, supporting the reproducibility and predictive stability of the chosen risk prediction model (Table 1). The C statistic of the model was also consistent between the retrospective and prospective datasets (0.62 and 0.61, respectively).

Readmission Rates

The mean 30‐day all‐cause readmission rate for the 24‐month period prior to the intervention was 14.4%, whereas the mean for the 12‐month period after the implementation was 15.1%. Thirty‐day all‐cause and 7‐day unplanned monthly readmission rates do not appear to have been impacted by the intervention (Figure 2). There was no evidence for either an immediate or sustained effect (Table 2).

Figure 2
(A) Thirty‐day all‐cause readmission rates over time. (B) Seven‐day unplanned readmission rates over time.
Interrupted Time Series of Readmission Rates
Hospital | Preimplementation Monthly % Change (Trend) | P Value | Immediate % Change | P Value | Postimplementation Monthly % Change (Trend) | P Value | P Value, Change in Trend (a)

NOTE: Regression coefficients represent the absolute change in the monthly readmission rate (percentage) per unit time (month). Models are adjusted for autocorrelation using the Cochrane‐Orcutt estimator. (a) P value compares the pre‐ and postimplementation trends in readmission rates.

30‐Day All‐Cause Readmission Rates
Hosp A | 0.023 (Stable) | 0.153 | 0.480 | 0.991 | 0.100 (Increasing) | 0.044 | 0.134
Hosp B | 0.061 (Increasing) | 0.002 | 0.492 | 0.125 | 0.060 (Stable) | 0.296 | 0.048
Hosp C | 0.026 (Stable) | 0.413 | 0.447 | 0.585 | 0.046 (Stable) | 0.629 | 0.476
Health System | 0.032 (Increasing) | 0.014 | 0.344 | 0.302 | 0.026 (Stable) | 0.499 | 0.881
7‐Day Unplanned Readmission Rates
Hosp A | 0.004 (Stable) | 0.642 | 0.271 | 0.417 | 0.005 (Stable) | 0.891 | 0.967
Hosp B | 0.012 (Stable) | 0.201 | 0.298 | 0.489 | 0.038 (Stable) | 0.429 | 0.602
Hosp C | 0.008 (Stable) | 0.213 | 0.353 | 0.204 | 0.004 (Stable) | 0.895 | 0.899
Health System | 0.005 (Stable) | 0.358 | 0.003 | 0.990 | 0.010 (Stable) | 0.712 | 0.583

DISCUSSION

In this proof‐of‐concept study, we demonstrated the feasibility of an automated readmission risk prediction model integrated into a health system's EHR for a mixed population of hospitalized medical and surgical patients. To our knowledge, this is the first study in a general population of hospitalized patients to examine the impact of providing readmission risk assessment on readmission rates. We used a simple prediction model potentially generalizable to EHRs and healthcare populations beyond our own.

Existing risk prediction models for hospital readmission have important limitations and are difficult to implement in clinical practice.[22] Prediction models for hospital readmission are often dependent on retrospective claims data, developed for specific patient populations, and not designed for use early in the course of hospitalization when transitional care interventions can be initiated.[22] In addition, the time required to gather the necessary data and calculate the risk score remains a barrier to the adoption of prediction models in practice. By automating the process of readmission risk prediction, we were able to help integrate risk assessment into the healthcare process across many providers in a large multihospital healthcare organization. This has allowed us to consistently share risk assessment in real time with all members of the inpatient team, facilitating a team‐based approach to discharge planning.[23]

Two prior studies have developed readmission risk prediction models designed to be implemented in the EHR. Amarasingham et al.[24] developed and implemented[25] a heart failure‐specific prediction model based on the 18‐item Tabak mortality score.[26] Bradley et al.[27] studied the predictive ability of a 26‐item score that utilized vital sign, cardiac rhythm, and nursing assessment data in a broader population of medicine and surgery patients. Although EHRs are developing rapidly, the majority currently do not support the use of many of the variables in these models. In addition, both models were complex, raising concerns about generalizability to other healthcare settings and populations.

A distinctive characteristic of our model is its simplicity. We were cognizant of the realities of running a prediction model in a high‐volume production environment and the diminishing returns of adding more variables. We thus favored simplicity at all stages of model development, with the associated belief that complexity could be added with future iterations once feasibility had been established. Finally, we were aware that we were constructing a medical decision support tool rather than a simple classifier.[26] As such, the optimal model was not purely driven by discriminative ability, but also by our subjective assessment of the optimal trade‐off between sensitivity and specificity (the test‐treatment threshold) for such a model.[26] To facilitate model assessment, we thus categorized the potential predictor variables and evaluated the test characteristics of each combination of categorized variables. Although the C statistic of a model using continuous variables will be higher than a model using categorical values, model performance at the chosen trade‐off point is unlikely to be different.

Although the overall predictive ability of our model was fair, we found that it was associated with clinically meaningful differences in readmission rates between those triggering and not triggering the flag. The 30‐day all‐cause readmission rate in the 12‐month prospective sample was 15.1%, yet among those flagged as being at high risk for readmission the readmission rate was 30.4%. Given resource constraints and the need to selectively apply potentially costly care transition interventions, this may in practice translate into a meaningful discriminative ability.

Readmission rates did not change significantly during the study period. A number of plausible reasons for this exist, including: (1) the current model may not exhibit sufficient predictive ability to classify those at high risk or impact the behavior of providers appropriately, (2) those patients classified as high risk of readmission may not be at high risk of readmissions that are preventable, (3) information provided by the model may not yet routinely be used such that it can affect care, or (4) providing readmission risk assessment alone is not sufficient to influence readmission rates, and the other interventions or organizational changes necessary to impact care of those defined as high risk have not yet been implemented or are not yet being performed routinely. If the primary reasons for our results are those outlined in numbers 3 or 4, then readmission rates should improve over time as the risk flag becomes more routinely used, and those interventions necessary to impact readmission rates of those defined as high risk are implemented and performed.

Limitations

There are several limitations of this intervention. First, the prediction model was developed using 30‐day all‐cause readmissions, rather than attempting to identify potentially preventable readmissions. Thirty‐day readmission rates may not be a good proxy for preventable readmissions,[18] and as a consequence, the ability to predict 30‐day readmissions may not ensure that a prediction model is able to predict preventable readmissions. Nonetheless, 30‐day readmission rates remain the most commonly used quality metric.

Second, the impact of the risk flag on provider behavior is uncertain. We did not formally assess how the readmission risk flag was used by healthcare team members. Informal assessment has, however, revealed that the readmission risk flag is gradually being adopted by different members of the care team including unit‐based pharmacists who are using the flag to prioritize the delivery of medication education, social workers who are using the flag to prompt providers to consider higher level services for patients at high risk of readmission, and patient navigators who are using the flag to prioritize follow‐up phone calls. As a result, we hope that the flag will ultimately improve the processes of care for high‐risk patients.

Third, we did not capture readmissions to hospitals outside of our healthcare system and have therefore underestimated the readmission rate in our population. However, our assessment of the effect of the risk flag on readmissions focused on relative readmission rates over time, and the use of the interrupted time series methodology should protect against secular changes in outside hospital readmission rates that were not associated with the intervention.

Fourth, it is possible that the prediction model implemented could be significantly improved by including additional variables or data available during the hospital stay. However, simple classification models using a single variable have repeatedly been shown to have the ability to compete favorably with state‐of‐the‐art multivariable classification models.[28]

Fifth, our study was limited to a single academic health system, and our experience may not be generalizable to smaller healthcare systems with limited EHR systems. However, the simplicity of our prediction model and the integration into a commercial EHR may improve the generalizability of our experience to other healthcare settings. Additionally, partly due to recent policy initiatives, the adoption of integrated EHR systems by hospitals is expected to continue at a rapid rate and become the standard of care within the near future.[29]

CONCLUSION

An automated prediction model was effectively integrated into an existing EHR and was able to identify patients on admission who are at risk for readmission within 30 days of discharge. Future work will aim to further examine the impact of the flag on readmission rates, further refine the prediction model, and gather data on how providers and care teams use the information provided by the flag.

Disclosure

Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, Grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, Grant UL1TR000003. The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
  2. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074–1081.
  3. Weeks WB, Lee RE, Wallace AE, West AN, Bagian JP. Do older rural and urban veterans experience different rates of unplanned readmission to VA and non‐VA hospitals? J Rural Health. 2009;25(1):62–69.
  4. Underwood MA, Danielsen B, Gilbert WM. Cost, causes and rates of rehospitalization of preterm infants. J Perinatol. 2007;27(10):614–619.
  5. Allaudeen N, Vidyarthi A, Maselli J, Auerbach A. Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54–60.
  6. Lanièce I, Couturier P, Dramé M, et al. Incidence and main factors associated with early unplanned hospital readmission among French medical inpatients aged 75 and over admitted through emergency units. Age Ageing. 2008;37(4):416–422.
  7. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
  8. Hospital Quality Alliance. Available at: http://www.hospitalqualityalliance.org/hospitalqualityalliance/qualitymeasures/qualitymeasures.html. Accessed March 6, 2013.
  9. Institute for Healthcare Improvement. Available at: http://www.ihi.org/explore/Readmissions/Pages/default.aspx. Accessed March 6, 2013.
  10. Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/HospitalQualityInits/OutcomeMeasures.html. Accessed March 6, 2013.
  11. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow‐up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613–620.
  12. Coleman EA, Smith JD, Frank JC, Min S‐J, Parry C, Kramer AM. Preparing patients and caregivers to participate in care delivered across settings: the Care Transitions Intervention. J Am Geriatr Soc. 2004;52(11):1817–1825.
  13. Naylor MD, Aiken LH, Kurtzman ET, Olds DM, Hirschman KB. The importance of transitional care in achieving health reform. Health Aff (Millwood). 2011;30(4):746–754.
  14. Hansen LO, Young RS, Hinami K, Leung A, Williams MV. Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
  15. University of Pennsylvania Health System Center for Evidence‐based Practice. Available at: http://www.uphs.upenn.edu/cep/. Accessed March 6, 2013.
  16. Umscheid CA, Williams K, Brennan PJ. Hospital‐based comparative effectiveness centers: translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):1352–1355.
  17. Hripcsak G. Writing Arden Syntax Medical Logic Modules. Comput Biol Med. 1994;24(5):331–363.
  18. Joynt KE, Jha AK. Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
  19. van Rijsbergen CJ. Information Retrieval. 2nd ed. Oxford, UK: Butterworth‐Heinemann; 1979.
  20. Wagner AK, Soumerai SB, Zhang F, Ross‐Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27(4):299–309.
  21. Cochrane D, Orcutt GH. Application of least squares regression to relationships containing auto‐correlated error terms. J Am Stat Assoc. 1949;44:32–61.
  22. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
  23. Mitchell P, Wynia M, Golden R, et al. Core principles and values of effective team‐based health care. Available at: https://www.nationalahec.org/pdfs/VSRT‐Team‐Based‐Care‐Principles‐Values.pdf. Accessed March 19, 2013.
  24. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48(11):981–988.
  25. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study [published online ahead of print July 31, 2013]. BMJ Qual Saf. doi:10.1136/bmjqs‐2013‐001901.
  26. Pauker SG, Kassirer JP. The threshold approach to clinical decision making. N Engl J Med. 1980;302(20):1109–1117.
  27. Bradley EH, Yakusheva O, Horwitz LI, Sipsma H, Fletcher J. Identifying patients at increased risk for unplanned readmission. Med Care. 2013;51(9):761–766.
  28. Holte RC. Very simple classification rules perform well on most commonly used datasets. Mach Learn. 1993;11(1):63–91.
  29. Blumenthal D. Stimulating the adoption of health information technology. N Engl J Med. 2009;360(15):1477–1479.
Issue
Journal of Hospital Medicine - 8(12)
Page Number
689-695
Display Headline
The readmission risk flag: Using the electronic health record to automatically identify patients at risk for 30‐day readmission
Article Source

© 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Craig A. Umscheid, MD, Penn Medicine, 3535 Market Street, Mezzanine, Suite 50, Philadelphia, PA 19104; Telephone: 215‐349‐8098; Fax: 215‐349‐5829; E‐mail: craig.umscheid@uphs.upenn.edu