An electronic order set for acute myocardial infarction is associated with improved patient outcomes through better adherence to clinical practice guidelines

Benjamin J. Turk, BA
Division of Research, Kaiser Permanente Northern California

Although the prevalence of coronary heart disease and death from acute myocardial infarction (AMI) have declined steadily, about 935,000 heart attacks still occur annually in the United States, with approximately one‐third of these being fatal.[1, 2, 3] Studies have demonstrated decreased 30‐day and longer‐term mortality in AMI patients who receive evidence‐based treatment, including aspirin, β‐blockers, angiotensin‐converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), anticoagulation therapy, and statins.[4, 5, 6, 7] Despite clinical practice guidelines (CPGs) outlining evidence‐based care and considerable efforts to implement processes that improve patient outcomes, delivery of effective therapy remains suboptimal.[8] For example, the Hospital Quality Alliance Program[9] found that in AMI patients, use of aspirin on admission was only 81% to 92%, β‐blocker on admission 75% to 85%, and ACE inhibitors for left ventricular dysfunction 71% to 74%.

Efforts to increase adherence to CPGs and improve patient outcomes in AMI have resulted in variable degrees of success. They include promotion of CPGs,[4, 5, 6, 7] physician education with feedback, report cards, care paths, registries,[10] Joint Commission standardized measures,[11] and paper checklists or order sets (OS).[12, 13]

In this report, we describe the association between use of an evidence‐based, electronic OS for AMI (AMI‐OS) and better adherence to CPGs. This AMI‐OS was implemented in the inpatient electronic medical records (EMRs) of a large integrated healthcare delivery system, Kaiser Permanente Northern California (KPNC). The purpose of our investigation was to determine (1) whether use of the AMI‐OS was associated with improved AMI processes and patient outcomes, and (2) whether these associations persisted after risk adjustment using a comprehensive severity of illness scoring system.

MATERIALS AND METHODS

This project was approved by the KPNC institutional review board.

Under a mutual exclusivity arrangement, salaried physicians of The Permanente Medical Group, Inc., care for 3.4 million Kaiser Foundation Health Plan, Inc. members at facilities owned by Kaiser Foundation Hospitals, Inc. All KPNC facilities employ the same information systems with a common medical record number and can track care covered by the plan but delivered elsewhere.[14] Our setting consisted of 21 KPNC hospitals described in previous reports,[15, 16, 17, 18] using the same commercially available EMR system that includes computerized physician order entry (CPOE). Deployment of the customized inpatient Epic EMR (www.epicsystems.com), known internally as KP HealthConnect (KPHC), began in 2006 and was completed in 2010.

In this EMR's CPOE, physicians have options to select individual orders (a la carte) or they can utilize an OS, which is a collection of the most appropriate orders associated with specific diagnoses, procedures, or treatments. The evidence‐based AMI‐OS studied in this project was developed by a multidisciplinary team (for detailed components see Supporting Appendices 1–5 in the online version of this article).

Our study focused on the first set of hospital admission orders for patients with AMI. The study sample consisted of patients meeting these criteria: (1) age ≥18 years at admission; (2) admitted to a KPNC hospital for an overnight stay between September 28, 2008 and December 31, 2010; (3) principal diagnosis of AMI (International Classification of Diseases, 9th Revision [ICD‐9][19] codes 410.00, 01, 10, 11, 20, 21, 30, 31, 40, 41, 50, 51, 60, 61, 70, 71, 80, 90, and 91); and (4) KPHC operational at the hospital for at least 3 months (for cohort assembly descriptions see Supporting Appendices 1–5 in the online version of this article). At the study hospitals, troponin I was measured using the Beckman Access AccuTnI assay (Beckman Coulter, Inc., Brea, CA), whose upper reference limit (99th percentile) is 0.04 ng/mL. We excluded patients initially hospitalized for AMI at a non‐KPNC site and transferred into a study hospital.

The data processing methods we employed have been detailed elsewhere.[14, 15, 17, 20, 21, 22] The dependent outcome variables were total hospital length of stay, inpatient mortality, 30‐day mortality, and all‐cause rehospitalization within 30 days of discharge. Linked state mortality data were unavailable for the entire study period, so we ascertained 30‐day mortality based on the combination of KPNC patient demographic data and publicly available Social Security Administration decedent files. We ascertained rehospitalization by scanning KPNC hospitalization databases, which also track out‐of‐plan use.

The dependent process variables were use of aspirin within 24 hours of admission, β‐blockers, anticoagulation, ACE inhibitors or ARBs, and statins. The primary independent variable of interest was whether or not the admitting physician employed the AMI‐OS when admission orders were entered. Consequently, this variable is dichotomous (AMI‐OS vs a la carte).

We controlled for acute illness severity and chronic illness burden using a recent modification[22] of an externally validated risk‐adjustment system applicable to all hospitalized patients.[15, 16, 23, 24, 25] Our methodology included vital signs, neurological status checks, and laboratory test results obtained in the 72 hours preceding hospital admission; comorbidities were captured longitudinally using data from the year preceding hospitalization (for comparison purposes, we also assigned a Charlson Comorbidity Index score[26]).

End‐of‐life care directives are mandatory on admission at KPNC hospitals. Physicians have 4 options: full code, partial code, do not resuscitate, and comfort care only. Because of small numbers in some categories, we collapsed these 4 categories into full code and not full code. Because patients' care directives may change, we elected to capture the care directive in effect when a patient first entered a hospital unit other than the emergency department (ED).

Two authors (M.B., P.C.L.), one of whom is a board‐certified cardiologist, reviewed all admission electrocardiograms and made a consensus determination as to whether or not criteria for ST‐segment elevation myocardial infarction (STEMI) were present (ie, new ST‐segment elevation or left bundle branch block); we also reviewed the records of all patients with missing troponin I data to confirm the AMI diagnosis.

Statistical Methods

We performed unadjusted comparisons between AMI‐OS and non‐AMI‐OS patients using the t test or the χ2 test, as appropriate.
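As an illustration, these comparisons can be reproduced with SciPy; the 2×2 aspirin counts below are taken from Table 1, while the individual-level ages are simulated from the reported group means and standard deviations (the real patient-level data are not public).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# t test on a continuous covariate: synthetic ages drawn from the
# means/SDs reported in Table 1 (69.4 +/- 13.8 vs 69.2 +/- 13.8).
age_os = rng.normal(69.4, 13.8, size=3531)   # AMI-OS group
age_alc = rng.normal(69.2, 13.8, size=2348)  # a la carte group
t_stat, t_p = stats.ttest_ind(age_os, age_alc)

# Chi-square test on a 2x2 table: aspirin within 24 hours, counts from Table 1.
aspirin = np.array([[3470, 3531 - 3470],    # AMI-OS: yes, no
                    [2202, 2348 - 2202]])   # a la carte: yes, no
chi2, chi_p, dof, _ = stats.chi2_contingency(aspirin)

print(f"t test p = {t_p:.2f}; chi-square p = {chi_p:.2e}")
```

With the Table 1 aspirin counts, the chi-square test is highly significant, matching the reported P < 0.0001.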

We hypothesized that the AMI‐OS plays a mediating role on patient outcomes through its effect on adherence to recommended treatment. We evaluated this hypothesis for inpatient mortality by first fitting a multivariable logistic regression model with inpatient mortality as the outcome and either the 5 evidence‐based therapies or the total number of evidence‐based therapies used (0–2, 3, 4, or 5) as the predictors, controlling for age, gender, presence of STEMI, troponin I, comorbidities, illness severity, ED length of stay (LOS), care directive status, and timing of cardiac catheterization referral as covariates to confirm the protective effect of these therapies on mortality. We then used the same model to estimate the effect of AMI‐OS on inpatient mortality, substituting the therapies with AMI‐OS as the predictor and using the same covariates. Last, we included both the therapies and the AMI‐OS in the model to evaluate their combined effects.[27]
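To make this modeling step concrete, the sketch below fits such a logistic regression on synthetic data. The covariates and all effect sizes are assumed for illustration and are not the study's; the study also modeled the therapy count as categories (0–2, 3, 4, 5), whereas this sketch uses a single count term for brevity.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 4000

# Synthetic stand-ins for the study's risk adjusters (a LAPS2-like severity
# score, a STEMI flag) and the exposure of interest: the number of
# evidence-based therapies received (0-5). All parameters are assumed.
severity = rng.gamma(2.0, 20.0, n)
stemi = rng.binomial(1, 0.09, n)
n_therapies = rng.integers(0, 6, n)

# Simulate inpatient death with an assumed protective therapy effect.
logit = -4.0 + 0.02 * severity + 1.3 * stemi - 0.35 * n_therapies
died = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# Multivariable logistic regression, as in the study's first modeling step.
X = np.column_stack([severity, stemi, n_therapies])
fit = LogisticRegression(max_iter=1000).fit(X, died)

# Adjusted odds ratio per additional therapy; < 1 indicates protection.
aor_per_therapy = float(np.exp(fit.coef_[0][2]))
print(f"AOR per additional therapy: {aor_per_therapy:.2f}")
```

The same model structure, with the AMI-OS indicator substituted for (or added to) the therapy term, gives the second and third models described above.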

We used 2 different methods to estimate the effects of AMI‐OS and number of therapies provided on the outcomes while adjusting for observed baseline differences between the 2 groups of patients: propensity score matching, which estimates the average treatment effect for the treated,[28, 29] and inverse probability of treatment weighting, which is used to estimate the average treatment effect.[30, 31, 32] The propensity score was defined as the probability of receiving the intervention for a patient with specific predictive factors.[33, 34] We computed a propensity score for each patient by using logistic regression, with the dependent variable being receipt of AMI‐OS and the independent variables being the covariates used for the multivariate logistic regression as well as ICD‐9 code for final diagnosis. We calculated the Mahalanobis distance between patients who received AMI‐OS (cases) and patients who did not receive AMI‐OS (controls) using the same set of covariates. We matched each case to a single control within the same facility based on the nearest available Mahalanobis metric matching within calipers defined as a maximum width of 0.2 standard deviations of the logit of the estimated propensity score.[29, 35] We estimated the odds ratios for the binary dependent variables based on a conditional logistic regression model to account for the matched pairs design.[28] We used a generalized linear model with the log‐transformed LOS as the outcome to estimate the ratio of the LOS geometric mean of the cases to the controls. We calculated the relative risk for patients receiving AMI‐OS via the inverse probability weighting method by first defining a weight for each patient: we assigned a weight of 1/ps_i to patients who received the AMI‐OS and a weight of 1/(1 − ps_i) to patients who did not, where ps_i denotes the propensity score for patient i.
We used a logistic regression model for the binary dependent variables with the same set of covariates described above to estimate the adjusted odds ratios while weighting each observation by its corresponding weight. Last, we used a weighted generalized linear model to estimate the AMI‐OS effect on the log‐transformed LOS.
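The propensity and weighting machinery can be sketched as follows. The covariates and treatment-assignment model are synthetic placeholders, but the caliper (0.2 standard deviations of the logit of the propensity score) and the inverse-probability weights follow the definitions in the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000

# Hypothetical baseline covariates used to model receipt of the AMI-OS.
X = rng.normal(size=(n, 3))
p_treat = 1.0 / (1.0 + np.exp(-(0.5 * X[:, 0] - 0.3 * X[:, 1])))
treated = rng.binomial(1, p_treat)

# Propensity score: estimated P(received AMI-OS | covariates).
ps = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Matching caliper: 0.2 SD of the logit of the propensity score, as in the text.
logit_ps = np.log(ps / (1.0 - ps))
caliper = 0.2 * logit_ps.std()

# IPTW weights: 1/ps_i for treated patients, 1/(1 - ps_i) for untreated;
# outcome models are then fit with each observation weighted accordingly.
iptw = np.where(treated == 1, 1.0 / ps, 1.0 / (1.0 - ps))

print(f"caliper = {caliper:.3f}; weight range {iptw.min():.2f}-{iptw.max():.2f}")
```

In the study, matching additionally used the Mahalanobis distance within these calipers and within facility; that nearest-neighbor step is omitted here for brevity.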

RESULTS

Table 1 summarizes the characteristics of the 5,879 patients. It shows that AMI‐OS patients were more likely to receive evidence‐based therapies for AMI (aspirin, β‐blockers, ACE inhibitors or ARBs, anticoagulation, and statins) and had a 46% lower inpatient mortality rate (3.51% vs 6.52%) and a 33% lower 30‐day mortality rate (5.66% vs 8.48%). AMI‐OS patients were also at lower risk for an adverse outcome than non‐AMI‐OS patients: they had lower peak troponin I values, lower severity of illness (lower Laboratory‐Based Acute Physiology Score, version 2 [LAPS2] scores), lower comorbidity burdens (lower Comorbidity Point Score, version 2 [COPS2] and Charlson scores), and lower global predicted mortality risk, and they were less likely to have required intensive care. AMI‐OS patients were at higher risk of death than non‐AMI‐OS patients with respect to only 1 variable (being full code at the time of admission); although this difference was statistically significant, it was of minor clinical impact (86% vs 88%).

Description of Study Cohort
Variable | AMI Order Set, N=3,531b | A La Carte Orders, N=2,348b | P Valuea
  • NOTE: Abbreviations: ACE, angiotensin‐converting enzyme; AMI, acute myocardial infarction; AMI‐OS, acute myocardial infarction order set; ARBs, angiotensin receptor blockers; COPS2, Comorbidity Point Score, version 2; CPOE, computerized physician order entry; ED, emergency department; ICU, intensive care unit; LAPS2, Laboratory‐based Acute Physiology Score, version 2; SD, standard deviation; STEMI, ST‐segment elevation myocardial infarction.

  • χ2 or t test, as appropriate. See text for further methodological details.

  • AMI‐OS is an evidence‐based electronic checklist that guides physicians to order the most effective therapy by CPOE during the hospital admission process. In contrast, a la carte means that the clinician did not use the AMI‐OS, but rather entered individual orders via CPOE. See text for further details.

  • STEMI as evident by electrocardiogram. See text for details on ascertainment.

  • See text and reference 31 for details on how this score was assigned.

  • The COPS2 is a longitudinal, diagnosis‐based score assigned monthly that integrates all diagnoses incurred by a patient in the preceding 12 months. It is a continuous variable that can range between a minimum of zero and a theoretical maximum of 1,014, although <0.05% of Kaiser Permanente hospitalized patients have a COPS2 exceeding 241, and none have had a COPS2 >306. Increasing values of the COPS2 are associated with increasing mortality. See text and references 20 and 27 for additional details on the COPS2.

  • The LAPS2 integrates results from vital signs, neurological status checks, and 15 laboratory tests in the 72 hours preceding hospitalization into a single continuous variable. Increasing degrees of physiologic derangement are reflected in a higher LAPS2, which can range between a minimum of zero and a theoretical maximum of 414, although <0.05% of Kaiser Permanente hospitalized patients have a LAPS2 exceeding 227, and none have had a LAPS2 >282. Increasing values of LAPS2 are associated with increasing mortality. See text and references 20 and 27 for additional details on the LAPS2.

  • See text for details of specific therapies and how they were ascertained using the electronic medical record.

  • Percent mortality risk based on age, sex, diagnosis, COPS2, LAPS2, and care directive using a predictive model described in text and in reference 22.

  • See text for description of how end‐of‐life care directives are captured in the electronic medical record.

  • Direct admit means that the first hospital unit in which a patient stayed was the ICU; transfer refers to those patients transferred to the ICU from another unit in the hospital.

Age, y, median (mean±SD) | 70 (69.4±13.8) | 70 (69.2±13.8) | 0.5603
Age >65 years (%) | 2,134 (60.4%) | 1,415 (60.3%) | 0.8949
Sex (% male) | 2,202 (62.4%) | 1,451 (61.8%) | 0.6620
STEMI (% with)c | 166 (4.7%) | 369 (15.7%) | <0.0001
Troponin I (% missing) | 111 (3.1%) | 151 (6.4%) | <0.0001
Troponin I, median (mean±SD) | 0.57 (3.0±8.2) | 0.27 (2.5±8.9) | 0.0651
Charlson score, median (mean±SD)d | 2.0 (2.5±1.5) | 2.0 (2.7±1.6) | <0.0001
COPS2, median (mean±SD)e | 14.0 (29.8±31.7) | 17.0 (34.3±34.4) | <0.0001
LAPS2, median (mean±SD)e | 0.0 (35.6±43.5) | 27.0 (40.9±48.1) | <0.0001
Length of stay in ED, h, median (mean±SD) | 5.7 (5.9±3.0) | 5.7 (5.4±3.1) | <0.0001
Patients receiving aspirin within 24 hoursf | 3,470 (98.3%) | 2,202 (93.8%) | <0.0001
Patients receiving anticoagulation therapyf | 2,886 (81.7%) | 1,846 (78.6%) | 0.0032
Patients receiving β‐blockersf | 3,196 (90.5%) | 1,926 (82.0%) | <0.0001
Patients receiving ACE inhibitors or ARBsf | 2,395 (67.8%) | 1,244 (53.0%) | <0.0001
Patients receiving statinsf | 3,337 (94.5%) | 1,975 (84.1%) | <0.0001
Patient received 1 or more therapies | 3,531 (100.0%) | 2,330 (99.2%) | <0.0001
Patient received 2 or more therapies | 3,521 (99.7%) | 2,266 (96.5%) | <0.0001
Patient received 3 or more therapies | 3,440 (97.4%) | 2,085 (88.8%) | <0.0001
Patient received 4 or more therapies | 3,015 (85.4%) | 1,646 (70.1%) | <0.0001
Patient received all 5 therapies | 1,777 (50.3%) | 866 (35.9%) | <0.0001
Predicted mortality risk, %, median (mean±SD)f | 0.86 (3.2±7.4) | 1.19 (4.8±10.8) | <0.0001
Full code at time of hospital entry (%)g | 3,041 (86.1%) | 2,066 (88.0%) | 0.0379
Admitted to ICU (%)i
  Direct admit | 826 (23.4%) | 567 (24.2%) | 0.5047
  Unplanned transfer | 222 (6.3%) | 133 (5.7%) | 0.3262
  Ever | 1,283 (36.3%) | 1,169 (49.8%) | <0.0001
Length of stay, h, median (mean±SD) | 68.3 (109.4±140.9) | 68.9 (113.8±154.3) | 0.2615
Inpatient mortality (%) | 124 (3.5%) | 153 (6.5%) | <0.0001
30‐day mortality (%) | 200 (5.7%) | 199 (8.5%) | <0.0001
All‐cause rehospitalization within 30 days (%) | 576 (16.3%) | 401 (17.1%) | 0.4398
Cardiac catheterization procedure referral timing
  1 day preadmission to discharge | 2,018 (57.2%) | 1,348 (57.4%) | 0.1638
  2 days preadmission or earlier | 97 (2.8%) | 87 (3.7%) |
  After discharge | 149 (4.2%) | 104 (4.4%) |
  No referral | 1,267 (35.9%) | 809 (34.5%) |

Table 2 shows the results of a logistic regression model in which the dependent variable was inpatient mortality and either the 5 evidence‐based therapies or the total number of evidence‐based therapies are the predictors. β‐blocker, statin, and ACE inhibitor or ARB therapies all had a protective effect on mortality, with odds ratios of 0.48 (95% confidence interval [CI]: 0.36‐0.64), 0.63 (95% CI: 0.45‐0.89), and 0.40 (95% CI: 0.30‐0.53), respectively. An increased number of therapies also had a beneficial effect on inpatient mortality: patients receiving 3 of the evidence‐based therapies had an adjusted odds ratio (AOR) of 0.49 (95% CI: 0.33‐0.73), 4 therapies an AOR of 0.29 (95% CI: 0.20‐0.42), and all 5 therapies an AOR of 0.17 (95% CI: 0.11‐0.25).

Logistic Regression Model for Inpatient Mortality to Estimate the Effect of Evidence‐Based Therapies
 | Multiple Therapies Effect | Individual Therapies Effect
Outcome | Death | Death
Number of outcomes | 277 | 277
 | AORa | 95% CIb | AORa | 95% CIb
  • NOTE: Abbreviations: ACE = angiotensin converting enzyme; ARB = angiotensin receptor blockers.

  • Adjusted odds ratio.

  • 95% confidence interval.

  • ST‐segment elevation myocardial infarction present.

  • See text and preceding table for details on COmorbidity Point Score, version 2 and Laboratory Acute Physiology Score, version 2.

  • Emergency department length of stay.

  • See text for details on how care directives were categorized.

Age in years
  18–39 | Ref | | Ref |
  40–64 | 1.02 | (0.14–7.73) | 1.01 | (0.13–7.66)
  65–84 | 4.05 | (0.55–29.72) | 3.89 | (0.53–28.66)
  85+ | 4.99 | (0.67–37.13) | 4.80 | (0.64–35.84)
Sex
  Female | Ref | | Ref |
  Male | 1.05 | (0.81–1.37) | 1.07 | (0.82–1.39)
STEMIc
  Absent | Ref | | Ref |
  Present | 4.00 | (2.75–5.81) | 3.86 | (2.64–5.63)
Troponin I
  ≤0.1 ng/mL | Ref | | Ref |
  >0.1 ng/mL | 1.01 | (0.72–1.42) | 1.02 | (0.73–1.43)
COPS2d (AOR per 10 points) | 1.05 | (1.01–1.08) | 1.04 | (1.01–1.08)
LAPS2d (AOR per 10 points) | 1.09 | (1.06–1.11) | 1.09 | (1.06–1.11)
ED LOSe (hours)
  <6 | Ref | | Ref |
  6–7 | 0.74 | (0.53–1.03) | 0.76 | (0.54–1.06)
  ≥12 | 0.82 | (0.39–1.74) | 0.83 | (0.39–1.78)
Code statusf
  Full code | Ref | | Ref |
  Not full code | 1.08 | (0.78–1.49) | 1.09 | (0.79–1.51)
Cardiac procedure referral
  None during stay | Ref | | Ref |
  1 day preadmission until discharge | 0.40 | (0.29–0.54) | 0.39 | (0.28–0.53)
Number of therapies received
  2 or less | Ref | | |
  3 | 0.49 | (0.33–0.73) | |
  4 | 0.29 | (0.20–0.42) | |
  5 | 0.17 | (0.11–0.25) | |
Aspirin therapy | | | 0.80 | (0.49–1.32)
Anticoagulation therapy | | | 0.86 | (0.64–1.16)
β‐blocker therapy | | | 0.48 | (0.36–0.64)
Statin therapy | | | 0.63 | (0.45–0.89)
ACE inhibitors or ARBs | | | 0.40 | (0.30–0.53)
C statistic | 0.814 | | 0.822 |
Hosmer‐Lemeshow P value | 0.509 | | 0.934 |

Table 3 shows that the use of the AMI‐OS is protective, with an AOR of 0.59 and a 95% CI of 0.45‐0.76. Table 3 also shows that the most potent predictors were comorbidity burden (AOR: 1.07, 95% CI: 1.03‐1.10 per 10 COPS2 points), severity of illness (AOR: 1.09, 95% CI: 1.07‐1.12 per 10 LAPS2 points), STEMI (AOR: 3.86, 95% CI: 2.68‐5.58), and timing of cardiac catheterization referral occurring immediately prior to or during the admission (AOR: 0.37, 95% CI: 0.27‐0.51). The statistical significance of the AMI‐OS effect disappears when both AMI‐OS and the individual therapies are included in the same model (see Supporting Appendices 1–5 in the online version of this article).

Logistic Regression Model for Inpatient Mortality to Estimate the Effect of Acute Myocardial Infarction Order Set
Outcome | Death
Number of outcomes | 277
 | AORa | 95% CIb
  • Adjusted odds ratio.

  • 95% confidence interval.

  • ST‐segment elevation myocardial infarction present.

  • See text and preceding table for details on COmorbidity Point Score, version 2 and Laboratory Acute Physiology Score, version 2.

  • Emergency department length of stay.

  • See text for details on how care directives were categorized.

  • See text for details on the order set.

Age in years
  18–39 | Ref |
  40–64 | 1.16 | (0.15–8.78)
  65–84 | 4.67 | (0.63–34.46)
  85+ | 5.45 | (0.73–40.86)
Sex
  Female | Ref |
  Male | 1.05 | (0.81–1.36)
STEMIc
  Absent | Ref |
  Present | 3.86 | (2.68–5.58)
Troponin I
  ≤0.1 ng/mL | Ref |
  >0.1 ng/mL | 1.16 | (0.83–1.62)
COPS2d (AOR per 10 points) | 1.07 | (1.03–1.10)
LAPS2d (AOR per 10 points) | 1.09 | (1.07–1.12)
ED LOSe (hours)
  <6 | Ref |
  6–7 | 0.72 | (0.52–1.00)
  ≥12 | 0.70 | (0.33–1.48)
Code statusf
  Full code | Ref |
  Not full code | 1.22 | (0.89–1.68)
Cardiac procedure referral
  None during stay | Ref |
  1 day preadmission until discharge | 0.37 | (0.27–0.51)
Order set employedg
  No | Ref |
  Yes | 0.59 | (0.45–0.76)
C statistic | 0.792 |
Hosmer‐Lemeshow P value | 0.273 |

Table 4 shows separately the average treatment effect (ATE) and average treatment effect for the treated (ATT) of AMI‐OS and of increasing number of therapies on other outcomes (30‐day mortality, LOS, and readmission). Both the ATE and ATT show that use of the AMI‐OS was significantly protective with respect to mortality and total hospital LOS but not significant with respect to readmission. The protective effect on mortality strengthened with increasing number of therapies. For example, patients who received all 5 therapies had an average treatment effect on inpatient mortality of 0.23 (95% CI: 0.15‐0.35) compared to 0.64 (95% CI: 0.43‐0.96) for 3 therapies, almost a 3‐fold difference. The effects of increasing number of therapies were not significant for LOS or readmission. A sensitivity analysis in which the 535 STEMI patients were removed showed essentially the same results, so it is not reported here.

Adjusted Odds Ratio (95% CI) or Mean Length‐of‐Stay Ratio (95% CI) in Study Patients
Outcome | Order Seta | 3 Therapiesb | 4 Therapiesb | 5 Therapiesb
  • NOTE: Abbreviations: CI, confidence interval; LOS, length of stay.

  • Refers to comparison in which the reference group consists of patients who were not treated using the acute myocardial infarction order set.

  • Refers to comparison in which the reference group consists of patients who received 2 or less of the 5 recommended therapies.

  • See text for description of average treatment effect methodology.

  • See text for description of average treatment effect on the treated and matched pair adjustment methodology.

  • See text for details on how we modeled LOS.

Average treatment effectc
  Inpatient mortality | 0.67 (0.52–0.86) | 0.64 (0.43–0.96) | 0.37 (0.25–0.54) | 0.23 (0.15–0.35)
  30‐day mortality | 0.77 (0.62–0.96) | 0.68 (0.48–0.98) | 0.34 (0.24–0.48) | 0.26 (0.18–0.37)
  Readmission | 1.03 (0.90–1.19) | 1.20 (0.87–1.66) | 1.19 (0.88–1.60) | 1.30 (0.96–1.76)
  LOS, ratio of the geometric means | 0.91 (0.87–0.95) | 1.16 (1.03–1.30) | 1.17 (1.05–1.30) | 1.12 (1.00–1.24)
Average treatment effect on the treatedd
  Inpatient mortality | 0.69 (0.52–0.92) | 0.35 (0.13–0.93) | 0.17 (0.07–0.43) | 0.08 (0.03–0.20)
  30‐day mortality | 0.84 (0.66–1.06) | 0.35 (0.15–0.79) | 0.17 (0.07–0.37) | 0.09 (0.04–0.20)
  Readmission | 1.02 (0.87–1.20) | 1.39 (0.85–2.26) | 1.36 (0.88–2.12) | 1.23 (0.80–1.89)
  LOS, ratio of the geometric meanse | 0.92 (0.87–0.97) | 1.18 (1.02–1.37) | 1.16 (1.01–1.33) | 1.04 (0.91–1.19)

To further elucidate possible reasons why physicians did not use the AMI‐OS, the lead author reviewed 105 randomly selected records where the AMI‐OS was not used, 5 records from each of the 21 study hospitals. This review found that in 36% of patients, the AMI‐OS was not used because emergent catheterization or transfer to a facility with percutaneous coronary intervention capability occurred. Presence of other significant medical conditions, including critical illness, was the reason in 17% of these cases, patient or family refusal of treatments in 8%, issues around end‐of‐life care in 3%, and specific medical contraindications in 1%. In the remaining 34%, no reason for not using the AMI‐OS could be identified.

DISCUSSION

We evaluated the use of an evidence‐based electronic AMI‐OS embedded in a comprehensive EMR and found that it was beneficial. Its use was associated with increased adherence to evidence‐based therapies, which in turn were associated with improved outcomes. Using data from a large cohort of hospitalized AMI patients in 21 community hospitals, we were able to use risk adjustment that included physiologic illness severity to adjust for baseline mortality risk. Patients in whom the AMI‐OS was employed tended to be at lower risk; nonetheless, after controlling for confounding variables and adjusting for bias using propensity scores, the AMI‐OS was associated with increased use of evidence‐based therapies and decreased mortality. Most importantly, it appears that the benefits of the OS were not just due to increased receipt of individual recommended therapies, but to increased concurrent receipt of multiple recommended therapies.

Modern EMRs have great potential for significant improvements in the quality, efficiency, and safety of care provided,[36] and our study highlights this potential. However, a number of important limitations to our study must be considered. Although we had access to a very rich dataset, we could not control for all possible confounders, and our risk adjustment cannot match the level of information available to clinicians. In particular, the measurements available to us with respect to cardiac risk are limited. Thus, we have to recognize that the strength of our findings does not approximate that of a randomized trial, and one would expect that the magnitude of the beneficial association would fall under more controlled conditions. Resource limitations also did not permit us to gather more time course data (eg, sequential measurements of patient instability, cardiac damage, or use of recommended therapies), which could provide a better delineation of differences in both processes and outcomes.

Limitations also exist to the generalizability of the use of order sets in other settings that go beyond the availability of a comprehensive EMR. Our study population was cared for in a setting with an unusually high level of integration.[1] For example, KPNC has an elaborate administrative infrastructure for training in the use of the EMR as well as ensuring that order sets are not just evidence‐based, but that they are perceived by clinicians to be of significant value. This infrastructure, established to ensure physician buy‐in, may not be easy to replicate in smaller or less‐integrated settings. Thus, it is conceivable that factors other than the degree of support during the EMR deployments can affect rates of order set use.

Although our use of counterfactual methods included illness severity (LAPS2) and longitudinal comorbidity burden (COPS2), which are not yet available outside highly integrated delivery services employing comprehensive EMRs, it is possible they are insufficient. We cannot exclude the possibility that other biases or patient characteristics were present that led clinicians to preferentially employ the electronic order set in some patients but not in others. One could also argue that future studies should consider using overall adherence to recommended AMI treatment guidelines as a risk adjustment tool that would permit one to analyze what other factors may be playing a role in residual differences in patient outcomes. Last, one could object to our inclusion of STEMI patients; however, this was not a study on optimum treatment strategies for STEMI patients. Rather, it was a study on the impact on AMI outcomes of a specific component of computerized order entry outside the research setting.

Despite these limitations, we believe that our findings provide strong support for the continued use of electronic evidence‐based order sets in the inpatient medical setting. Once the initial implementation of a comprehensive EMR has occurred, deployment of these electronic order sets is a relatively inexpensive but effective method to foster compliance with evidence‐based care.

Future research in healthcare information technology can take a number of directions. One important area, of course, revolves around ways to promote enhanced physician adoption of EMRs. Our audit of records where the AMI‐OS was not used found that specific reasons for not using the order set (eg, treatment refusals, emergent intervention) were present in two‐thirds of the cases. This suggests that future analyses of adherence involving EMRs and CPOE implementation should take a more nuanced look at how order entry is actually enabled. It may be that understanding how order sets affect care enhances clinician acceptance and thus could serve as an incentive to EMR adoption. However, once an EMR is adopted, a need exists to continue evaluations such as this because, ultimately, the gold standard should be improved patient care processes and better outcomes for patients.

Acknowledgement

The authors give special thanks to Dr. Brian Hoberman for sponsoring this work, Dr. Alan S. Go for providing assistance with obtaining copies of electrocardiograms for review, Drs. Tracy Lieu and Vincent Liu for reviewing the manuscript, and Ms. Rachel Lesser for formatting the manuscript.

Disclosures: This work was supported by The Permanente Medical Group, Inc. and Kaiser Foundation Hospitals, Inc. The algorithms used to extract data and perform risk adjustment were developed with funding from the Sidney Garfield Memorial Fund (Early Detection of Impending Physiologic Deterioration in Hospitalized Patients, 1159518), the Agency for Healthcare Quality and Research (Rapid Clinical Snapshots From the EMR Among Pneumonia Patients, 1R01HS018480‐01), and the Gordon and Betty Moore Foundation (Early Detection of Impending Physiologic Deterioration: Electronic Early Warning System).

References
  1. Yeh RW, Sidney S, Chandra M, Sorel M, Selby JV, Go AS. Population trends in the incidence and outcomes of acute myocardial infarction. N Engl J Med. 2010;362(23):2155–2165.
  2. Rosamond WD, Chambless LE, Heiss G, et al. Twenty‐two‐year trends in incidence of myocardial infarction, coronary heart disease mortality, and case fatality in 4 US communities, 1987–2008. Circulation. 2012;125(15):1848–1857.
  3. Roger VL, Go AS, Lloyd‐Jones DM, et al. Heart disease and stroke statistics—2012 update: a report from the American Heart Association. Circulation. 2012;125(1):e2–e220.
  4. Anderson JL, Adams CD, Antman EM, et al. ACC/AHA 2007 guidelines for the management of patients with unstable angina/non‐ST‐elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Revise the 2002 Guidelines for the Management of Patients With Unstable Angina/Non‐ST‐Elevation Myocardial Infarction) developed in collaboration with the American College of Emergency Physicians, the Society for Cardiovascular Angiography and Interventions, and the Society of Thoracic Surgeons endorsed by the American Association of Cardiovascular and Pulmonary Rehabilitation and the Society for Academic Emergency Medicine. J Am Coll Cardiol. 2007;50(7):e1–e157.
  5. Antman EM, Hand M, Armstrong PW, et al. 2007 focused update of the ACC/AHA 2004 guidelines for the management of patients with ST‐elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2008;51(2):210–247.
  6. Jernberg T, Johanson P, Held C, Svennblad B, Lindback J, Wallentin L. Association between adoption of evidence‐based treatment and survival for patients with ST‐elevation myocardial infarction. JAMA. 2011;305(16):1677–1684.
  7. Puymirat E, Simon T, Steg PG, et al. Association of changes in clinical characteristics and management with improvement in survival among patients with ST‐elevation myocardial infarction. JAMA. 2012;308(10):998–1006.
  8. Motivala AA, Cannon CP, Srinivas VS, et al. Changes in myocardial infarction guideline adherence as a function of patient risk: an end to paradoxical care? J Am Coll Cardiol. 2011;58(17):1760–1765.
  9. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265–274.
  10. Desai N, Chen AN, et al. Challenges in the treatment of NSTEMI patients at high risk for both ischemic and bleeding events: insights from the ACTION Registry‐GWTG. J Am Coll Cardiol. 2011;57:E913.
  11. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002–2004. N Engl J Med. 2005;353(3):255–264.
  12. Eagle KA, Montoye K, Riba AL. Guideline‐based standardized care is associated with substantially lower mortality in Medicare patients with acute myocardial infarction. J Am Coll Cardiol. 2005;46(7):1242–1248.
  13. Ballard DJ, Ogola G, Fleming NS, et al. Impact of a standardized heart failure order set on mortality, readmission, and quality and costs of care. Int J Qual Health Care. 2010;22(6):437–444.
  14. Selby JV. Linking automated databases for research in managed care settings. Ann Intern Med. 1997;127(8 pt 2):719–724.
  15. Escobar G, Greene J, Scheirer P, Gardner M, Draper D, Kipnis P. Risk adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232–239.
  16. Liu V, Kipnis P, Gould MK, Escobar GJ. Length of stay predictions: improvements through the use of automated laboratory and comorbidity variables. Med Care. 2010;48(8):739–744.
  17. Escobar GJ, Greene JD, Gardner MN, Marelich GP, Quick B, Kipnis P. Intra‐hospital transfers to a higher level of care: contribution to total hospital and intensive care unit (ICU) mortality and length of stay (LOS). J Hosp Med. 2011;6(2):74–80.
  18. Liu V, Kipnis P, Rizk NW, Escobar GJ. Adverse outcomes associated with delayed intensive care unit transfers in an integrated healthcare system. J Hosp Med. 2012;7(3):224–230.
  19. International Classification of Diseases, 9th Revision‐Clinical Modification. 4th ed. 3 Vols. Los Angeles, CA: Practice Management Information Corporation; 2006.
  20. Go AS, Hylek EM, Chang Y, et al. Anticoagulation therapy for stroke prevention in atrial fibrillation: how well do randomized trials translate into clinical practice? JAMA. 2003;290(20):2685–2692.
  21. Escobar GJ, LaGuardia J, Turk BJ, Ragins A, Kipnis P, Draper D. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med. 2012;7(5):388–395.
  22. Escobar GJ, Gardner M, Greene JG, David D, Kipnis P. Risk‐adjusting hospital mortality using a comprehensive electronic record in an integrated healthcare delivery system. Med Care. 2013;51(5):446453.
  23. Kipnis P, Escobar GJ, Draper D. Effect of choice of estimation method on inter‐hospital mortality rate comparisons. Med Care. 2010;48(5):456485.
  24. Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63(7):798803.
  25. Wong J, Taljaard M, Forster AJ, Escobar GJ, Walraven C. Derivation and validation of a model to predict daily risk of death in hospital. Med Care. 2011;49(8):734743.
  26. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD‐9‐CM administrative databases. J Clin Epidemiol. 1992;45(6):613619.
  27. MacKinnon DP. Introduction to Statistical Mediation Analysis. New York, NY: Lawrence Erlbaum Associates; 2008.
  28. Imbens GW. Nonparametric estimation of average treatment effects under exogenity: a review. Rev Econ Stat. 2004;86:25.
  29. Rosenbaum PR. Design of Observational Studies. New York, NY: Springer Science+Business Media; 2010.
  30. Austin PC. Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity‐score matched samples. Stat Med. 2009;28:24.
  31. Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. J Am Stat Assoc. 1994(89):846866.
  32. Lunceford JK, Davidian M. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Stat Med. 2004;23(19):29372960.
  33. Rosenbaum PR. Discussing hidden bias in observational studies. Ann Intern Med. 1991;115(11):901905.
  34. D'Agostino RB. Propensity score methods for bias reduction in the comparison of a treatment to a non‐randomized control group. Stat Med. 1998;17(19):22652281.
  35. Feng WW, Jun Y, Xu R. A method/macro based on propensity score and Mahalanobis distance to reduce bias in treatment comparison in observational study, 2005. www.lexjansen.com/pharmasug/2006/publichealthresearch/pr05.pdf. Accessed on September 14, 2013.
  36. Ettinger WH. Using health information technology to improve health care. Arch Intern Med. 2012;172(22):17281730.
Journal of Hospital Medicine. 9(3):155‐161.

Although the prevalence of coronary heart disease and death from acute myocardial infarction (AMI) have declined steadily, about 935,000 heart attacks still occur annually in the United States, with approximately one‐third of these being fatal.[1, 2, 3] Studies have demonstrated decreased 30‐day and longer‐term mortality in AMI patients who receive evidence‐based treatment, including aspirin, β‐blockers, angiotensin‐converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), anticoagulation therapy, and statins.[4, 5, 6, 7] Despite clinical practice guidelines (CPGs) outlining evidence‐based care and considerable efforts to implement processes that improve patient outcomes, delivery of effective therapy remains suboptimal.[8] For example, the Hospital Quality Alliance Program[9] found that in AMI patients, use of aspirin on admission was only 81% to 92%, β‐blocker on admission 75% to 85%, and ACE inhibitors for left ventricular dysfunction 71% to 74%.

Efforts to increase adherence to CPGs and improve patient outcomes in AMI have resulted in variable degrees of success. They include promotion of CPGs,[4, 5, 6, 7] physician education with feedback, report cards, care paths, registries,[10] Joint Commission standardized measures,[11] and paper checklists or order sets (OS).[12, 13]

In this report, we describe the association between use of an evidence‐based, electronic OS for AMI (AMI‐OS) and better adherence to CPGs. This AMI‐OS was implemented in the inpatient electronic medical records (EMRs) of a large integrated healthcare delivery system, Kaiser Permanente Northern California (KPNC). The purpose of our investigation was to determine (1) whether use of the AMI‐OS was associated with improved AMI processes and patient outcomes, and (2) whether these associations persisted after risk adjustment using a comprehensive severity of illness scoring system.

MATERIALS AND METHODS

This project was approved by the KPNC institutional review board.

Under a mutual exclusivity arrangement, salaried physicians of The Permanente Medical Group, Inc., care for 3.4 million Kaiser Foundation Health Plan, Inc. members at facilities owned by Kaiser Foundation Hospitals, Inc. All KPNC facilities employ the same information systems with a common medical record number and can track care covered by the plan but delivered elsewhere.[14] Our setting consisted of 21 KPNC hospitals described in previous reports,[15, 16, 17, 18] using the same commercially available EMR system that includes computerized physician order entry (CPOE). Deployment of the customized inpatient Epic EMR (www.epicsystems.com), known internally as KP HealthConnect (KPHC), began in 2006 and was completed in 2010.

In this EMR's CPOE, physicians have options to select individual orders (a la carte) or they can utilize an OS, which is a collection of the most appropriate orders associated with specific diagnoses, procedures, or treatments. The evidence‐based AMI‐OS studied in this project was developed by a multidisciplinary team (for detailed components see Supporting Appendices 1‐5 in the online version of this article).

Our study focused on the first set of hospital admission orders for patients with AMI. The study sample consisted of patients meeting these criteria: (1) age ≥18 years at admission; (2) admitted to a KPNC hospital for an overnight stay between September 28, 2008 and December 31, 2010; (3) principal diagnosis was AMI (International Classification of Diseases, 9th Revision [ICD‐9][19] codes 410.00, 01, 10, 11, 20, 21, 30, 31, 40, 41, 50, 51, 60, 61, 70, 71, 80, 90, and 91); and (4) KPHC had been operational at the hospital for at least 3 months (for assembly descriptions see Supporting Appendices 1‐5 in the online version of this article). At the study hospitals, troponin I was measured using the Beckman Access AccuTnI assay (Beckman Coulter, Inc., Brea, CA), whose upper reference limit (99th percentile) is 0.04 ng/mL. We excluded patients initially hospitalized for AMI at a non‐KPNC site and transferred into a study hospital.
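As a sketch, the inclusion and exclusion logic above can be expressed as a simple filter. The function, field names, and go‐live dates below are hypothetical illustrations, not KPNC's actual extraction code; only the ICD‐9 code list, the study window, and the 3‐month KPHC maturity rule come from the study description.

```python
from datetime import datetime, timedelta

# Principal-diagnosis ICD-9 codes for AMI listed in the study.
AMI_ICD9_CODES = {
    "410.00", "410.01", "410.10", "410.11", "410.20", "410.21",
    "410.30", "410.31", "410.40", "410.41", "410.50", "410.51",
    "410.60", "410.61", "410.70", "410.71", "410.80", "410.90", "410.91",
}

# Hypothetical KP HealthConnect go-live date for an example facility.
KPHC_GO_LIVE = {"hospital_a": datetime(2008, 6, 1)}

STUDY_START = datetime(2008, 9, 28)
STUDY_END = datetime(2010, 12, 31)

def eligible(age, principal_dx, admit_date, hospital, transferred_in):
    """Apply the four inclusion criteria plus the outside-transfer exclusion."""
    # KPHC must have been operational for at least 3 months at admission.
    emr_mature = admit_date - KPHC_GO_LIVE[hospital] >= timedelta(days=90)
    return (
        age >= 18
        and principal_dx in AMI_ICD9_CODES
        and STUDY_START <= admit_date <= STUDY_END
        and emr_mature
        and not transferred_in
    )
```

A patient admitted in mid‐2009 with code 410.01 passes the filter; a patient transferred in from a non‐KPNC hospital does not.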

The data processing methods we employed have been detailed elsewhere.[14, 15, 17, 20, 21, 22] The dependent outcome variables were total hospital length of stay, inpatient mortality, 30‐day mortality, and all‐cause rehospitalization within 30 days of discharge. Linked state mortality data were unavailable for the entire study period, so we ascertained 30‐day mortality based on the combination of KPNC patient demographic data and publicly available Social Security Administration decedent files. We ascertained rehospitalization by scanning KPNC hospitalization databases, which also track out‐of‐plan use.
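The 30‐day mortality ascertainment described above amounts to a record linkage: joining hospitalization records to a death‐date lookup built from demographic data and Social Security Administration decedent files. The sketch below is a minimal hypothetical version (real linkage must handle name and date‐of‐birth matching, not a clean key), and it counts death dates from admission, which is one plausible convention the source does not spell out.

```python
from datetime import date, timedelta

# Hypothetical death-date lookup keyed by medical record number, assembled by
# linking patient demographics to Social Security Administration decedent files.
DEATH_DATE = {
    "mrn001": date(2010, 3, 10),
    "mrn003": date(2010, 7, 1),
}

def died_within_30_days(mrn, admit_date):
    """Return True if a linked death date falls within 30 days of admission."""
    death = DEATH_DATE.get(mrn)
    if death is None:  # no decedent-file match: presumed alive at 30 days
        return False
    return timedelta(days=0) <= (death - admit_date) <= timedelta(days=30)
```
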

The dependent process variables were use of aspirin within 24 hours of admission, β‐blockers, anticoagulation, ACE inhibitors or ARBs, and statins. The primary independent variable of interest was whether or not the admitting physician employed the AMI‐OS when admission orders were entered. Consequently, this variable is dichotomous (AMI‐OS vs a la carte).

We controlled for acute illness severity and chronic illness burden using a recent modification[22] of an externally validated risk‐adjustment system applicable to all hospitalized patients.[15, 16, 23, 24, 25] Our methodology included vital signs, neurological status checks, and laboratory test results obtained in the 72 hours preceding hospital admission; comorbidities were captured longitudinally using data from the year preceding hospitalization (for comparison purposes, we also assigned a Charlson Comorbidity Index score[26]).

End‐of‐life care directives are mandatory on admission at KPNC hospitals. Physicians have 4 options: full code, partial code, do not resuscitate, and comfort care only. Because of small numbers in some categories, we collapsed these 4 categories into full code and not full code. Because patients' care directives may change, we elected to capture the care directive in effect when a patient first entered a hospital unit other than the emergency department (ED).

Two authors (M.B., P.C.L.), one of whom is a board‐certified cardiologist, reviewed all admission electrocardiograms and made a consensus determination as to whether or not criteria for ST‐segment elevation myocardial infarction (STEMI) were present (ie, new ST‐segment elevation or left bundle branch block); we also reviewed the records of all patients with missing troponin I data to confirm the AMI diagnosis.

Statistical Methods

We performed unadjusted comparisons between AMI‐OS and non‐AMI‐OS patients using the t test or the χ2 test, as appropriate.

We hypothesized that the AMI‐OS plays a mediating role on patient outcomes through its effect on adherence to recommended treatment. We evaluated this hypothesis for inpatient mortality by first fitting a multivariable logistic regression model with inpatient mortality as the outcome and either the 5 individual evidence‐based therapies or the total number of evidence‐based therapies received (0‐2, 3, 4, or 5) as the independent variables, controlling for age, gender, presence of STEMI, troponin I, comorbidities, illness severity, ED length of stay (LOS), care directive status, and timing of cardiac catheterization referral as covariates, to confirm the protective effect of these therapies on mortality. We then used the same model to estimate the effect of the AMI‐OS on inpatient mortality, substituting the AMI‐OS for the therapies as the independent variable and retaining the same covariates. Last, we included both the therapies and the AMI‐OS in the model to evaluate their combined effects.[27]
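The logic of these nested models can be illustrated on simulated data: if the order set affects mortality only through the therapies it prompts, its coefficient should shrink toward zero once the mediator enters the model. This is a self‐contained sketch with invented effect sizes, not the study's actual model (which also includes the clinical covariates listed above).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

# Simulated causal chain: order-set use -> more therapies -> lower mortality.
order_set = rng.binomial(1, 0.6, n)
n_therapies = np.clip(rng.poisson(2 + 2 * order_set), 0, 5)   # mediator
p_death = 1 / (1 + np.exp(-(-2.0 - 0.5 * n_therapies)))       # outcome depends on mediator only
death = rng.binomial(1, p_death)

def fit_logit(X, y, iters=25):
    """Logistic regression by Newton-Raphson; intercept added internally."""
    X = np.column_stack([np.ones(len(y)), X])
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None])   # observed information matrix
        b += np.linalg.solve(H, X.T @ (y - p))   # Newton step toward the MLE
    return b

b_total = fit_logit(order_set[:, None], death)                         # order set alone
b_joint = fit_logit(np.column_stack([order_set, n_therapies]), death)  # order set + mediator

# b_total[1] is clearly protective (negative); b_joint[1] attenuates toward
# zero once the mediator is included, while the therapy coefficient b_joint[2]
# remains protective -- the signature of mediation.
```
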

We used 2 different methods to estimate the effects of the AMI‐OS and of the number of therapies provided on the outcomes while adjusting for observed baseline differences between the 2 groups of patients: propensity score matching, which estimates the average treatment effect for the treated,[28, 29] and inverse probability of treatment weighting, which is used to estimate the average treatment effect.[30, 31, 32] The propensity score was defined as the probability of receiving the intervention for a patient with specific predictive factors.[33, 34] We computed a propensity score for each patient by using logistic regression, with the dependent variable being receipt of the AMI‐OS and the independent variables being the covariates used for the multivariable logistic regression as well as the ICD‐9 code for final diagnosis. We calculated the Mahalanobis distance between patients who received the AMI‐OS (cases) and patients who did not receive the AMI‐OS (controls) using the same set of covariates. We matched each case to a single control within the same facility based on the nearest available Mahalanobis metric matching within calipers defined as a maximum width of 0.2 standard deviations of the logit of the estimated propensity score.[29, 35] We estimated the odds ratios for the binary dependent variables based on a conditional logistic regression model to account for the matched‐pairs design.[28] We used a generalized linear model with the log‐transformed LOS as the outcome to estimate the ratio of the LOS geometric mean of the cases to the controls. We calculated the relative risk for patients receiving the AMI‐OS via the inverse probability weighting method by first defining a weight for each patient: we assigned a weight of 1/ps_i to patients who received the AMI‐OS and a weight of 1/(1 − ps_i) to patients who did not receive the AMI‐OS, where ps_i denotes the propensity score for patient i.
We used a logistic regression model for the binary dependent variables with the same set of covariates described above to estimate the adjusted odds ratios while weighting each observation by its corresponding weight. Last, we used a weighted generalized linear model to estimate the AMI‐OS effect on the log‐transformed LOS.
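A minimal numeric illustration of the inverse probability weighting step: patients who received the intervention get weight 1/ps_i and the rest get 1/(1 − ps_i), which rebalances a confounder that influenced who was treated. The single binary "severity" confounder and all effect sizes here are invented for illustration; the study's actual propensity model used the full covariate set described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50000

# One binary confounder: sicker patients are less likely to receive the
# intervention but more likely to die, so the naive comparison exaggerates
# the intervention's benefit.
severe = rng.binomial(1, 0.4, n)
treated = rng.binomial(1, np.where(severe == 1, 0.3, 0.7))
p_death = 0.02 * (1 + 2 * severe) * np.where(treated == 1, 0.6, 1.0)  # true RR = 0.6
death = rng.binomial(1, p_death)

# Propensity score: P(treated | severity), estimated within each stratum.
ps = np.where(severe == 1, treated[severe == 1].mean(), treated[severe == 0].mean())

# IPTW weights: 1/ps_i for the treated, 1/(1 - ps_i) for the untreated.
w = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

risk_treated = np.average(death[treated == 1], weights=w[treated == 1])
risk_control = np.average(death[treated == 0], weights=w[treated == 0])
iptw_rr = risk_treated / risk_control          # weighted estimate, near the true 0.6

naive_rr = death[treated == 1].mean() / death[treated == 0].mean()  # confounded
```

After weighting, the estimated relative risk moves from the confounded naive value back toward the true 0.6.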

RESULTS

Table 1 summarizes the characteristics of the 5,879 patients. It shows that AMI‐OS patients were more likely to receive evidence‐based therapies for AMI (aspirin, β‐blockers, ACE inhibitors or ARBs, anticoagulation, and statins) and had a 46% lower inpatient mortality rate (3.51% vs 6.52%) and a 33% lower 30‐day mortality rate (5.66% vs 8.48%). AMI‐OS patients were also found to be at lower risk for an adverse outcome than non‐AMI‐OS patients. The AMI‐OS patients had lower peak troponin I values, severity of illness (lower Laboratory‐Based Acute Physiology Score, version 2 [LAPS2] scores), comorbidity burdens (lower Comorbidity Point Score, version 2 [COPS2] and Charlson scores), and global predicted mortality risk. AMI‐OS patients were also less likely to have required intensive care. AMI‐OS patients were at higher risk of death than non‐AMI‐OS patients with respect to only 1 variable (being full code at the time of admission), but although this difference was statistically significant, it was of minor clinical impact (86% vs 88%).

Description of Study Cohort

| Characteristic | AMI Order Set, N=3,531b | A La Carte Orders, N=2,348b | P Valuea |
| --- | --- | --- | --- |
| Age, y, median (mean±SD) | 70 (69.4±13.8) | 70 (69.2±13.8) | 0.5603 |
| Age >65 y (%) | 2,134 (60.4%) | 1,415 (60.3%) | 0.8949 |
| Sex (% male) | 2,202 (62.4%) | 1,451 (61.8%) | 0.6620 |
| STEMI (% with)c | 166 (4.7%) | 369 (15.7%) | <0.0001 |
| Troponin I (% missing) | 111 (3.1%) | 151 (6.4%) | <0.0001 |
| Troponin I, median (mean±SD) | 0.57 (3.0±8.2) | 0.27 (2.5±8.9) | 0.0651 |
| Charlson score, median (mean±SD)d | 2.0 (2.5±1.5) | 2.0 (2.7±1.6) | <0.0001 |
| COPS2, median (mean±SD)e | 14.0 (29.8±31.7) | 17.0 (34.3±34.4) | <0.0001 |
| LAPS2, median (mean±SD)e | 0.0 (35.6±43.5) | 27.0 (40.9±48.1) | <0.0001 |
| Length of stay in ED, h, median (mean±SD) | 5.7 (5.9±3.0) | 5.7 (5.4±3.1) | <0.0001 |
| Patients receiving aspirin within 24 hoursf | 3,470 (98.3%) | 2,202 (93.8%) | <0.0001 |
| Patients receiving anticoagulation therapyf | 2,886 (81.7%) | 1,846 (78.6%) | 0.0032 |
| Patients receiving β‐blockersf | 3,196 (90.5%) | 1,926 (82.0%) | <0.0001 |
| Patients receiving ACE inhibitors or ARBsf | 2,395 (67.8%) | 1,244 (53.0%) | <0.0001 |
| Patients receiving statinsf | 3,337 (94.5%) | 1,975 (84.1%) | <0.0001 |
| Patient received 1 or more therapies | 3,531 (100.0%) | 2,330 (99.2%) | <0.0001 |
| Patient received 2 or more therapies | 3,521 (99.7%) | 2,266 (96.5%) | <0.0001 |
| Patient received 3 or more therapies | 3,440 (97.4%) | 2,085 (88.8%) | <0.0001 |
| Patient received 4 or more therapies | 3,015 (85.4%) | 1,646 (70.1%) | <0.0001 |
| Patient received all 5 therapies | 1,777 (50.3%) | 866 (35.9%) | <0.0001 |
| Predicted mortality risk, %, median (mean±SD)f | 0.86 (3.2±7.4) | 1.19 (4.8±10.8) | <0.0001 |
| Full code at time of hospital entry (%)g | 3,041 (86.1%) | 2,066 (88.0%) | 0.0379 |
| Admitted to ICU (%)i |  |  |  |
|   Direct admit | 826 (23.4%) | 567 (24.2%) | 0.5047 |
|   Unplanned transfer | 222 (6.3%) | 133 (5.7%) | 0.3262 |
|   Ever | 1,283 (36.3%) | 1,169 (49.8%) | <0.0001 |
| Length of stay, h, median (mean±SD) | 68.3 (109.4±140.9) | 68.9 (113.8±154.3) | 0.2615 |
| Inpatient mortality (%) | 124 (3.5%) | 153 (6.5%) | <0.0001 |
| 30‐day mortality (%) | 200 (5.7%) | 199 (8.5%) | <0.0001 |
| All‐cause rehospitalization within 30 days (%) | 576 (16.3%) | 401 (17.1%) | 0.4398 |
| Cardiac catheterization procedure referral timing |  |  | 0.1638 |
|   1 day preadmission to discharge | 2,018 (57.2%) | 1,348 (57.4%) |  |
|   2 days preadmission or earlier | 97 (2.8%) | 87 (3.7%) |  |
|   After discharge | 149 (4.2%) | 104 (4.4%) |  |
|   No referral | 1,267 (35.9%) | 809 (34.5%) |  |

  • NOTE: Abbreviations: ACE, angiotensin‐converting enzyme; AMI, acute myocardial infarction; AMI‐OS, acute myocardial infarction order set; ARBs, angiotensin receptor blockers; COPS2, Comorbidity Point Score, version 2; CPOE, computerized physician order entry; ED, emergency department; ICU, intensive care unit; LAPS2, Laboratory‐based Acute Physiology Score, version 2; SD, standard deviation; STEMI, ST‐segment elevation myocardial infarction.

  • χ2 or t test, as appropriate. See text for further methodological details.

  • AMI‐OS is an evidence‐based electronic checklist that guides physicians to order the most effective therapy by CPOE during the hospital admission process. In contrast, a la carte means that the clinician did not use the AMI‐OS, but rather entered individual orders via CPOE. See text for further details.

  • STEMI as evident by electrocardiogram. See text for details on ascertainment.

  • See text and reference 31 for details on how this score was assigned.

  • The COPS2 is a longitudinal, diagnosis‐based score assigned monthly that integrates all diagnoses incurred by a patient in the preceding 12 months. It is a continuous variable that can range between a minimum of zero and a theoretical maximum of 1,014, although <0.05% of Kaiser Permanente hospitalized patients have a COPS2 exceeding 241, and none have had a COPS2 >306. Increasing values of the COPS2 are associated with increasing mortality. See text and references 20 and 27 for additional details on the COPS2.

  • The LAPS2 integrates results from vital signs, neurological status checks, and 15 laboratory tests in the 72 hours preceding hospitalization into a single continuous variable. Increasing degrees of physiologic derangement are reflected in a higher LAPS2, which can range between a minimum of zero and a theoretical maximum of 414, although <0.05% of Kaiser Permanente hospitalized patients have a LAPS2 exceeding 227, and none have had a LAPS2 >282. Increasing values of LAPS2 are associated with increasing mortality. See text and references 20 and 27 for additional details on the LAPS2.

  • See text for details of specific therapies and how they were ascertained using the electronic medical record.

  • Percent mortality risk based on age, sex, diagnosis, COPS2, LAPS2, and care directive using a predictive model described in text and in reference 22.

  • See text for description of how end‐of‐life care directives are captured in the electronic medical record.

  • Direct admit means that the first hospital unit in which a patient stayed was the ICU; transfer refers to those patients transferred to the ICU from another unit in the hospital.

Table 2 shows the results of a logistic regression model in which the dependent variable was inpatient mortality and the independent variables were either the 5 individual evidence‐based therapies or the total number of evidence‐based therapies received. β‐blocker, statin, and ACE inhibitor or ARB therapies each had a protective effect on mortality, with odds ratios of 0.48 (95% confidence interval [CI]: 0.36‐0.64), 0.63 (95% CI: 0.45‐0.89), and 0.40 (95% CI: 0.30‐0.53), respectively. An increased number of therapies also had a beneficial effect on inpatient mortality: patients receiving 3 of the evidence‐based therapies had an adjusted odds ratio (AOR) of 0.49 (95% CI: 0.33‐0.73), those receiving 4 therapies an AOR of 0.29 (95% CI: 0.20‐0.42), and those receiving all 5 therapies an AOR of 0.17 (95% CI: 0.11‐0.25).

Logistic Regression Model for Inpatient Mortality to Estimate the Effect of Evidence‐Based Therapies

Outcome: death; number of outcomes: 277 in each model.

| Variable | Multiple Therapies Effect, AORa (95% CIb) | Individual Therapies Effect, AORa (95% CIb) |
| --- | --- | --- |
| Age 18‐39 y | Ref | Ref |
| Age 40‐64 y | 1.02 (0.14‐7.73) | 1.01 (0.13‐7.66) |
| Age 65‐84 y | 4.05 (0.55‐29.72) | 3.89 (0.53‐28.66) |
| Age 85+ y | 4.99 (0.67‐37.13) | 4.80 (0.64‐35.84) |
| Sex: female | Ref | Ref |
| Sex: male | 1.05 (0.81‐1.37) | 1.07 (0.82‐1.39) |
| STEMIc absent | Ref | Ref |
| STEMIc present | 4.00 (2.75‐5.81) | 3.86 (2.64‐5.63) |
| Troponin I ≤0.1 ng/mL | Ref | Ref |
| Troponin I >0.1 ng/mL | 1.01 (0.72‐1.42) | 1.02 (0.73‐1.43) |
| COPS2d (AOR per 10 points) | 1.05 (1.01‐1.08) | 1.04 (1.01‐1.08) |
| LAPS2d (AOR per 10 points) | 1.09 (1.06‐1.11) | 1.09 (1.06‐1.11) |
| ED LOSe <6 h | Ref | Ref |
| ED LOSe 6‐7 h | 0.74 (0.53‐1.03) | 0.76 (0.54‐1.06) |
| ED LOSe ≥12 h | 0.82 (0.39‐1.74) | 0.83 (0.39‐1.78) |
| Code statusf: full code | Ref | Ref |
| Code statusf: not full code | 1.08 (0.78‐1.49) | 1.09 (0.79‐1.51) |
| Cardiac procedure referral: none during stay | Ref | Ref |
| Cardiac procedure referral: 1 day preadmission until discharge | 0.40 (0.29‐0.54) | 0.39 (0.28‐0.53) |
| Therapies received: 2 or less | Ref |  |
| Therapies received: 3 | 0.49 (0.33‐0.73) |  |
| Therapies received: 4 | 0.29 (0.20‐0.42) |  |
| Therapies received: 5 | 0.17 (0.11‐0.25) |  |
| Aspirin therapy |  | 0.80 (0.49‐1.32) |
| Anticoagulation therapy |  | 0.86 (0.64‐1.16) |
| β‐blocker therapy |  | 0.48 (0.36‐0.64) |
| Statin therapy |  | 0.63 (0.45‐0.89) |
| ACE inhibitors or ARBs |  | 0.40 (0.30‐0.53) |
| C statistic | 0.814 | 0.822 |
| Hosmer‐Lemeshow P value | 0.509 | 0.934 |

  • NOTE: Abbreviations: ACE, angiotensin‐converting enzyme; ARB, angiotensin receptor blocker.

  • Adjusted odds ratio.

  • 95% confidence interval.

  • ST‐segment elevation myocardial infarction present.

  • See text and preceding table for details on Comorbidity Point Score, version 2 and Laboratory Acute Physiology Score, version 2.

  • Emergency department length of stay.

  • See text for details on how care directives were categorized.

Table 3 shows that use of the AMI‐OS was protective, with an AOR of 0.59 (95% CI: 0.45‐0.76). Table 3 also shows that the most potent predictors were comorbidity burden (AOR: 1.07, 95% CI: 1.03‐1.10 per 10 COPS2 points), severity of illness (AOR: 1.09, 95% CI: 1.07‐1.12 per 10 LAPS2 points), STEMI (AOR: 3.86, 95% CI: 2.68‐5.58), and timing of cardiac catheterization referral occurring immediately prior to or during the admission (AOR: 0.37, 95% CI: 0.27‐0.51). The statistical significance of the AMI‐OS effect disappears when both the AMI‐OS and the individual therapies are included in the same model (see Supporting Information, Appendices 1‐5, in the online version of this article).

Logistic Regression Model for Inpatient Mortality to Estimate the Effect of the Acute Myocardial Infarction Order Set

Outcome: death; number of outcomes: 277.

| Variable | AORa (95% CIb) |
| --- | --- |
| Age 18‐39 y | Ref |
| Age 40‐64 y | 1.16 (0.15‐8.78) |
| Age 65‐84 y | 4.67 (0.63‐34.46) |
| Age 85+ y | 5.45 (0.73‐40.86) |
| Sex: female | Ref |
| Sex: male | 1.05 (0.81‐1.36) |
| STEMIc absent | Ref |
| STEMIc present | 3.86 (2.68‐5.58) |
| Troponin I ≤0.1 ng/mL | Ref |
| Troponin I >0.1 ng/mL | 1.16 (0.83‐1.62) |
| COPS2d (AOR per 10 points) | 1.07 (1.03‐1.10) |
| LAPS2d (AOR per 10 points) | 1.09 (1.07‐1.12) |
| ED LOSe <6 h | Ref |
| ED LOSe 6‐7 h | 0.72 (0.52‐1.00) |
| ED LOSe ≥12 h | 0.70 (0.33‐1.48) |
| Code statusf: full code | Ref |
| Code statusf: not full code | 1.22 (0.89‐1.68) |
| Cardiac procedure referral: none during stay | Ref |
| Cardiac procedure referral: 1 day preadmission until discharge | 0.37 (0.27‐0.51) |
| Order set employedg: no | Ref |
| Order set employedg: yes | 0.59 (0.45‐0.76) |
| C statistic | 0.792 |
| Hosmer‐Lemeshow P value | 0.273 |

  • Adjusted odds ratio.

  • 95% confidence interval.

  • ST‐segment elevation myocardial infarction present.

  • See text and preceding table for details on Comorbidity Point Score, version 2 and Laboratory Acute Physiology Score, version 2.

  • Emergency department length of stay.

  • See text for details on how care directives were categorized.

  • See text for details on the order set.

Table 4 shows separately the average treatment effect (ATE) and the average treatment effect for the treated (ATT) of the AMI‐OS and of increasing number of therapies on the outcomes (inpatient mortality, 30‐day mortality, LOS, and readmission). Both the ATE and ATT show that use of the AMI‐OS was significantly protective with respect to mortality and total hospital LOS but not significantly associated with readmission. The protective effect on mortality grew with the number of therapies received. For example, patients who received all 5 therapies had an average treatment effect on inpatient mortality of 0.23 (95% CI: 0.15‐0.35), compared to 0.64 (95% CI: 0.43‐0.96) for 3 therapies, almost a 3‐fold difference. The effects of increasing number of therapies were not significant for LOS or readmission. A sensitivity analysis in which the 535 STEMI patients were removed showed essentially the same results, so it is not reported here.

Adjusted Odds Ratio (95% CI) or Mean Length‐of‐Stay Ratio (95% CI) in Study Patients

| Outcome | Order Seta | 3 Therapiesb | 4 Therapiesb | 5 Therapiesb |
| --- | --- | --- | --- | --- |
| Average treatment effectc |  |  |  |  |
|   Inpatient mortality | 0.67 (0.52‐0.86) | 0.64 (0.43‐0.96) | 0.37 (0.25‐0.54) | 0.23 (0.15‐0.35) |
|   30‐day mortality | 0.77 (0.62‐0.96) | 0.68 (0.48‐0.98) | 0.34 (0.24‐0.48) | 0.26 (0.18‐0.37) |
|   Readmission | 1.03 (0.90‐1.19) | 1.20 (0.87‐1.66) | 1.19 (0.88‐1.60) | 1.30 (0.96‐1.76) |
|   LOS, ratio of the geometric meanse | 0.91 (0.87‐0.95) | 1.16 (1.03‐1.30) | 1.17 (1.05‐1.30) | 1.12 (1.00‐1.24) |
| Average treatment effect on the treatedd |  |  |  |  |
|   Inpatient mortality | 0.69 (0.52‐0.92) | 0.35 (0.13‐0.93) | 0.17 (0.07‐0.43) | 0.08 (0.03‐0.20) |
|   30‐day mortality | 0.84 (0.66‐1.06) | 0.35 (0.15‐0.79) | 0.17 (0.07‐0.37) | 0.09 (0.04‐0.20) |
|   Readmission | 1.02 (0.87‐1.20) | 1.39 (0.85‐2.26) | 1.36 (0.88‐2.12) | 1.23 (0.80‐1.89) |
|   LOS, ratio of the geometric meanse | 0.92 (0.87‐0.97) | 1.18 (1.02‐1.37) | 1.16 (1.01‐1.33) | 1.04 (0.91‐1.19) |

  • NOTE: Abbreviations: CI, confidence interval; LOS, length of stay.

  • Refers to comparison in which the reference group consists of patients who were not treated using the acute myocardial infarction order set.

  • Refers to comparison in which the reference group consists of patients who received 2 or less of the 5 recommended therapies.

  • See text for description of average treatment effect methodology.

  • See text for description of average treatment effect on the treated and matched pair adjustment methodology.

  • See text for details on how we modeled LOS.

To further elucidate possible reasons why physicians did not use the AMI‐OS, the lead author reviewed 105 randomly selected records where the AMI‐OS was not used, 5 records from each of the 21 study hospitals. This review found that in 36% of patients, the AMI‐OS was not used because emergent catheterization or transfer to a facility with percutaneous coronary intervention capability occurred. Presence of other significant medical conditions, including critical illness, was the reason in 17% of these cases, patient or family refusal of treatments in 8%, issues around end‐of‐life care in 3%, and specific medical contraindications in 1%. In the remaining 34%, no reason for not using the AMI‐OS could be identified.

DISCUSSION

We evaluated the use of an evidence‐based electronic AMI‐OS embedded in a comprehensive EMR and found that it was beneficial. Its use was associated with increased adherence to evidence‐based therapies, which in turn were associated with improved outcomes. Using data from a large cohort of hospitalized AMI patients in 21 community hospitals, we were able to use risk adjustment that included physiologic illness severity to adjust for baseline mortality risk. Patients in whom the AMI‐OS was employed tended to be at lower risk; nonetheless, after controlling for confounding variables and adjusting for bias using propensity scores, the AMI‐OS was associated with increased use of evidence‐based therapies and decreased mortality. Most importantly, it appears that the benefits of the OS were not just due to increased receipt of individual recommended therapies, but to increased concurrent receipt of multiple recommended therapies.

Modern EMRs have great potential for significant improvements in the quality, efficiency, and safety of care provided,[36] and our study highlights this potential. However, a number of important limitations to our study must be considered. Although we had access to a very rich dataset, we could not control for all possible confounders, and our risk adjustment cannot match the level of information available to clinicians. In particular, the measurements available to us with respect to cardiac risk are limited. Thus, we have to recognize that the strength of our findings does not approximate that of a randomized trial, and one would expect that the magnitude of the beneficial association would fall under more controlled conditions. Resource limitations also did not permit us to gather more time course data (eg, sequential measurements of patient instability, cardiac damage, or use of recommended therapies), which could provide a better delineation of differences in both processes and outcomes.

Although the prevalence of coronary heart disease and death from acute myocardial infarction (AMI) have declined steadily, about 935,000 heart attacks still occur annually in the United States, with approximately one‐third of these being fatal.[1, 2, 3] Studies have demonstrated decreased 30‐day and longer‐term mortality in AMI patients who receive evidence‐based treatment, including aspirin, β‐blockers, angiotensin‐converting enzyme (ACE) inhibitors or angiotensin receptor blockers (ARBs), anticoagulation therapy, and statins.[4, 5, 6, 7] Despite clinical practice guidelines (CPGs) outlining evidence‐based care and considerable efforts to implement processes that improve patient outcomes, delivery of effective therapy remains suboptimal.[8] For example, the Hospital Quality Alliance Program[9] found that in AMI patients, use of aspirin on admission was only 81% to 92%, β‐blocker on admission 75% to 85%, and ACE inhibitors for left ventricular dysfunction 71% to 74%.

Efforts to increase adherence to CPGs and improve patient outcomes in AMI have resulted in variable degrees of success. They include promotion of CPGs,[4, 5, 6, 7] physician education with feedback, report cards, care paths, registries,[10] Joint Commission standardized measures,[11] and paper checklists or order sets (OS).[12, 13]

In this report, we describe the association between use of an evidence‐based, electronic OS for AMI (AMI‐OS) and better adherence to CPGs. This AMI‐OS was implemented in the inpatient electronic medical records (EMRs) of a large integrated healthcare delivery system, Kaiser Permanente Northern California (KPNC). The purpose of our investigation was to determine (1) whether use of the AMI‐OS was associated with improved AMI processes and patient outcomes, and (2) whether these associations persisted after risk adjustment using a comprehensive severity of illness scoring system.

MATERIALS AND METHODS

This project was approved by the KPNC institutional review board.

Under a mutual exclusivity arrangement, salaried physicians of The Permanente Medical Group, Inc., care for 3.4 million Kaiser Foundation Health Plan, Inc. members at facilities owned by Kaiser Foundation Hospitals, Inc. All KPNC facilities employ the same information systems with a common medical record number and can track care covered by the plan but delivered elsewhere.[14] Our setting consisted of 21 KPNC hospitals described in previous reports,[15, 16, 17, 18] using the same commercially available EMR system that includes computerized physician order entry (CPOE). Deployment of the customized inpatient Epic EMR (www.epicsystems.com), known internally as KP HealthConnect (KPHC), began in 2006 and was completed in 2010.

In this EMR's CPOE, physicians have options to select individual orders (a la carte) or they can utilize an OS, which is a collection of the most appropriate orders associated with specific diagnoses, procedures, or treatments. The evidence‐based AMI‐OS studied in this project was developed by a multidisciplinary team (for detailed components see Supporting Appendices 1-5 in the online version of this article).

Our study focused on the first set of hospital admission orders for patients with AMI. The study sample consisted of patients meeting the following criteria: (1) age ≥18 years at admission; (2) admitted to a KPNC hospital for an overnight stay between September 28, 2008 and December 31, 2010; (3) principal diagnosis of AMI (International Classification of Diseases, 9th Revision [ICD‐9][19] codes 410.00, 01, 10, 11, 20, 21, 30, 31, 40, 41, 50, 51, 60, 61, 70, 71, 80, 90, and 91); and (4) KPHC operational at the hospital for at least 3 months (for assembly descriptions see Supporting Appendices 1-5 in the online version of this article). At the study hospitals, troponin I was measured using the Beckman Access AccuTnI assay (Beckman Coulter, Inc., Brea, CA), whose upper reference limit (99th percentile) is 0.04 ng/mL. We excluded patients initially hospitalized for AMI at a non‐KPNC site and transferred into a study hospital.

The data processing methods we employed have been detailed elsewhere.[14, 15, 17, 20, 21, 22] The dependent outcome variables were total hospital length of stay, inpatient mortality, 30‐day mortality, and all‐cause rehospitalization within 30 days of discharge. Linked state mortality data were unavailable for the entire study period, so we ascertained 30‐day mortality based on the combination of KPNC patient demographic data and publicly available Social Security Administration decedent files. We ascertained rehospitalization by scanning KPNC hospitalization databases, which also track out‐of‐plan use.

The dependent process variables were use of aspirin within 24 hours of admission, β‐blockers, anticoagulation, ACE inhibitors or ARBs, and statins. The primary independent variable of interest was whether or not the admitting physician employed the AMI‐OS when entering admission orders; this variable is thus dichotomous (AMI‐OS vs a la carte).

We controlled for acute illness severity and chronic illness burden using a recent modification[22] of an externally validated risk‐adjustment system applicable to all hospitalized patients.[15, 16, 23, 24, 25] Our methodology included vital signs, neurological status checks, and laboratory test results obtained in the 72 hours preceding hospital admission; comorbidities were captured longitudinally using data from the year preceding hospitalization (for comparison purposes, we also assigned a Charlson Comorbidity Index score[26]).

End‐of‐life care directives are mandatory on admission at KPNC hospitals. Physicians have 4 options: full code, partial code, do not resuscitate, and comfort care only. Because of small numbers in some categories, we collapsed these 4 categories into full code and not full code. Because patients' care directives may change, we elected to capture the care directive in effect when a patient first entered a hospital unit other than the emergency department (ED).

Two authors (M.B., P.C.L.), one of whom is a board‐certified cardiologist, reviewed all admission electrocardiograms and made a consensus determination as to whether or not criteria for ST‐segment elevation myocardial infarction (STEMI) were present (ie, new ST‐segment elevation or left bundle branch block); we also reviewed the records of all patients with missing troponin I data to confirm the AMI diagnosis.

Statistical Methods

We performed unadjusted comparisons between AMI‐OS and non‐AMI‐OS patients using the t test or the χ2 test, as appropriate.
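As a rough illustration of such an unadjusted comparison, the Pearson χ2 statistic for a 2×2 table can be computed directly from its cell counts. The helper below is our own sketch, not the study's code; the counts are the inpatient mortality figures reported in Table 1 (124 deaths among 3,531 AMI‐OS patients vs 153 among 2,348 a la carte patients).

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (1 df) for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Inpatient mortality by order set use (deaths, survivors) from Table 1
stat = chi2_2x2(124, 3531 - 124, 153, 2348 - 153)
print(stat)  # well above the 1-df critical value of 3.84, consistent with P < 0.0001
```

This shortcut formula is algebraically equivalent to the usual observed-versus-expected computation for a 2×2 table without a continuity correction.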

We hypothesized that the AMI‐OS mediates patient outcomes through its effect on adherence to recommended treatment. We evaluated this hypothesis for inpatient mortality by first fitting a multivariable logistic regression model with inpatient mortality as the outcome and either the 5 individual evidence‐based therapies or the total number of evidence‐based therapies used (0-2, 3, 4, or 5) as the independent variable of interest, controlling for age, gender, presence of STEMI, troponin I, comorbidities, illness severity, ED length of stay (LOS), care directive status, and timing of cardiac catheterization referral as covariates, to confirm the protective effect of these therapies on mortality. We then used the same model to estimate the effect of the AMI‐OS on inpatient mortality, substituting the AMI‐OS for the therapies as the independent variable of interest and retaining the same covariates. Last, we included both the therapies and the AMI‐OS in the model to evaluate their combined effects.[27]

We used 2 different methods to estimate the effects of AMI‐OS and number of therapies provided on the outcomes while adjusting for observed baseline differences between the 2 groups of patients: propensity score matching, which estimates the average treatment effect for the treated,[28, 29] and inverse probability of treatment weighting, which estimates the average treatment effect.[30, 31, 32] The propensity score was defined as the probability of receiving the intervention for a patient with specific predictive factors.[33, 34] We computed a propensity score for each patient by using logistic regression, with the dependent variable being receipt of AMI‐OS and the independent variables being the covariates used for the multivariate logistic regression as well as ICD‐9 code for final diagnosis. We calculated the Mahalanobis distance between patients who received AMI‐OS (cases) and patients who did not receive AMI‐OS (controls) using the same set of covariates. We matched each case to a single control within the same facility based on the nearest available Mahalanobis metric matching within calipers defined as a maximum width of 0.2 standard deviations of the logit of the estimated propensity score.[29, 35] We estimated the odds ratios for the binary dependent variables based on a conditional logistic regression model to account for the matched‐pairs design.[28] We used a generalized linear model with the log‐transformed LOS as the outcome to estimate the ratio of the LOS geometric mean of the cases to that of the controls. We calculated the relative risk for patients receiving AMI‐OS via the inverse probability weighting method by first defining a weight for each patient: we assigned a weight of 1/ps_i to patients who received the AMI‐OS and a weight of 1/(1 − ps_i) to patients who did not, where ps_i denotes the propensity score for patient i.
We used a logistic regression model for the binary dependent variables with the same set of covariates described above to estimate the adjusted odds ratios while weighting each observation by its corresponding weight. Last, we used a weighted generalized linear model to estimate the AMI‐OS effect on the log‐transformed LOS.
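The caliper and weighting rules described above can be sketched in a few lines. The propensity scores here are made‐up values for illustration only, not estimates from the study's model.

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1.0 - p))

# Hypothetical propensity scores (probability of receiving the AMI-OS)
ps_treated = [0.85, 0.72, 0.90, 0.65]   # patients who received the order set
ps_control = [0.40, 0.55, 0.30, 0.20]   # patients managed a la carte

# Matching caliper: 0.2 standard deviations of the logit of the propensity
# score, pooled over all patients
logits = [logit(p) for p in ps_treated + ps_control]
mean = sum(logits) / len(logits)
sd = math.sqrt(sum((x - mean) ** 2 for x in logits) / (len(logits) - 1))
caliper = 0.2 * sd

# Inverse probability of treatment weights: 1/ps for treated patients,
# 1/(1 - ps) for controls
w_treated = [1.0 / p for p in ps_treated]
w_control = [1.0 / (1.0 - p) for p in ps_control]
```

Under this weighting, each group is reweighted to resemble the full study population, which is why the weighted estimate targets the average treatment effect rather than the effect among the treated.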

RESULTS

Table 1 summarizes the characteristics of the 5,879 patients. It shows that AMI‐OS patients were more likely to receive evidence‐based therapies for AMI (aspirin, β‐blockers, ACE inhibitors or ARBs, anticoagulation, and statins) and had a 46% lower inpatient mortality rate (3.51% vs 6.52%) and a 33% lower 30‐day mortality rate (5.66% vs 8.48%). AMI‐OS patients were also at lower baseline risk for an adverse outcome than non‐AMI‐OS patients: they had lower peak troponin I values, lower severity of illness (lower Laboratory‐Based Acute Physiology Score, version 2 [LAPS2] scores), lower comorbidity burdens (lower Comorbidity Point Score, version 2 [COPS2] and Charlson scores), and lower global predicted mortality risk, and they were less likely to have required intensive care. AMI‐OS patients were at higher risk than non‐AMI‐OS patients with respect to only 1 variable (they were less likely to be full code at the time of admission); although this difference was statistically significant, it was of minor clinical impact (86% vs 88%).

Description of Study Cohort
| Characteristic | AMI Order Set, N=3,531b | A La Carte Orders, N=2,348b | P Valuea |
|---|---|---|---|
| Age, y, median (mean±SD) | 70 (69.4±13.8) | 70 (69.2±13.8) | 0.5603 |
| Age (% >65 years) | 2,134 (60.4%) | 1,415 (60.3%) | 0.8949 |
| Sex (% male) | 2,202 (62.4%) | 1,451 (61.8%) | 0.6620 |
| STEMI (% with)c | 166 (4.7%) | 369 (15.7%) | <0.0001 |
| Troponin I (% missing) | 111 (3.1%) | 151 (6.4%) | <0.0001 |
| Troponin I, median (mean±SD) | 0.57 (3.0±8.2) | 0.27 (2.5±8.9) | 0.0651 |
| Charlson score, median (mean±SD)d | 2.0 (2.5±1.5) | 2.0 (2.7±1.6) | <0.0001 |
| COPS2, median (mean±SD)e | 14.0 (29.8±31.7) | 17.0 (34.3±34.4) | <0.0001 |
| LAPS2, median (mean±SD)e | 0.0 (35.6±43.5) | 27.0 (40.9±48.1) | <0.0001 |
| Length of stay in ED, h, median (mean±SD) | 5.7 (5.9±3.0) | 5.7 (5.4±3.1) | <0.0001 |
| Patients receiving aspirin within 24 hoursf | 3,470 (98.3%) | 2,202 (93.8%) | <0.0001 |
| Patients receiving anticoagulation therapyf | 2,886 (81.7%) | 1,846 (78.6%) | 0.0032 |
| Patients receiving β‐blockersf | 3,196 (90.5%) | 1,926 (82.0%) | <0.0001 |
| Patients receiving ACE inhibitors or ARBsf | 2,395 (67.8%) | 1,244 (53.0%) | <0.0001 |
| Patients receiving statinsf | 3,337 (94.5%) | 1,975 (84.1%) | <0.0001 |
| Patient received 1 or more therapies | 3,531 (100.0%) | 2,330 (99.2%) | <0.0001 |
| Patient received 2 or more therapies | 3,521 (99.7%) | 2,266 (96.5%) | <0.0001 |
| Patient received 3 or more therapies | 3,440 (97.4%) | 2,085 (88.8%) | <0.0001 |
| Patient received 4 or more therapies | 3,015 (85.4%) | 1,646 (70.1%) | <0.0001 |
| Patient received all 5 therapies | 1,777 (50.3%) | 866 (35.9%) | <0.0001 |
| Predicted mortality risk, %, median (mean±SD)f | 0.86 (3.2±7.4) | 1.19 (4.8±10.8) | <0.0001 |
| Full code at time of hospital entry (%)g | 3,041 (86.1%) | 2,066 (88.0%) | 0.0379 |
| Admitted to ICU (%)i | | | |
|   Direct admit | 826 (23.4%) | 567 (24.2%) | 0.5047 |
|   Unplanned transfer | 222 (6.3%) | 133 (5.7%) | 0.3262 |
|   Ever | 1,283 (36.3%) | 1,169 (49.8%) | <0.0001 |
| Length of stay, h, median (mean±SD) | 68.3 (109.4±140.9) | 68.9 (113.8±154.3) | 0.2615 |
| Inpatient mortality (%) | 124 (3.5%) | 153 (6.5%) | <0.0001 |
| 30‐day mortality (%) | 200 (5.7%) | 199 (8.5%) | <0.0001 |
| All‐cause rehospitalization within 30 days (%) | 576 (16.3%) | 401 (17.1%) | 0.4398 |
| Cardiac catheterization procedure referral timing | | | |
|   1 day preadmission to discharge | 2,018 (57.2%) | 1,348 (57.4%) | 0.1638 |
|   2 days preadmission or earlier | 97 (2.8%) | 87 (3.7%) | |
|   After discharge | 149 (4.2%) | 104 (4.4%) | |
|   No referral | 1,267 (35.9%) | 809 (34.5%) | |

  • NOTE: Abbreviations: ACE, angiotensin‐converting enzyme; AMI, acute myocardial infarction; AMI‐OS, acute myocardial infarction order set; ARBs, angiotensin receptor blockers; COPS2, Comorbidity Point Score, version 2; CPOE, computerized physician order entry; ED, emergency department; ICU, intensive care unit; LAPS2, Laboratory‐based Acute Physiology Score, version 2; SD, standard deviation; STEMI, ST‐segment elevation myocardial infarction.

  • χ2 or t test, as appropriate. See text for further methodological details.

  • AMI‐OS is an evidence‐based electronic checklist that guides physicians to order the most effective therapy by CPOE during the hospital admission process. In contrast, a la carte means that the clinician did not use the AMI‐OS, but rather entered individual orders via CPOE. See text for further details.

  • STEMI as evident by electrocardiogram. See text for details on ascertainment.

  • See text and reference 31 for details on how this score was assigned.

  • The COPS2 is a longitudinal, diagnosis‐based score assigned monthly that integrates all diagnoses incurred by a patient in the preceding 12 months. It is a continuous variable that can range between a minimum of zero and a theoretical maximum of 1,014, although <0.05% of Kaiser Permanente hospitalized patients have a COPS2 exceeding 241, and none have had a COPS2 >306. Increasing values of the COPS2 are associated with increasing mortality. See text and references 20 and 27 for additional details on the COPS2.

  • The LAPS2 integrates results from vital signs, neurological status checks, and 15 laboratory tests in the 72 hours preceding hospitalization into a single continuous variable. Increasing degrees of physiologic derangement are reflected in a higher LAPS2, which can range between a minimum of zero and a theoretical maximum of 414, although <0.05% of Kaiser Permanente hospitalized patients have a LAPS2 exceeding 227, and none have had a LAPS2 >282. Increasing values of LAPS2 are associated with increasing mortality. See text and references 20 and 27 for additional details on the LAPS2.

  • See text for details of specific therapies and how they were ascertained using the electronic medical record.

  • Percent mortality risk based on age, sex, diagnosis, COPS2, LAPS2, and care directive using a predictive model described in text and in reference 22.

  • See text for description of how end‐of‐life care directives are captured in the electronic medical record.

  • Direct admit means that the first hospital unit in which a patient stayed was the ICU; transfer refers to those patients transferred to the ICU from another unit in the hospital.

Table 2 shows the results of a logistic regression model in which the dependent variable was inpatient mortality and the independent variables were either the 5 individual evidence‐based therapies or the total number of evidence‐based therapies. β‐blocker, statin, and ACE inhibitor or ARB therapies all had a protective effect on mortality, with odds ratios of 0.48 (95% confidence interval [CI]: 0.36‐0.64), 0.63 (95% CI: 0.45‐0.89), and 0.40 (95% CI: 0.30‐0.53), respectively. An increased number of therapies also had a beneficial effect on inpatient mortality: patients who received 3 of the evidence‐based therapies had an adjusted odds ratio (AOR) of 0.49 (95% CI: 0.33‐0.73), 4 therapies an AOR of 0.29 (95% CI: 0.20‐0.42), and all 5 therapies an AOR of 0.17 (95% CI: 0.11‐0.25).

Logistic Regression Model for Inpatient Mortality to Estimate the Effect of Evidence‐Based Therapies
| Predictor | Multiple Therapies Effect, AORa (95% CIb) | Individual Therapies Effect, AORa (95% CIb) |
|---|---|---|
| Outcome | Death | Death |
| Number of outcomes | 277 | 277 |
| Age in years | | |
|   18-39 | Ref | Ref |
|   40-64 | 1.02 (0.14-7.73) | 1.01 (0.13-7.66) |
|   65-84 | 4.05 (0.55-29.72) | 3.89 (0.53-28.66) |
|   85+ | 4.99 (0.67-37.13) | 4.80 (0.64-35.84) |
| Sex | | |
|   Female | Ref | Ref |
|   Male | 1.05 (0.81-1.37) | 1.07 (0.82-1.39) |
| STEMIc | | |
|   Absent | Ref | Ref |
|   Present | 4.00 (2.75-5.81) | 3.86 (2.64-5.63) |
| Troponin I | | |
|   ≤0.1 ng/mL | Ref | Ref |
|   >0.1 ng/mL | 1.01 (0.72-1.42) | 1.02 (0.73-1.43) |
| COPS2d (AOR per 10 points) | 1.05 (1.01-1.08) | 1.04 (1.01-1.08) |
| LAPS2d (AOR per 10 points) | 1.09 (1.06-1.11) | 1.09 (1.06-1.11) |
| ED LOSe (hours) | | |
|   <6 | Ref | Ref |
|   6-7 | 0.74 (0.53-1.03) | 0.76 (0.54-1.06) |
|   ≥12 | 0.82 (0.39-1.74) | 0.83 (0.39-1.78) |
| Code statusf | | |
|   Full code | Ref | Ref |
|   Not full code | 1.08 (0.78-1.49) | 1.09 (0.79-1.51) |
| Cardiac procedure referral | | |
|   None during stay | Ref | Ref |
|   1 day preadmission until discharge | 0.40 (0.29-0.54) | 0.39 (0.28-0.53) |
| Number of therapies received | | |
|   2 or less | Ref | |
|   3 | 0.49 (0.33-0.73) | |
|   4 | 0.29 (0.20-0.42) | |
|   5 | 0.17 (0.11-0.25) | |
| Aspirin therapy | | 0.80 (0.49-1.32) |
| Anticoagulation therapy | | 0.86 (0.64-1.16) |
| β‐blocker therapy | | 0.48 (0.36-0.64) |
| Statin therapy | | 0.63 (0.45-0.89) |
| ACE inhibitors or ARBs | | 0.40 (0.30-0.53) |
| C statistic | 0.814 | 0.822 |
| Hosmer‐Lemeshow P value | 0.509 | 0.934 |

  • NOTE: Abbreviations: ACE, angiotensin‐converting enzyme; ARB, angiotensin receptor blocker.

  • Adjusted odds ratio.

  • 95% confidence interval.

  • ST‐segment elevation myocardial infarction present.

  • See text and preceding table for details on Comorbidity Point Score, version 2 and Laboratory Acute Physiology Score, version 2.

  • Emergency department length of stay.

  • See text for details on how care directives were categorized.

Table 3 shows that the use of the AMI‐OS is protective, with an AOR of 0.59 and a 95% CI of 0.45‐0.76. Table 3 also shows that the most potent predictors were comorbidity burden (AOR: 1.07, 95% CI: 1.03‐1.10 per 10 COPS2 points), severity of illness (AOR: 1.09, 95% CI: 1.07‐1.12 per 10 LAPS2 points), STEMI (AOR: 3.86, 95% CI: 2.68‐5.58), and timing of cardiac catheterization referral occurring immediately prior to or during the admission (AOR: 0.37, 95% CI: 0.27‐0.51). The statistical significance of the AMI‐OS effect disappears when both AMI‐OS and the individual therapies are included in the same model (see Supporting Information, Appendices 1-5, in the online version of this article).

Logistic Regression Model for Inpatient Mortality to Estimate the Effect of Acute Myocardial Infarction Order Set
| Predictor | AORa (95% CIb) |
|---|---|
| Outcome | Death |
| Number of outcomes | 277 |
| Age in years | |
|   18-39 | Ref |
|   40-64 | 1.16 (0.15-8.78) |
|   65-84 | 4.67 (0.63-34.46) |
|   85+ | 5.45 (0.73-40.86) |
| Sex | |
|   Female | Ref |
|   Male | 1.05 (0.81-1.36) |
| STEMIc | |
|   Absent | Ref |
|   Present | 3.86 (2.68-5.58) |
| Troponin I | |
|   ≤0.1 ng/mL | Ref |
|   >0.1 ng/mL | 1.16 (0.83-1.62) |
| COPS2d (AOR per 10 points) | 1.07 (1.03-1.10) |
| LAPS2d (AOR per 10 points) | 1.09 (1.07-1.12) |
| ED LOSe (hours) | |
|   <6 | Ref |
|   6-7 | 0.72 (0.52-1.00) |
|   ≥12 | 0.70 (0.33-1.48) |
| Code statusf | |
|   Full code | Ref |
|   Not full code | 1.22 (0.89-1.68) |
| Cardiac procedure referral | |
|   None during stay | Ref |
|   1 day preadmission until discharge | 0.37 (0.27-0.51) |
| Order set employedg | |
|   No | Ref |
|   Yes | 0.59 (0.45-0.76) |
| C statistic | 0.792 |
| Hosmer‐Lemeshow P value | 0.273 |

  • Adjusted odds ratio.

  • 95% confidence interval.

  • ST‐segment elevation myocardial infarction present.

  • See text and preceding table for details on Comorbidity Point Score, version 2 and Laboratory Acute Physiology Score, version 2.

  • Emergency department length of stay.

  • See text for details on how care directives were categorized.

  • See text for details on the order set.
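One way to read the per-10-point AORs reported for COPS2 and LAPS2 is that, because the model is linear on the log-odds scale, the AOR compounds multiplicatively across 10-point increments. A minimal worked example (the 50-point LAPS2 contrast is an arbitrary illustration of ours, not a quantity from the study):

```python
# AOR per 10 LAPS2 points from Table 3; a 50-point LAPS2 difference is an
# arbitrary illustrative contrast, not a figure reported in the study.
aor_per_10_laps2 = 1.09
delta_points = 50

# Log-odds add, so odds ratios multiply: one factor per 10-point increment
odds_multiplier = aor_per_10_laps2 ** (delta_points / 10)
print(round(odds_multiplier, 2))  # prints 1.54
```

That is, under this model a patient 50 LAPS2 points sicker than an otherwise identical patient has roughly 1.5-fold higher adjusted odds of inpatient death.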

Table 4 shows separately the average treatment effect (ATE) and average treatment effect for the treated (ATT) of AMI‐OS and of increasing number of therapies on the other outcomes (30‐day mortality, LOS, and readmission). Both the ATE and ATT show that the use of the AMI‐OS was significantly protective with respect to mortality and total hospital LOS but not significant with respect to readmission. The protective effect on mortality grew with the number of therapies received. For example, patients who received 5 therapies had an average treatment effect on inpatient mortality of 0.23 (95% CI: 0.15‐0.35), compared to 0.64 (95% CI: 0.43‐0.96) for 3 therapies, almost a 3‐fold difference. The effects of increasing number of therapies were not significant for LOS or readmission. A sensitivity analysis in which the 535 STEMI patients were removed showed essentially the same results, so it is not reported here.

Adjusted Odds Ratio (95% CI) or Mean Length‐of‐Stay Ratio (95% CI) in Study Patients
| Outcome | Order Seta | 3 Therapiesb | 4 Therapiesb | 5 Therapiesb |
|---|---|---|---|---|
| Average treatment effectc | | | | |
|   Inpatient mortality | 0.67 (0.52-0.86) | 0.64 (0.43-0.96) | 0.37 (0.25-0.54) | 0.23 (0.15-0.35) |
|   30‐day mortality | 0.77 (0.62-0.96) | 0.68 (0.48-0.98) | 0.34 (0.24-0.48) | 0.26 (0.18-0.37) |
|   Readmission | 1.03 (0.90-1.19) | 1.20 (0.87-1.66) | 1.19 (0.88-1.60) | 1.30 (0.96-1.76) |
|   LOS, ratio of the geometric meanse | 0.91 (0.87-0.95) | 1.16 (1.03-1.30) | 1.17 (1.05-1.30) | 1.12 (1.00-1.24) |
| Average treatment effect on the treatedd | | | | |
|   Inpatient mortality | 0.69 (0.52-0.92) | 0.35 (0.13-0.93) | 0.17 (0.07-0.43) | 0.08 (0.03-0.20) |
|   30‐day mortality | 0.84 (0.66-1.06) | 0.35 (0.15-0.79) | 0.17 (0.07-0.37) | 0.09 (0.04-0.20) |
|   Readmission | 1.02 (0.87-1.20) | 1.39 (0.85-2.26) | 1.36 (0.88-2.12) | 1.23 (0.80-1.89) |
|   LOS, ratio of the geometric meanse | 0.92 (0.87-0.97) | 1.18 (1.02-1.37) | 1.16 (1.01-1.33) | 1.04 (0.91-1.19) |

  • NOTE: Abbreviations: CI, confidence interval; LOS, length of stay.

  • Refers to comparison in which the reference group consists of patients who were not treated using the acute myocardial infarction order set.

  • Refers to comparison in which the reference group consists of patients who received 2 or less of the 5 recommended therapies.

  • See text for description of average treatment effect methodology.

  • See text for description of average treatment effect on the treated and matched‐pair adjustment methodology.

  • See text for details on how we modeled LOS.

To further elucidate possible reasons why physicians did not use the AMI‐OS, the lead author reviewed 105 randomly selected records where the AMI‐OS was not used, 5 records from each of the 21 study hospitals. This review found that in 36% of patients, the AMI‐OS was not used because emergent catheterization or transfer to a facility with percutaneous coronary intervention capability occurred. Presence of other significant medical conditions, including critical illness, was the reason in 17% of these cases, patient or family refusal of treatments in 8%, issues around end‐of‐life care in 3%, and specific medical contraindications in 1%. In the remaining 34%, no reason for not using the AMI‐OS could be identified.

DISCUSSION

We evaluated the use of an evidence‐based electronic AMI‐OS embedded in a comprehensive EMR and found that it was beneficial. Its use was associated with increased adherence to evidence‐based therapies, which in turn were associated with improved outcomes. Using data from a large cohort of hospitalized AMI patients in 21 community hospitals, we were able to use risk adjustment that included physiologic illness severity to adjust for baseline mortality risk. Patients in whom the AMI‐OS was employed tended to be at lower risk; nonetheless, after controlling for confounding variables and adjusting for bias using propensity scores, the AMI‐OS was associated with increased use of evidence‐based therapies and decreased mortality. Most importantly, it appears that the benefits of the OS were not just due to increased receipt of individual recommended therapies, but to increased concurrent receipt of multiple recommended therapies.

Modern EMRs have great potential for significant improvements in the quality, efficiency, and safety of care provided,[36] and our study highlights this potential. However, a number of important limitations to our study must be considered. Although we had access to a very rich dataset, we could not control for all possible confounders, and our risk adjustment cannot match the level of information available to clinicians. In particular, the measurements available to us with respect to cardiac risk are limited. Thus, we have to recognize that the strength of our findings does not approximate that of a randomized trial, and one would expect that the magnitude of the beneficial association would fall under more controlled conditions. Resource limitations also did not permit us to gather more time course data (eg, sequential measurements of patient instability, cardiac damage, or use of recommended therapies), which could provide a better delineation of differences in both processes and outcomes.

Limitations also exist to the generalizability of the use of order sets in other settings that go beyond the availability of a comprehensive EMR. Our study population was cared for in a setting with an unusually high level of integration.[1] For example, KPNC has an elaborate administrative infrastructure for training in the use of the EMR as well as ensuring that order sets are not just evidence‐based, but that they are perceived by clinicians to be of significant value. This infrastructure, established to ensure physician buy‐in, may not be easy to replicate in smaller or less‐integrated settings. Thus, it is conceivable that factors other than the degree of support during the EMR deployments can affect rates of order set use.

Although our use of counterfactual methods included illness severity (LAPS2) and longitudinal comorbidity burden (COPS2), which are not yet available outside highly integrated delivery services employing comprehensive EMRs, it is possible they are insufficient. We cannot exclude the possibility that other biases or patient characteristics were present that led clinicians to preferentially employ the electronic order set in some patients but not in others. One could also argue that future studies should consider using overall adherence to recommended AMI treatment guidelines as a risk adjustment tool that would permit one to analyze what other factors may be playing a role in residual differences in patient outcomes. Last, one could object to our inclusion of STEMI patients; however, this was not a study on optimum treatment strategies for STEMI patients. Rather, it was a study on the impact on AMI outcomes of a specific component of computerized order entry outside the research setting.

Despite these limitations, we believe that our findings provide strong support for the continued use of electronic evidence‐based order sets in the inpatient medical setting. Once the initial implementation of a comprehensive EMR has occurred, deployment of these electronic order sets is a relatively inexpensive but effective method to foster compliance with evidence‐based care.

Future research in healthcare information technology can take a number of directions. One important area, of course, revolves around ways to promote enhanced physician adoption of EMRs. Our audit of records where the AMI‐OS was not used found that specific reasons for not using the order set (eg, treatment refusals, emergent intervention) were present in two‐thirds of the cases. This suggests that future analyses of adherence involving EMRs and CPOE implementation should take a more nuanced look at how order entry is actually enabled. It may be that understanding how order sets affect care enhances clinician acceptance and thus could serve as an incentive to EMR adoption. However, once an EMR is adopted, a need exists to continue evaluations such as this because, ultimately, the gold standard should be improved patient care processes and better outcomes for patients.

Acknowledgement

The authors give special thanks to Dr. Brian Hoberman for sponsoring this work, Dr. Alan S. Go for providing assistance with obtaining copies of electrocardiograms for review, Drs. Tracy Lieu and Vincent Liu for reviewing the manuscript, and Ms. Rachel Lesser for formatting the manuscript.

Disclosures: This work was supported by The Permanente Medical Group, Inc. and Kaiser Foundation Hospitals, Inc. The algorithms used to extract data and perform risk adjustment were developed with funding from the Sidney Garfield Memorial Fund (Early Detection of Impending Physiologic Deterioration in Hospitalized Patients, 1159518), the Agency for Healthcare Quality and Research (Rapid Clinical Snapshots From the EMR Among Pneumonia Patients, 1R01HS018480‐01), and the Gordon and Betty Moore Foundation (Early Detection of Impending Physiologic Deterioration: Electronic Early Warning System).

References
  1. Yeh RW, Sidney S, Chandra M, Sorel M, Selby JV, Go AS. Population trends in the incidence and outcomes of acute myocardial infarction. N Engl J Med. 2010;362(23):2155–2165.
  2. Rosamond WD, Chambless LE, Heiss G, et al. Twenty‐two‐year trends in incidence of myocardial infarction, coronary heart disease mortality, and case fatality in 4 US communities, 1987–2008. Circulation. 2012;125(15):1848–1857.
  3. Roger VL, Go AS, Lloyd‐Jones DM, et al. Heart disease and stroke statistics—2012 update: a report from the American Heart Association. Circulation. 2012;125(1):e2–e220.
  4. Anderson JL, Adams CD, Antman EM, et al. ACC/AHA 2007 guidelines for the management of patients with unstable angina/non‐ST‐elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Revise the 2002 Guidelines for the Management of Patients With Unstable Angina/Non‐ST‐Elevation Myocardial Infarction) developed in collaboration with the American College of Emergency Physicians, the Society for Cardiovascular Angiography and Interventions, and the Society of Thoracic Surgeons endorsed by the American Association of Cardiovascular and Pulmonary Rehabilitation and the Society for Academic Emergency Medicine. J Am Coll Cardiol. 2007;50(7):e1–e157.
  5. Antman EM, Hand M, Armstrong PW, et al. 2007 focused update of the ACC/AHA 2004 guidelines for the management of patients with ST‐elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol. 2008;51(2):210–247.
  6. Jernberg T, Johanson P, Held C, Svennblad B, Lindback J, Wallentin L. Association between adoption of evidence‐based treatment and survival for patients with ST‐elevation myocardial infarction. JAMA. 2011;305(16):1677–1684.
  7. Puymirat E, Simon T, Steg PG, et al. Association of changes in clinical characteristics and management with improvement in survival among patients with ST‐elevation myocardial infarction. JAMA. 2012;308(10):998–1006.
  8. Motivala AA, Cannon CP, Srinivas VS, et al. Changes in myocardial infarction guideline adherence as a function of patient risk: an end to paradoxical care? J Am Coll Cardiol. 2011;58(17):1760–1765.
  9. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals—the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265–274.
  10. Desai N, Chen AN, et al. Challenges in the treatment of NSTEMI patients at high risk for both ischemic and bleeding events: insights from the ACTION Registry‐GWTG. J Am Coll Cardiol. 2011;57:E913.
  11. Williams SC, Schmaltz SP, Morton DJ, Koss RG, Loeb JM. Quality of care in U.S. hospitals as reflected by standardized measures, 2002–2004. N Engl J Med. 2005;353(3):255–264.
  12. Eagle KA, Montoye K, Riba AL. Guideline‐based standardized care is associated with substantially lower mortality in Medicare patients with acute myocardial infarction. J Am Coll Cardiol. 2005;46(7):1242–1248.
  13. Ballard DJ, Ogola G, Fleming NS, et al. Impact of a standardized heart failure order set on mortality, readmission, and quality and costs of care. Int J Qual Health Care. 2010;22(6):437–444.
  14. Selby JV. Linking automated databases for research in managed care settings. Ann Intern Med. 1997;127(8 pt 2):719–724.
  15. Escobar G, Greene J, Scheirer P, Gardner M, Draper D, Kipnis P. Risk adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232–239.
  16. Liu V, Kipnis P, Gould MK, Escobar GJ. Length of stay predictions: improvements through the use of automated laboratory and comorbidity variables. Med Care. 2010;48(8):739–744.
  17. Escobar GJ, Greene JD, Gardner MN, Marelich GP, Quick B, Kipnis P. Intra‐hospital transfers to a higher level of care: contribution to total hospital and intensive care unit (ICU) mortality and length of stay (LOS). J Hosp Med. 2011;6(2):74–80.
  18. Liu V, Kipnis P, Rizk NW, Escobar GJ. Adverse outcomes associated with delayed intensive care unit transfers in an integrated healthcare system. J Hosp Med. 2012;7(3):224–230.
  19. International Classification of Diseases, 9th Revision‐Clinical Modification. 4th ed. 3 Vols. Los Angeles, CA: Practice Management Information Corporation; 2006.
  20. Go AS, Hylek EM, Chang Y, et al. Anticoagulation therapy for stroke prevention in atrial fibrillation: how well do randomized trials translate into clinical practice? JAMA. 2003;290(20):2685–2692.
  21. Escobar GJ, LaGuardia J, Turk BJ, Ragins A, Kipnis P, Draper D. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med. 2012;7(5):388–395.
  22. Escobar GJ, Gardner M, Greene JG, David D, Kipnis P. Risk‐adjusting hospital mortality using a comprehensive electronic record in an integrated healthcare delivery system. Med Care. 2013;51(5):446–453.
  23. Kipnis P, Escobar GJ, Draper D. Effect of choice of estimation method on inter‐hospital mortality rate comparisons. Med Care. 2010;48(5):456–485.
  24. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63(7):798–803.
  25. Wong J, Taljaard M, Forster AJ, Escobar GJ, van Walraven C. Derivation and validation of a model to predict daily risk of death in hospital. Med Care. 2011;49(8):734–743.
  26. Deyo RA, Cherkin DC, Ciol MA. Adapting a clinical comorbidity index for use with ICD‐9‐CM administrative databases. J Clin Epidemiol. 1992;45(6):613–619.
  27. MacKinnon DP. Introduction to Statistical Mediation Analysis. New York, NY: Lawrence Erlbaum Associates; 2008.
  28. Imbens GW. Nonparametric estimation of average treatment effects under exogeneity: a review. Rev Econ Stat. 2004;86:25.
  29. Rosenbaum PR. Design of Observational Studies. New York, NY: Springer Science+Business Media; 2010.
  30. Austin PC. Balance diagnostics for comparing the distribution of baseline covariates between treatment groups in propensity‐score matched samples. Stat Med. 2009;28:24.
  31. Robins JM, Rotnitzky A, Zhao LP. Estimation of regression coefficients when some regressors are not always observed. J Am Stat Assoc. 1994;89:846–866.
  32. Lunceford JK, Davidian M. Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study. Stat Med. 2004;23(19):2937–2960.
  33. Rosenbaum PR. Discussing hidden bias in observational studies. Ann Intern Med. 1991;115(11):901–905.
  34. D'Agostino RB. Propensity score methods for bias reduction in the comparison of a treatment to a non‐randomized control group. Stat Med. 1998;17(19):2265–2281.
  35. Feng WW, Jun Y, Xu R. A method/macro based on propensity score and Mahalanobis distance to reduce bias in treatment comparison in observational study, 2005. www.lexjansen.com/pharmasug/2006/publichealthresearch/pr05.pdf. Accessed on September 14, 2013.
  36. Ettinger WH. Using health information technology to improve health care. Arch Intern Med. 2012;172(22):1728–1730.
Issue
Journal of Hospital Medicine - 9(3)
Page Number
155-161
Display Headline
An electronic order set for acute myocardial infarction is associated with improved patient outcomes through better adherence to clinical practice guidelines
Article Source

© 2014 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Gabriel J. Escobar, MD, Division of Research, Kaiser Permanente Northern California, 2000 Broadway Avenue, 032R01, Oakland, CA 94612; Telephone: 510‐891‐5929; E‐mail: gabriel.escobar@kp.org
Detection of Physiologic Deterioration

Article Type
Changed
Mon, 05/22/2017 - 18:44
Display Headline
Early detection of impending physiologic deterioration among patients who are not in intensive care: Development of predictive models using data from an automated electronic medical record

Patients in general medical–surgical wards who experience unplanned transfer to the intensive care unit (ICU) have increased mortality and morbidity.[1, 2, 3] Using an externally validated methodology permitting assessment of illness severity and mortality risk among all hospitalized patients,[4, 5] we recently documented observed‐to‐expected mortality ratios >3.0 and excess length of stay of 10 days among patients who experienced such transfers.[6]

It is possible to predict adverse outcomes among monitored patients (eg, patients in the ICU or undergoing continuous electronic monitoring).[7, 8] However, prediction of unplanned transfers among medical–surgical ward patients presents challenges. Data collection (vital signs and laboratory tests) is relatively infrequent. The event rate (3% of hospital admissions) is low, and the rate in narrow time periods (eg, 12 hours) is extremely low: a hospital with 4000 admissions per year might experience 1 unplanned transfer to the ICU every 3 days. Not surprisingly, the performance of models suitable for predicting ward patients' need for intensive care within narrow time frames has been disappointing.[9] The Modified Early Warning Score (MEWS) has a c‐statistic, or area under the receiver operating characteristic curve, of 0.67,[10, 11, 12] and our own model incorporating 14 laboratory tests, but no vital signs, has excellent performance with respect to predicting inpatient mortality but poor performance with respect to unplanned transfer.[6]

In this report, we describe the development and validation of a complex predictive model suitable for use with ward patients. Our objective for this work was to develop a predictive model based on clinical and physiologic data available in real time from a comprehensive electronic medical record (EMR), not a clinically intuitive, manually assigned tool. The outcome of interest was unplanned transfer from the ward to the ICU, or death on the ward in a patient who was full code. This model has been developed as part of a regional effort to decrease preventable mortality in the Northern California Kaiser Permanente Medical Care Program (KPMCP), an integrated healthcare delivery system with 22 hospitals.

MATERIALS AND METHODS

For additional details, see the Supporting Information, Appendices 1–12, in the online version of this article.

This project was approved by the KPMCP Institutional Board for the Protection of Human Subjects.

The Northern California KPMCP serves a total population of approximately 3.3 million members. All Northern California KPMCP hospitals and clinics employ the same information systems with a common medical record number and can track care covered by the plan but delivered elsewhere. Databases maintained by the KPMCP capture admission and discharge times, admission and discharge diagnoses and procedures (assigned by professional coders), bed histories permitting quantification of intra‐hospital transfers, inter‐hospital transfers, as well as the results of all inpatient and outpatient laboratory tests. In July 2006, the KPMCP began deployment of the EMR developed by Epic Systems Corporation (www.epic.com), which has been adapted for the KPMCP and is known as KP HealthConnect (KPHC) in its hospitals. The last of these 22 hospitals went online in March 2010.

Our setting consisted of 14 hospitals in which the KPHC inpatient EMR had been running for at least 3 months (the KPMCP Antioch, Fremont, Hayward, Manteca, Modesto, Roseville, Sacramento, Santa Clara, San Francisco, Santa Rosa, South Sacramento, South San Francisco, Santa Teresa, and Walnut Creek hospitals). We have described the general characteristics of KPMCP hospitals elsewhere.[4, 6] Our initial study population consisted of all patients admitted to these hospitals who met the following criteria: hospitalization began from November 1, 2006 through December 31, 2009; initial hospitalization occurred at a Northern California KPMCP hospital (ie, for inter‐hospital transfers, the first hospital stay occurred within the KPMCP); age ≥18 years; hospitalization was not for childbirth; and KPHC had been operational at the hospital for at least 3 months.

Analytic Approach

The primary outcome for this study was transfer to the ICU after admission to the hospital among patients residing either in a general medical–surgical ward (ward) or transitional care unit (TCU), or death in the ward or TCU in a patient who was full code at the time of death (ie, had the patient survived, s/he would have been transferred to the ICU). The unit of analysis for this study was a 12‐hour patient shift, which could begin with a 7 AM T0 (henceforth, day shift) or a 7 PM T0 (night shift); in other words, we aimed to predict the occurrence of an event within 12 hours of T0 using only data available prior to T0. A shift in which a patient experienced the primary study outcome is an event shift, while one in which a patient did not experience the primary outcome is a comparison shift. Using this approach, an individual patient record could consist of both event and comparison shifts, since some patients might have multiple unplanned transfers and some patients might have none. Our basic analytic approach consisted of creating a cohort of event and comparison shifts (10 comparison shifts were randomly selected for each event shift), splitting the cohort into a derivation dataset (50%) and validation dataset (50%), developing a model using the derivation dataset, then applying the coefficients of the derivation dataset to the validation dataset. Because some event shifts were excluded due to the minimum 4‐hour length‐of‐stay requirement, we also applied model coefficients to these excluded shifts and a set of randomly selected comparison shifts.

Since the purpose of these analyses was to develop models with maximal signal extraction from sparsely collected predictors, we did not block a time period after the T0 to allow for a reaction time to the alarm. Thus, since some events could occur immediately after the T0 (as can be seen in the Supporting Information, Appendices, in the online version of this article), our models would need to be run at intervals that are more frequent than 2 times a day.
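As a minimal sketch, the shift‐level sampling and 50/50 split described above might look as follows in Python. The shift representation (dictionaries with an `is_event` flag) and the fixed seed are illustrative assumptions; the authors' actual analyses were performed in SAS, Stata, and R.

```python
import random

def build_analysis_cohort(shifts, controls_per_event=10, seed=42):
    """Sample comparison shifts 10:1 against event shifts, then split the
    combined cohort 50/50 into derivation and validation sets."""
    rng = random.Random(seed)
    events = [s for s in shifts if s["is_event"]]
    # 10 randomly selected comparison shifts per event shift
    comparisons = rng.sample([s for s in shifts if not s["is_event"]],
                             controls_per_event * len(events))
    cohort = events + comparisons
    rng.shuffle(cohort)
    half = len(cohort) // 2
    return cohort[:half], cohort[half:]  # derivation, validation
```

Because an individual patient can contribute both event and comparison shifts, sampling is done at the shift level rather than the patient level, exactly as in the text.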

Independent Variables

In addition to patients' age and sex, we tested the following candidate independent variables. Some of these variables are part of the KPMCP risk adjustment model[4, 5] and were available electronically for all patients in the cohort. We grouped admission diagnoses into 44 broad diagnostic categories (primary conditions), and admission types into 4 groups (emergency medical, emergency surgical, elective medical, and elective surgical). We quantified patients' degree of physiologic derangement in the 72 hours preceding hospitalization with a Laboratory‐based Acute Physiology Score (LAPS) using 14 laboratory test results prior to hospitalization; we also tested individual laboratory test results obtained after admission to the hospital. We quantified patients' comorbid illness burden using a COmorbidity Point Score (COPS) based on patients' preexisting diagnoses over the 12‐month period preceding hospitalization.[4] We extracted temperature, heart rate, respiratory rate, systolic blood pressure, diastolic blood pressure, oxygen saturation, and neurological status from the EMR. We also tested the following variables based on specific information extracted from the EMR: shock index (heart rate divided by systolic blood pressure)[13]; care directive status (patients were placed into 4 groups: full code, partial code, do not resuscitate [DNR], and no care directive in place); and a proxy for measured lactate (PML; anion gap/serum bicarbonate × 100).[14, 15, 16] For comparison purposes, we also created a retrospective electronically assigned MEWS, which we refer to as the MEWS(re), and we assigned this score to patient records electronically using data from KP HealthConnect.
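The two derived predictors defined above are simple transformations of raw EMR values; a sketch, with illustrative argument names:

```python
def shock_index(heart_rate, systolic_bp):
    """Shock index: heart rate divided by systolic blood pressure."""
    return heart_rate / systolic_bp

def proxy_measured_lactate(anion_gap, bicarbonate):
    """PML: (anion gap / serum bicarbonate) x 100."""
    return (anion_gap / bicarbonate) * 100.0
```

For example, a heart rate of 110 with a systolic pressure of 100 gives a shock index of 1.1, and an anion gap of 14 with a serum bicarbonate of 20 gives a PML of 70.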

Statistical Methods

Analyses were performed in SAS 9.1, Stata 10, and R 2.12. Final validation was performed using SAS (SAS Institute Inc., Cary, North Carolina). Since we did not limit ourselves to traditional severity‐scoring approaches (eg, selecting the worst heart rate in a given time interval), but also included trend terms (eg, change in heart rate over the 24 hours preceding T0), the number of potential variables to test was very large. Detailed description of the statistical strategies employed for variable selection is provided in the Supporting Information, Appendices, in the online version of this article. Once variables were selected, our basic approach was to test a series of diagnosis‐specific logistic regression submodels using a variety of predictors that included vital signs, vital signs trends (eg, most recent heart rate minus earliest heart rate over the preceding 24 hours), and other above‐mentioned variables.

We assessed the ability of a submodel to correctly distinguish patients who died from survivors using the c‐statistic, as well as other metrics recommended by Cook.[17] At the end of the modeling process, we pooled the results across all submodels. For vital signs, where the rate of missing data was <3%, we tested submodels in which we dropped shifts with missing data, as well as submodels in which we imputed missing vital signs to a normal value. For laboratory data, where the rate of missing data for a given shift was much greater, we employed a probabilistic imputation method that included consideration of when a laboratory test result became available.
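The c‐statistic used here is the probability that a randomly chosen event receives a higher predicted risk than a randomly chosen non‐event. A dependency‐free sketch (ties counted as one half; the O(n²) loop is for clarity, not efficiency):

```python
def c_statistic(scores, labels):
    """Concordance between predicted risks and binary outcomes
    (1 = event, 0 = non-event); equivalent to the area under the
    receiver operating characteristic curve."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5  # ties count as half-concordant
    return concordant / (len(pos) * len(neg))
```

A model that ranks every event above every non‐event scores 1.0, chance performance is 0.5, and the MEWS figure quoted earlier corresponds to 0.67.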

RESULTS

During the study period, a total of 102,488 patients experienced 145,335 hospitalizations at the study hospitals. We removed 66 patients with 138 hospitalizations for data quality reasons, leaving us with our initial study sample of 102,422 patients, whose characteristics are summarized in Table 1. Table 1, in which the unit of analysis is an individual patient, shows that patients who experienced the primary outcome were similar to those patients described in our previous report, in terms of their characteristics on admission as well as in experiencing excess morbidity and mortality.[6]

Characteristics of Final Study Cohort

| | Never Admitted to ICU | Direct Admit to ICU From ED | Unplanned Transfer to ICU* | Other ICU Admission† |
| N | 89,269 | 5,963 | 2,880 | 4,310 |
| Age (mean ± SD) | 61.26 ± 18.62 | 62.25 ± 18.13 | 66.12 ± 16.20 | 64.45 ± 15.91 |
| Male (n, %) | 37,228 (41.70%) | 3,091 (51.84%) | 1,416 (49.17%) | 2,378 (55.17%) |
| LAPS‡ (mean ± SD) | 13.02 ± 15.79 | 32.72 ± 24.85 | 24.83 ± 21.53 | 11.79 ± 18.16 |
| COPS§ (mean ± SD) | 67.25 ± 51.42 | 73.88 ± 57.42 | 86.33 ± 59.33 | 78.44 ± 52.49 |
| % Predicted mortality risk (mean ± SD) | 1.93% ± 3.98% | 7.69% ± 12.59% | 5.23% ± 7.70% | 3.66% ± 6.81% |
| Survived first hospitalization to discharge∥ | 88,479 (99.12%) | 5,336 (89.49%) | 2,316 (80.42%) | 4,063 (94.27%) |
| Care order on admission | | | | |
|   Full code | 78,877 (88.36%) | 5,198 (87.17%) | 2,598 (90.21%) | 4,097 (95.06%) |
|   Partial code | 664 (0.74%) | 156 (2.62%) | 50 (1.74%) | 27 (0.63%) |
|   Comfort care | 21 (0.02%) | 2 (0.03%) | 0 (0%) | 0 (0%) |
|   DNR | 8,227 (9.22%) | 539 (9.04%) | 219 (7.60%) | 161 (3.74%) |
|   Comfort care and DNR | 229 (0.26%) | 9 (0.15%) | 2 (0.07%) | 2 (0.05%) |
|   No order | 1,251 (1.40%) | 59 (0.99%) | 11 (0.38%) | 23 (0.53%) |
| Admission diagnosis (n, %) | | | | |
|   Pneumonia | 2,385 (2.67%) | 258 (4.33%) | 242 (8.40%) | 68 (1.58%) |
|   Sepsis | 5,822 (6.52%) | 503 (8.44%) | 279 (9.69%) | 169 (3.92%) |
|   GI bleeding | 9,938 (11.13%) | 616 (10.33%) | 333 (11.56%) | 290 (6.73%) |
|   Cancer | 2,845 (3.19%) | 14 (0.23%) | 95 (3.30%) | 492 (11.42%) |
| Total hospital length of stay (days ± SD) | 3.08 ± 3.29 | 5.37 ± 7.50 | 12.16 ± 13.12 | 8.06 ± 9.53 |

NOTE: All overnight admissions to the study hospitals, excluding 66 patients who were removed due to incomplete data. Column categories are mutually exclusive and based on a patient's first hospitalization during the study time period. Abbreviations: COPS, COmorbidity Point Score; DNR, do not resuscitate; ED, emergency department; GI, gastrointestinal; ICU, intensive care unit; LAPS, Laboratory Acute Physiology Score; SD, standard deviation.

*This group consists of all patients who meet our case definition and includes: 1) patients who had an unplanned transfer to the ICU from the transitional care unit (TCU) or ward; and 2) patients who died on the ward without a DNR order in place at the time of death (ie, who would have been transferred to the ICU had they survived).

†This group includes patients admitted directly to the ICU from the operating room, post‐anesthesia recovery, or an unknown unit, as well as patients with a planned transfer to the ICU.

‡LAPS: point score based on 14 laboratory test results obtained in the 72 hr preceding hospitalization. With respect to a patient's physiologic derangement, the unadjusted relationship of LAPS and inpatient mortality is as follows: a LAPS <7 is associated with a mortality risk of <1%; 7–30 with a mortality risk of 1%–5%; 30–60 with a mortality risk of 5%–9%; and >60 with a mortality risk of 10% or more. See text and Escobar et al[4] for more details.

§COPS: point score based on a patient's healthcare utilization diagnoses (during the year preceding admission to the hospital). Analogous to present on admission (POA) coding. Scores can range from 0 to a theoretical maximum of 701, but scores >200 are rare. With respect to a patient's preexisting comorbidity burden, the unadjusted relationship of COPS and inpatient mortality is as follows: a COPS <50 is associated with a mortality risk of <1%; 50–100 with a mortality risk of 1%–5%; 100–145 with a mortality risk of 5%–10%; and >145 with a mortality risk of 10% or more. See text and Escobar et al[4] for more details.

∥Numbers for patients who survived last hospitalization to discharge are available upon request.

Figure 1 shows how we developed the analysis cohort, by removing patients with a comfort‐care‐only order placed within 4 hours after admission (369 patients/744 hospitalizations) and patients who were never admitted to the ward or TCU (7,220/10,574). This left a cohort consisting of 94,833 patients who experienced 133,879 hospitalizations spanning a total of 1,079,062 shifts. We then removed shifts where: 1) a patient was not on the ward at the start of a shift, or was on the ward for <4 hours of a shift; 2) the patient had a comfort‐care order in place at the start of the shift; and 3) the patient died and was ineligible to be a case (the patient had a DNR order in place or died in the ICU). The final cohort eligible for sampling consisted of 846,907 shifts, which involved a total of 92,797 patients and 130,627 hospitalizations. There were a total of 4,036 event shifts, which included 3,224 where a patient was transferred from the ward to the ICU, 717 from the TCU to the ICU, and 95 where a patient died on the ward or TCU without a DNR order in place. We then randomly selected 39,782 comparison shifts. Thus, our final cohort for analysis included 4,036 event shifts (1,979 derivation/2,057 validation) and 39,782 comparison shifts (19,509 derivation/20,273 validation). As a secondary validation, we also applied model coefficients to the 429 event shifts excluded due to the <4‐hour length‐of‐stay requirement.

Figure 1
Development of sampling cohort. *There are 429 event shifts excluded; see text for details. Abbreviations: DNR, do not resuscitate; ICU, intensive care unit; TCU, transitional care unit.
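The cohort flow reported above is internally consistent, which can be checked directly from the figures given in the text:

```python
# Sanity check of the cohort-flow arithmetic; all figures are taken
# directly from the text.
initial_patients, initial_hospitalizations = 102_488, 145_335
removed_patients, removed_hospitalizations = 66, 138

# Patients: 102,488 - 66 = 102,422; minus comfort-care-only (369) and
# never-ward (7,220) patients = 94,833.
assert initial_patients - removed_patients == 102_422
assert 102_422 - 369 - 7_220 == 94_833

# Hospitalizations: 145,335 - 138 - 744 - 10,574 = 133,879.
assert (initial_hospitalizations - removed_hospitalizations
        - 744 - 10_574) == 133_879

# Event shifts: ward-to-ICU transfers, TCU-to-ICU transfers, and ward/TCU
# deaths with no DNR order in place.
event_shifts = 3_224 + 717 + 95
assert event_shifts == 4_036
assert 1_979 + 2_057 == event_shifts  # derivation/validation events
assert 19_509 + 20_273 == 39_782      # comparison shifts
```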

Table 2 compares event shifts with comparison shifts. In the 24 hours preceding ICU transfer, patients who were subsequently transferred had statistically significant, but not necessarily clinically significant, differences in terms of these variables. However, missing laboratory data were more common, ranging from 18% to 31% of all shifts (we did not incorporate laboratory tests for which ≥35% of the shifts had missing data).

Event and Comparison Shifts

| Predictor | Event Shifts | Comparison Shifts | P |
| Number | 4,036 | 39,782 | |
| Age (mean ± SD) | 67.19 ± 15.25 | 65.41 ± 17.40 | <0.001 |
| Male (n, %) | 2,007 (49.73%) | 17,709 (44.52%) | <0.001 |
| Day shift | 1,364 (33.80%) | 17,714 (44.53%) | <0.001 |
| LAPS* | 27.89 ± 22.10 | 20.49 ± 20.16 | <0.001 |
| COPS† | 116.33 ± 72.31 | 100.81 ± 68.44 | <0.001 |
| Full code (n, %)‡ | 3,496 (86.2%) | 32,156 (80.8%) | <0.001 |
| ICU shift during hospitalization§ | 3,964 (98.22%) | 7,197 (18.09%) | <0.001 |
| Unplanned transfer to ICU during hospitalization∥ | 353 (8.8%) | 1,466 (3.7%) | <0.001 |
| Temperature (mean ± SD) | 98.15 (1.13) | 98.10 (0.85) | 0.009 |
| Heart rate (mean ± SD) | 90.30 (20.48) | 79.86 (5.27) | <0.001 |
| Respiratory rate (mean ± SD) | 20.36 (3.70) | 18.87 (1.79) | <0.001 |
| Systolic blood pressure (mean ± SD) | 123.65 (23.26) | 126.21 (19.88) | <0.001 |
| Diastolic blood pressure (mean ± SD) | 68.38 (14.49) | 69.46 (11.95) | <0.001 |
| Oxygen saturation (mean ± SD) | 95.72% (3.00) | 96.47% (2.26) | <0.001 |
| MEWS(re)¶ (mean ± SD) | 3.64 (2.02) | 2.34 (1.61) | <0.001 |
|   % <5 | 74.86% | 92.79% | |
|   % ≥5 | 25.14% | 7.21% | <0.001 |
| Proxy for measured lactate# (mean ± SD) | 36.85 (28.24) | 28.73 (16.74) | <0.001 |
|   % Missing in 24 hr before start of shift** | 17.91% | 28.78% | <0.001 |
| Blood urea nitrogen (mean ± SD) | 32.03 (25.39) | 22.72 (18.9) | <0.001 |
|   % Missing in 24 hr before start of shift | 19.67% | 20.90% | <0.001 |
| White blood cell count × 1000 (mean ± SD) | 12.33 (11.42) | 9.83 (6.58) | <0.001 |
|   % Missing in 24 hr before start of shift | 21.43% | 30.98% | <0.001 |
| Hematocrit (mean ± SD) | 33.08 (6.28) | 33.07 (5.25) | 0.978 |
|   % Missing in 24 hr before start of shift | 19.87% | 29.55% | <0.001 |

NOTE: Code status, vital sign, and laboratory values measured closest to the start of the shift (7 AM or 7 PM) are used. Abbreviations: COPS, COmorbidity Point Score; ICU, intensive care unit; LAPS, Laboratory Acute Physiology Score; MEWS(re), Modified Early Warning Score (retrospective electronic); SD, standard deviation.

*LAPS; see Table 1, text, and Escobar et al[4] for more details.

†COPS; see Table 1, text, and Escobar et al[4] for more details.

‡Refers to patients who had an active full code order at the start of the sampling time frame.

§See text for explanation of the sampling time frame, and how both cases and controls could have been in the ICU.

∥See text for explanation of how both cases and controls could have experienced an unplanned transfer to the ICU.

¶MEWS(re); see text and Subbe et al[10] for a description of this score.

#(Anion gap/serum bicarbonate) × 100.

**Rates of missing data for vital signs are not shown because <3% of the shifts were missing these data.

After conducting multiple analyses using the derivation dataset, we developed 24 submodels, a compromise between our finding that primary‐condition‐specific models showed better performance and the fact that we had very few events among patients with certain primary conditions (eg, pericarditis/valvular heart disease), which forced us to create composite categories (eg, a category pooling patients with pericarditis, atherosclerosis, and peripheral vascular disease). Table 3 lists variables included in our final submodels.

Variables Included in Final Electronic Medical Record‐Based Models
Variable | Description
  • Abbreviations: COPS, COmorbidity Point Score; LAPS, Laboratory Acute Physiology Score; LOS, length of stay.

  • LAPS based on 14 laboratory test results obtained in the 72 hr preceding hospitalization. See text and Escobar et al4 for details.

  • COPS based on a patient's diagnoses in the 12 mo preceding hospitalization. See text and Escobar et al4 for details. Indicator variable (for patients in whom a COPS could not be obtained) also included in models.

  • See text and Supporting Information, Appendices, in the online version of this article for details on imputation strategy employed when values were missing. See Wrenn14 and Rocktaeschel et al16 for justification for use of the combination of anion gap and serum bicarbonate.

Directive status | Full code or not full code
LAPS* | Admission physiologic severity of illness score (continuous variable ranging from 0 to 256). Standardized and included as LAPS and LAPS squared.
COPS | Comorbidity burden score (continuous variable ranging from 0 to 701). Standardized and included as COPS and COPS squared.
COPS status | Indicator for absent comorbidity data
LOS at T0 | Length of stay in the hospital (total time in hours) at T0; standardized.
T0 time of day | 7 AM or 7 PM
Temperature | Worst (highest) temperature in 24 hr preceding T0; variability in temperature in 24 hr preceding T0.
Heart rate | Most recent heart rate in 24 hr preceding T0; variability in heart rate in 24 hr preceding T0.
Respiratory rate | Most recent respiratory rate in 24 hr preceding T0; worst (highest) respiratory rate in 24 hr preceding T0; variability in respiratory rate in 24 hr preceding T0.
Diastolic blood pressure | Most recent diastolic blood pressure in 24 hr preceding T0, transformed by subtracting 70 from the actual value and squaring the result. Any value above 2,000 is then set to 2,000, yielding a continuous variable ranging from 0 to 2,000.
Systolic pressure | Variability in systolic blood pressure in 24 hr preceding T0.
Pulse oximetry | Worst (lowest) oxygen saturation in 24 hr preceding T0; variability in oxygen saturation in 24 hr preceding T0.
Neurological status | Most recent neurological status check in 24 hr preceding T0.
Laboratory tests | Blood urea nitrogen; proxy for measured lactate = (anion gap/serum bicarbonate) × 100; hematocrit; total white blood cell count
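As a concrete illustration of the transformations in Table 3, the diastolic blood pressure term can be computed as follows (a minimal sketch in Python; the function name is ours, not from the study code):

```python
def transform_dbp(dbp_mmhg: float) -> float:
    """Diastolic blood pressure transformation from Table 3:
    subtract 70 from the most recent value, square the result,
    and cap at 2,000, yielding a value in [0, 2000] that
    penalizes both abnormally high and abnormally low pressures."""
    return min((dbp_mmhg - 70.0) ** 2, 2000.0)
```

Note that the transformation is symmetric around 70 mm Hg: a pressure of 40 and a pressure of 100 both map to 900, and any deviation beyond about 45 mm Hg is truncated at the cap.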

Table 4 summarizes key results in the validation dataset. Across all diagnoses, the MEWS(re) had a c‐statistic of 0.709 (95% confidence interval, 0.697–0.721) in the derivation dataset and 0.698 (0.686–0.710) in the validation dataset. In the validation dataset, the MEWS(re) performed best among patients with a set of gastrointestinal diagnoses (c = 0.792; 0.726–0.857) and worst among patients with congestive heart failure (0.541; 0.500–0.620). In contrast, across all primary conditions, the EMR‐based models had a c‐statistic of 0.845 (0.826–0.863) in the derivation dataset and 0.775 (0.753–0.797) in the validation dataset. In the validation dataset, the EMR‐based models also performed best among patients with a set of gastrointestinal diagnoses (0.841; 0.783–0.897) and worst among patients with congestive heart failure (0.683; 0.610–0.755). A negative correlation (R = −0.63) was evident between the number of event shifts in a submodel and the drop in the c‐statistic seen in the validation dataset.

Best and Worst Performing Submodels in the Validation Dataset
Diagnoses Group* | Event Shifts (No. in Validation Dataset) | Comparison Shifts (No. in Validation Dataset) | c‐Statistic, MEWS(re) | c‐Statistic, EMR Model
  • Abbreviations: EMR, electronic medical record; GI, gastrointestinal; MEWS(re), Modified Early Warning Score (retrospective electronic).

  • Specific International Classification of Diseases (ICD) codes used are detailed in the Supporting Information, Appendices, in the online version of this article.

  • MEWS(re); see text, Supporting Information, Appendices, in the online version of this article, and Subbe et al10 for more details.

  • Model based on comprehensive data from EMR; see text, Table 3, and Supporting Information, Appendices, in the online version of this article for more details.

  • This group of diagnoses includes appendicitis, cholecystitis, cholangitis, hernias, and pancreatic disorders.

  • This group of diagnoses includes: gastrointestinal hemorrhage, miscellaneous disorders affecting the stomach and duodenum, diverticulitis, abdominal symptoms, nausea with vomiting, and blood in stool.

  • This group of diagnoses includes inflammatory bowel disease, malabsorption syndromes, gastrointestinal obstruction, and enteritides.

Acute myocardial infarction | 36 | 169 | 0.541 | 0.572
Diseases of pulmonary circulation and cardiac dysrhythmias | 40 | 329 | 0.565 | 0.645
Seizure disorders | 45 | 497 | 0.594 | 0.647
Rule out myocardial infarction | 77 | 727 | 0.602 | 0.648
Pneumonia | 163 | 847 | 0.741 | 0.801
GI diagnoses, set A | 58 | 942 | 0.755 | 0.803
GI diagnoses, set B | 256 | 2,610 | 0.772 | 0.806
GI diagnoses, set C | 46 | 520 | 0.792 | 0.841
All diagnoses | 2,032 | 20,106 | 0.698 | 0.775

We also compared model performance when our datasets were restricted to 1 randomly selected observation per patient; in these analyses, the total number of event shifts was 3,647 and the number of comparison shifts was 29,052. The c‐statistic for the MEWS(re) in the derivation dataset was 0.709 (0.694–0.725); in the validation dataset, it was 0.698 (0.692–0.714). The corresponding values for the EMR‐based models were 0.856 (0.835–0.877) and 0.780 (0.756–0.804). We also tested models in which, instead of dropping shifts with missing vital signs, we imputed missing vital signs to their normal value. The c‐statistic for the EMR‐based model with imputed vital sign values was 0.842 (0.823–0.861) in the derivation dataset and 0.773 (0.752–0.794) in the validation dataset. Lastly, we applied model coefficients to a dataset consisting of 4,290 randomly selected comparison shifts plus the 429 shifts excluded because of the 4‐hour length‐of‐stay criterion. The c‐statistic for this analysis was 0.756 (0.703–0.809).

As a general rule, the EMR‐based models were more than twice as efficient as the MEWS(re). For example, a MEWS(re) threshold of 6 as the trigger for an alarm would identify 15% of all transfers to the ICU, with 34.4 false alarms for each transfer; in contrast, using the EMR‐based approach to identify 15% of all transfers, there were 14.5 false alarms for each transfer. Applied to the entire KPMCP Northern California Region, using the MEWS(re), a total of 52 patients per day would need to be evaluated, but only 22 per day using the EMR‐based approach. If one employed a MEWS(re) threshold of 4, this would lead to identification of 44% of all transfers, with a ratio of 69 false alarms for each transfer; using the EMR, the ratio would be 34 to 1. Across the entire KPMCP, a total of 276 patients per day (or about 19.5 a day per hospital) would need to be evaluated using the MEWS(re), but only 136 (or about 9.5 per hospital per day) using the EMR.
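The workload figures above follow from two quantities per threshold: the fraction of transfers identified (sensitivity) and the number of false alarms per true transfer. The arithmetic can be sketched as follows (the function name and inputs are ours; the regional transfer rate is an assumed input, not a figure reported in the study):

```python
def daily_workload(transfers_per_day: float, sensitivity: float,
                   false_alarms_per_transfer: float) -> float:
    """Patients flagged for evaluation per day = true detections plus
    false alarms, where each detected transfer carries
    `false_alarms_per_transfer` additional evaluations."""
    detected = transfers_per_day * sensitivity
    return detected * (1.0 + false_alarms_per_transfer)
```

For instance, assuming roughly 10 unplanned transfers per day region-wide, a strategy detecting 15% of transfers at 34.4 false alarms each yields on the order of 50 evaluations per day, while the same sensitivity at 14.5 false alarms each yields roughly 22, consistent in magnitude with the comparison above.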

DISCUSSION

Using data from a large hospital cohort, we have developed a predictive model suitable for use in non‐ICU populations cared for in integrated healthcare settings with fully automated EMRs. The overall performance of our model, which incorporates acute physiology, diagnosis, and longitudinal data, is superior to the predictive ability of a model that can be assigned manually. This is not surprising, given that scoring systems such as the MEWS make an explicit tradeoff: they lose information found in multiple variables in exchange for ease of manual assignment. Currently, the model described in this report is being implemented in a simulated environment, a final safety test prior to piloting real‐time provision of probability estimates to clinicians and nurses. Though not yet ready for real‐time use, our model can reasonably be tested on the KPHC shadow server, since evaluation in a simulated environment is a critical step prior to deployment for clinical use. We also anticipate further refinement and revalidation as more inpatient data become available in the KPMCP and elsewhere.

A number of limitations to our approach must be emphasized. In developing our models, we determined that, while modeling by clinical condition was important, the study outcome was rare for some primary conditions. In these diagnostic groups, which accounted for 12.5% of the event shifts and 10.6% of the comparison shifts, the c‐statistic in the validation dataset was <0.70. Since all 22 KPMCP hospitals are now online and will generate an additional 150,000 adult hospitalizations per year, we expect to be able to correct this problem prior to deployment of these models for clinical use. Having additional data will permit us to improve model discrimination and thus decrease the evaluation‐to‐detection ratio. In future iterations of these models, more experimentation with grouping of International Classification of Diseases (ICD) codes may be required. Grouping ICD codes is not an easy problem to resolve: diagnoses within a group must share common pathophysiology, yet each group must contain enough adverse events to support stable statistical models.

Ideally, it would have been desirable to employ a more objective measure of deterioration, since the decision to transfer a patient to the ICU is discretionary. However, we have found that key data points needed to define such a measure (eg, vital signs) are not consistently charted when a patient deteriorates; this is not surprising outside the research setting, given that nurses and physicians involved in a transfer may be focusing on caring for the patient rather than immediately charting. Given the complexities of end‐of‐life‐care decision‐making, we could not employ death as the outcome of interest. A related issue is that our model does not differentiate between reasons for needing transfer to the ICU, an issue recently discussed by Bapoje et al.18

Our model does not address an important issue raised by Bapoje et al18 and Litvak, Pronovost, and others,19, 20 namely, whether a patient should have been admitted to a non‐ICU setting in the first place. Our team is currently developing a model for doing exactly this (providing decision support for triage in the emergency department), but discussion of this methodology is outside the scope of this article.

Because of resource and data limitations, our model also does not include newborns, children, women admitted for childbirth, or patients transferred from non‐KPMCP hospitals. However, the approach described here could serve as a starting point for developing models for these other populations.

The generalizability of our model must also be considered. The Northern California KPMCP is unusual in having large electronic databases that include physiologic as well as longitudinal patient data. Many hospitals cannot take advantage of all the methods described here. However, the methods we employed could be modified for use by hospital systems in countries such as Great Britain and Canada, and entities such as the Veterans Administration Hospital System in the United States. The KPMCP population, an insured population with few barriers to access, is healthier than the general population, and some population subsets are underrepresented in our cohort. Practice patterns may also vary. Nonetheless, the model described here could serve as a good starting point for future collaborative studies, and it would be possible to develop models suitable for use by stand‐alone hospitals (eg, recalibrating so that one used a Charlson comorbidity21 score based on present on‐admission codes rather than the COPS).

The need for early detection of patient deterioration has played a major role in the development of rapid response teams, as well as scores such as the MEWS. In particular, entities such as the Institute for Healthcare Improvement have advocated the use of early warning systems.22 However, having a statistically robust model to support an early warning system is only part of the solution, and a number of new challenges must then be addressed. The first is actual electronic deployment. Existing inpatient EMRs were not designed with complex calculations in mind, and we anticipate that some degradation in performance will occur when we test our models using real‐time data capture. As Bapoje et al point out, simply having an alert may be insufficient, since not all transfers are preventable.18 Early warning systems also raise ethical issues (for example, what should be done if an alert leads a clinician to confront the fact that an end‐of‐life‐care discussion needs to occur?). From a research perspective, if one were to formally test the benefits of such models, it would be critical to define outcome measures other than death (which is strongly affected by end‐of‐life‐care decisions) or ICU transfer (which is often desirable).

In conclusion, we have developed an approach for predicting impending physiologic deterioration of hospitalized adults outside the ICU. Our approach illustrates how organizations can take maximal advantage of EMRs in a manner that exceeds meaningful use specifications.23, 24 Our study highlights the possibility of using fully automated EMR data for building and applying sophisticated statistical models in settings other than the highly monitored ICU without the need for additional equipment. It also expands the universe of severity scoring to one in which probability estimates are provided in real time and throughout an entire hospitalization. Model performance will undoubtedly improve over time, as more patient data become available. Although our approach has important limitations, it is suitable for testing using real‐time data in a simulated environment. Such testing would permit identification of unanticipated problems and quantification of the degradation of model performance due to real life factors, such as delays in vital signs charting or EMR system brownouts. It could also serve as the springboard for future collaborative studies, with a broader population base, in which the EMR becomes a tool for care, not just documentation.

Acknowledgements

We thank Ms Marla Gardner and Mr John Greene for their work in the development phase of this project. We are grateful to Brian Hoberman, Andrew Hwang, and Marc Flagg from the RIMS group; to Colin Stobbs, Sriram Thiruvenkatachari, and Sundeep Sood from KP IT, Inc; and to Dennis Andaya, Linda Gliner, and Cyndi Vasallo for their assistance with data‐quality audits. We are also grateful to Dr Philip Madvig, Dr Paul Feigenbaum, Dr Alan Whippy, Mr Gregory Adams, Ms Barbara Crawford, and Dr Marybeth Sharpe for their administrative support and encouragement; and to Dr Alan S. Go, Acting Director of the Kaiser Permanente Division of Research, for reviewing the manuscript.

References
  1. Barnett MJ, Kaboli PJ, Sirio CA, Rosenthal GE. Day of the week of intensive care admission and patient outcomes: a multisite regional evaluation. Med Care. 2002;40(6):530–539.
  2. Ensminger SA, Morales IJ, Peters SG, et al. The hospital mortality of patients admitted to the ICU on weekends. Chest. 2004;126(4):1292–1298.
  3. Luyt CE, Combes A, Aegerter P, et al. Mortality among patients admitted to intensive care units during weekday day shifts compared with "off" hours. Crit Care Med. 2007;35(1):3–11.
  4. Escobar G, Greene J, Scheirer P, Gardner M, Draper D, Kipnis P. Risk adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232–239.
  5. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63(7):798–803.
  6. Escobar GJ, Greene JD, Gardner MN, Marelich GP, Quick B, Kipnis P. Intra‐hospital transfers to a higher level of care: contribution to total hospital and intensive care unit (ICU) mortality and length of stay (LOS). J Hosp Med. 2011;6(2):74–80.
  7. Chambrin MC, Ravaux P, Calvelo‐Aros D, Jaborska A, Chopin C, Boniface B. Multicentric study of monitoring alarms in the adult intensive care unit (ICU): a descriptive analysis. Intensive Care Med. 1999;25(12):1360–1366.
  8. Saria S, Rajani AK, Gould J, Koller D, Penn AA. Integration of early physiological responses predicts later illness severity in preterm infants. Sci Transl Med. 2010;2(48):48ra65.
  9. Subbe CP, Gao H, Harrison DA. Reproducibility of physiological track‐and‐trigger warning systems for identifying at‐risk patients on the ward. Intensive Care Med. 2007;33(4):619–624.
  10. Subbe CP, Kruger M, Rutherford P, Gemmel L. Validation of a Modified Early Warning Score in medical admissions. Q J Med. 2001;94:521–526.
  11. Subbe CP, Davies RG, Williams E, Rutherford P, Gemmell L. Effect of introducing the Modified Early Warning score on clinical outcomes, cardio‐pulmonary arrests and intensive care utilisation in acute medical admissions. Anaesthesia. 2003;58(8):797–802.
  12. MERIT Study Investigators. Introduction of the medical emergency team (MET) system: a cluster‐randomized controlled trial. Lancet. 2005;365(9477):2091–2097.
  13. Keller AS, Kirkland LL, Rajasekaran SY, Cha S, Rady MY, Huddleston JM. Unplanned transfers to the intensive care unit: the role of the shock index. J Hosp Med. 2010;5(8):460–465.
  14. Wrenn K. The delta (delta) gap: an approach to mixed acid‐base disorders. Ann Emerg Med. 1990;19(11):1310–1313.
  15. Williamson JC. Acid‐base disorders: classification and management strategies. Am Fam Physician. 1995;52(2):584–590.
  16. Rocktaeschel J, Morimatsu H, Uchino S, Bellomo R. Unmeasured anions in critically ill patients: can they predict mortality? Crit Care Med. 2003;31(8):2131–2136.
  17. Cook NR. Use and misuse of the receiver operating characteristic curve in risk prediction. Circulation. 2007;115(7):928–935.
  18. Bapoje SR, Gaudiani JL, Narayanan V, Albert RK. Unplanned transfers to a medical intensive care unit: causes and relationship to preventable errors in care. J Hosp Med. 2011;6(2):68–72.
  19. Litvak E, Pronovost PJ. Rethinking rapid response teams. JAMA. 2010;304(12):1375–1376.
  20. Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA. 2006;296(13):1645–1647.
  21. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–383.
  22. Institute for Healthcare Improvement. Early Warning Systems: The Next Level of Rapid Response. 2011. http://www.ihi.org/IHI/Programs/AudioAndWebPrograms/ExpeditionEarlyWarningSystemsTheNextLevelofRapidResponse.htm?player=wmp. Accessed April 6, 2011.
  23. Bowes WA. Assessing readiness for meeting meaningful use: identifying electronic health record functionality and measuring levels of adoption. AMIA Annu Symp Proc. 2010;2010:66–70.
  24. Medicare and Medicaid Programs; Electronic Health Record Incentive Program. Final Rule. Fed Regist. 2010;75(144):44313–44588.
Journal of Hospital Medicine 7(5):388–395

Patients in general medical–surgical wards who experience unplanned transfer to the intensive care unit (ICU) have increased mortality and morbidity.1–3 Using an externally validated methodology permitting assessment of illness severity and mortality risk among all hospitalized patients,4, 5 we recently documented observed‐to‐expected mortality ratios >3.0 and an excess length of stay of 10 days among patients who experienced such transfers.6

It is possible to predict adverse outcomes among monitored patients (eg, patients in the ICU or undergoing continuous electronic monitoring).7, 8 However, prediction of unplanned transfers among medical–surgical ward patients presents challenges. Data collection (vital signs and laboratory tests) is relatively infrequent. The event rate (3% of hospital admissions) is low, and the rate in narrow time periods (eg, 12 hours) is extremely low: a hospital with 4,000 admissions per year might experience 1 unplanned transfer to the ICU every 3 days. Not surprisingly, the performance of models suitable for predicting ward patients' need for intensive care within narrow time frames has been disappointing.9 The Modified Early Warning Score (MEWS) has a c‐statistic (area under the receiver operating characteristic curve) of 0.67,10–12 and our own model incorporating 14 laboratory tests, but no vital signs, has excellent performance with respect to predicting inpatient mortality but poor performance with respect to unplanned transfer.6

In this report, we describe the development and validation of a complex predictive model suitable for use with ward patients. Our objective for this work was to develop a predictive model based on clinical and physiologic data available in real time from a comprehensive electronic medical record (EMR), not a clinically intuitive, manually assigned tool. The outcome of interest was unplanned transfer from the ward to the ICU, or death on the ward in a patient who was full code. This model has been developed as part of a regional effort to decrease preventable mortality in the Northern California Kaiser Permanente Medical Care Program (KPMCP), an integrated healthcare delivery system with 22 hospitals.

MATERIALS AND METHODS

For additional details, see the Supporting Information, Appendices 1–12, in the online version of this article.

This project was approved by the KPMCP Institutional Board for the Protection of Human Subjects.

The Northern California KPMCP serves a total population of approximately 3.3 million members. All Northern California KPMCP hospitals and clinics employ the same information systems with a common medical record number and can track care covered by the plan but delivered elsewhere. Databases maintained by the KPMCP capture admission and discharge times, admission and discharge diagnoses and procedures (assigned by professional coders), bed histories permitting quantification of intra‐hospital and inter‐hospital transfers, as well as the results of all inpatient and outpatient laboratory tests. In July 2006, the KPMCP began deployment of the EMR developed by Epic Systems Corporation (www.epic.com), which has been adapted for the KPMCP and is known as KP HealthConnect (KPHC) in its hospitals. The last of these 22 hospitals went online in March 2010.

Our setting consisted of 14 hospitals in which the KPHC inpatient EMR had been running for at least 3 months (the KPMCP Antioch, Fremont, Hayward, Manteca, Modesto, Roseville, Sacramento, Santa Clara, San Francisco, Santa Rosa, South Sacramento, South San Francisco, Santa Teresa, and Walnut Creek hospitals). We have described the general characteristics of KPMCP hospitals elsewhere.4, 6 Our initial study population consisted of all patients admitted to these hospitals who met the following criteria: hospitalization began from November 1, 2006 through December 31, 2009; initial hospitalization occurred at a Northern California KPMCP hospital (ie, for inter‐hospital transfers, the first hospital stay occurred within the KPMCP); age ≥18 years; hospitalization was not for childbirth; and KPHC had been operational at the hospital for at least 3 months.

Analytic Approach

The primary outcome for this study was transfer to the ICU after admission to the hospital among patients residing either in a general medical–surgical ward (ward) or transitional care unit (TCU), or death in the ward or TCU in a patient who was full code at the time of death (ie, had the patient survived, s/he would have been transferred to the ICU). The unit of analysis for this study was a 12‐hour patient shift, which could begin with a 7 AM T0 (henceforth, day shift) or a 7 PM T0 (night shift); in other words, we aimed to predict the occurrence of an event within 12 hours of T0 using only data available prior to T0. A shift in which a patient experienced the primary study outcome is an event shift, while one in which a patient did not experience the primary outcome is a comparison shift. Using this approach, an individual patient record could consist of both event and comparison shifts, since some patients might have multiple unplanned transfers and some patients might have none. Our basic analytic approach consisted of creating a cohort of event and comparison shifts (10 comparison shifts were randomly selected for each event shift), splitting the cohort into a derivation dataset (50%) and a validation dataset (50%), developing a model using the derivation dataset, and then applying the coefficients of the derivation dataset to the validation dataset. Because some event shifts were excluded due to the minimum 4‐hour length‐of‐stay requirement, we also applied model coefficients to these excluded shifts and a set of randomly selected comparison shifts.

Since the purpose of these analyses was to develop models with maximal signal extraction from sparsely collected predictors, we did not block a time period after the T0 to allow for a reaction time to the alarm. Thus, since some events could occur immediately after the T0 (as can be seen in the Supporting Information, Appendices, in the online version of this article), our models would need to be run at intervals that are more frequent than 2 times a day.
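The sampling scheme just described can be sketched as follows (a minimal Python illustration; the field name, helper name, and fixed seed are our assumptions, not the study code):

```python
import random

def build_analysis_cohort(shifts, comparisons_per_event=10, seed=0):
    """Assemble the shift-level analysis cohort: keep every event shift,
    randomly draw 10 comparison shifts per event shift, then split the
    combined sample 50/50 into derivation and validation sets."""
    rng = random.Random(seed)
    event_shifts = [s for s in shifts if s["event"]]
    comparison_pool = [s for s in shifts if not s["event"]]
    n_draw = min(len(comparison_pool), comparisons_per_event * len(event_shifts))
    cohort = event_shifts + rng.sample(comparison_pool, n_draw)
    rng.shuffle(cohort)
    midpoint = len(cohort) // 2
    return cohort[:midpoint], cohort[midpoint:]  # (derivation, validation)
```

In the study itself the split was performed once on the full cohort of 4,036 event and 39,782 comparison shifts; the sketch simply makes the 10:1 sampling and 50/50 split explicit.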

Independent Variables

In addition to patients' age and sex, we tested the following candidate independent variables. Some of these variables are part of the KPMCP risk adjustment model4, 5 and were available electronically for all patients in the cohort. We grouped admission diagnoses into 44 broad diagnostic categories (primary conditions), and admission types into 4 groups (emergency medical, emergency surgical, elective medical, and elective surgical). We quantified patients' degree of physiologic derangement in the 72 hours preceding hospitalization with a Laboratory‐based Acute Physiology Score (LAPS) using 14 laboratory test results obtained prior to hospitalization; we also tested individual laboratory test results obtained after admission to the hospital. We quantified patients' comorbid illness burden using a COmorbidity Point Score (COPS) based on patients' preexisting diagnoses over the 12‐month period preceding hospitalization.4 We extracted temperature, heart rate, respiratory rate, systolic blood pressure, diastolic blood pressure, oxygen saturation, and neurological status from the EMR. We also tested the following variables based on specific information extracted from the EMR: shock index (heart rate divided by systolic blood pressure)13; care directive status (patients were placed into 4 groups: full code, partial code, do not resuscitate [DNR], and no care directive in place); and a proxy for measured lactate (PML; [anion gap/serum bicarbonate] × 100).14–16 For comparison purposes, we also created a retrospective electronically assigned MEWS, which we refer to as the MEWS(re), and assigned this score to patient records electronically using data from KP HealthConnect.
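Two of these derived predictors are simple arithmetic on charted and laboratory values. Reading the PML definition as (anion gap / serum bicarbonate) × 100, a sketch might look like this (function names are ours; the MEWS itself uses the published scoring bands of Subbe et al10 and is not reproduced here):

```python
def shock_index(heart_rate: float, systolic_bp: float) -> float:
    """Shock index: heart rate divided by systolic blood pressure."""
    return heart_rate / systolic_bp

def proxy_measured_lactate(anion_gap: float, bicarbonate: float) -> float:
    """Proxy for measured lactate (PML):
    (anion gap / serum bicarbonate) x 100."""
    return anion_gap / bicarbonate * 100.0
```

For example, a heart rate of 120 with a systolic pressure of 90 gives a shock index of about 1.33, and an anion gap of 12 with a bicarbonate of 24 gives a PML of 50.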

Statistical Methods

Analyses were performed in SAS 9.1, Stata 10, and R 2.12. Final validation was performed using SAS (SAS Institute Inc., Cary, North Carolina). Since we did not limit ourselves to traditional severity‐scoring approaches (eg, selecting the worst heart rate in a given time interval), but also included trend terms (eg, change in heart rate over the 24 hours preceding T0), the number of potential variables to test was very large. A detailed description of the statistical strategies employed for variable selection is provided in the Supporting Information, Appendices, in the online version of this article. Once variables were selected, our basic approach was to test a series of diagnosis‐specific logistic regression submodels using a variety of predictors that included vital signs, vital sign trends (eg, most recent heart rate minus earliest heart rate over the preceding 24 hours), and the other variables mentioned above.
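A trend term of the kind described here might be computed as follows (a sketch; the function name and the (time, value) data layout are our assumptions):

```python
def vital_trend(values_with_times):
    """Trend term for a vital sign over the 24 hr preceding T0:
    most recent charted value minus earliest charted value.
    `values_with_times` is a list of (time, value) pairs; charting
    times need not arrive in order, so we sort before differencing."""
    ordered = sorted(values_with_times)  # sorts by time, then value
    return ordered[-1][1] - ordered[0][1]
```

A rising heart rate, for instance, yields a positive trend: readings of 80 at hour 1 and 95 at hour 3 give a trend of +15 beats per minute.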

We assessed the ability of a submodel to correctly distinguish shifts in which patients experienced the study outcome from comparison shifts using the c‐statistic, as well as other metrics recommended by Cook.17 At the end of the modeling process, we pooled the results across all submodels. For vital signs, where the rate of missing data was <3%, we tested submodels in which we dropped shifts with missing data, as well as submodels in which we imputed missing vital signs to a normal value. For laboratory data, where the rate of missing data for a given shift was much greater, we employed a probabilistic imputation method that took into account when a laboratory test result became available.
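The c‐statistic reported throughout is the probability that a randomly chosen event shift receives a higher predicted risk than a randomly chosen comparison shift. A minimal, illustration‐only implementation (quadratic in the number of shifts, so unsuited to the full dataset; in practice one would use a library routine):

```python
def c_statistic(event_scores, comparison_scores):
    """Concordance (c-statistic / area under the ROC curve): the
    fraction of event-comparison pairs in which the event shift has
    the higher predicted risk; ties count as half-concordant."""
    pairs = concordant = 0.0
    for e in event_scores:
        for c in comparison_scores:
            pairs += 1
            if e > c:
                concordant += 1
            elif e == c:
                concordant += 0.5
    return concordant / pairs
```

A model that always ranks event shifts above comparison shifts scores 1.0; one that assigns everyone the same risk scores 0.5, the value of a coin flip.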

RESULTS

During the study period, a total of 102,488 patients experienced 145,335 hospitalizations at the study hospitals. We removed 66 patients with 138 hospitalizations for data quality reasons, leaving us with our initial study sample of 102,422 patients whose characteristics are summarized in Table 1. Table 1, in which the unit of analysis is an individual patient, shows that patients who experienced the primary outcome were similar to those patients described in our previous report, in terms of their characteristics on admission as well as in experiencing excess morbidity and mortality.6

Characteristics of Final Study Cohort
Characteristic | Never Admitted to ICU | Direct Admit to ICU From ED | Unplanned Transfer to ICU* | Other ICU Admission
  • NOTE: All overnight admissions to the study hospitals excluding 66 patients who were removed due to incomplete data. Column categories are mutually exclusive and based on a patient's first hospitalization during the study time period.

  • Abbreviations: COPS, COmorbidity Point Score, DNR, do not resuscitate; ED, emergency department; GI, gastrointestinal; ICU, intensive care unit; LAPS, Laboratory Acute Physiology Score; SD, standard deviation.

  • This group consists of all patients who meet our case definition and includes: 1) patients who had an unplanned transfer to the ICU from the transitional care unit (TCU) or ward; and 2) patients who died on the ward without a DNR order in place at the time of death (ie, who would have been transferred to the ICU had they survived).

  • This group includes patients admitted directly to the ICU from the operating room, post‐anesthesia recovery, or an unknown unit, as well as patients with a planned transfer to the ICU.

  • LAPS point score based on 14 laboratory test results obtained in the 72 hr preceding hospitalization. With respect to a patient's physiologic derangement, the unadjusted relationship of LAPS and inpatient mortality is as follows: a LAPS <7 is associated with a mortality risk of <1%; 7 to 30 with a mortality risk of 1%–5%; 30 to 60 with a mortality risk of 5%–9%; and >60 with a mortality risk of 10% or more. See text and Escobar et al4 for more details. COPS point score based on a patient's healthcare utilization diagnoses (during the year preceding admission to the hospital). Analogous to present on admission (POA) coding. Scores can range from 0 to a theoretical maximum of 701, but scores >200 are rare. With respect to a patient's preexisting comorbidity burden, the unadjusted relationship of COPS and inpatient mortality is as follows: a COPS <50 is associated with a mortality risk of <1%; 50 to 100 with a mortality risk of 1%–5%; 100 to 145 with a mortality risk of 5%–10%; and >145 with a mortality risk of 10% or more. See text and Escobar et al4 for more details. ∥Numbers for patients who survived last hospitalization to discharge are available upon request.

| | Never Admitted to ICU | Direct Admit to ICU From ED | Unplanned Transfer to ICU | Other ICU Admission |
|---|---|---|---|---|
| N | 89,269 | 5963 | 2880 | 4310 |
| Age (mean ± SD) | 61.26 ± 18.62 | 62.25 ± 18.13 | 66.12 ± 16.20 | 64.45 ± 15.91 |
| Male (n, %) | 37,228 (41.70%) | 3091 (51.84%) | 1416 (49.17%) | 2378 (55.17%) |
| LAPS (mean ± SD) | 13.02 ± 15.79 | 32.72 ± 24.85 | 24.83 ± 21.53 | 11.79 ± 18.16 |
| COPS (mean ± SD) | 67.25 ± 51.42 | 73.88 ± 57.42 | 86.33 ± 59.33 | 78.44 ± 52.49 |
| % Predicted mortality risk (mean ± SD) | 1.93% ± 3.98% | 7.69% ± 12.59% | 5.23% ± 7.70% | 3.66% ± 6.81% |
| Survived first hospitalization to discharge | 88,479 (99.12%) | 5336 (89.49%) | 2316 (80.42%) | 4063 (94.27%) |
| Care order on admission | | | | |
| Full code | 78,877 (88.36%) | 5198 (87.17%) | 2598 (90.21%) | 4097 (95.06%) |
| Partial code | 664 (0.74%) | 156 (2.62%) | 50 (1.74%) | 27 (0.63%) |
| Comfort care | 21 (0.02%) | 2 (0.03%) | 0 (0%) | 0 (0%) |
| DNR | 8227 (9.22%) | 539 (9.04%) | 219 (7.60%) | 161 (3.74%) |
| Comfort care and DNR | 229 (0.26%) | 9 (0.15%) | 2 (0.07%) | 2 (0.05%) |
| No order | 1251 (1.40%) | 59 (0.99%) | 11 (0.38%) | 23 (0.53%) |
| Admission diagnosis (n, %) | | | | |
| Pneumonia | 2385 (2.67%) | 258 (4.33%) | 242 (8.40%) | 68 (1.58%) |
| Sepsis | 5822 (6.52%) | 503 (8.44%) | 279 (9.69%) | 169 (3.92%) |
| GI bleeding | 9938 (11.13%) | 616 (10.33%) | 333 (11.56%) | 290 (6.73%) |
| Cancer | 2845 (3.19%) | 14 (0.23%) | 95 (3.30%) | 492 (11.42%) |
| Total hospital length of stay (days ± SD) | 3.08 ± 3.29 | 5.37 ± 7.50 | 12.16 ± 13.12 | 8.06 ± 9.53 |

Figure 1 shows how we developed the analysis cohort, by removing patients with a comfort‐care‐only order placed within 4 hours after admission (369 patients/744 hospitalizations) and patients who were never admitted to the ward or TCU (7,220/10,574). This left a cohort consisting of 94,833 patients who experienced 133,879 hospitalizations spanning a total of 1,079,062 shifts. We then removed shifts where: 1) a patient was not on the ward at the start of a shift, or was on the ward for <4 hours of a shift; 2) the patient had a comfort‐care order in place at the start of the shift; or 3) the patient died and was ineligible to be a case (the patient had a DNR order in place or died in the ICU). The final cohort eligible for sampling consisted of 846,907 shifts, which involved a total of 92,797 patients and 130,627 hospitalizations. There were a total of 4,036 event shifts, which included 3,224 where a patient was transferred from the ward to the ICU, 717 from the TCU to the ICU, and 95 where a patient died on the ward or TCU without a DNR order in place. We then randomly selected 39,782 comparison shifts. Thus, our final cohort for analysis included 4,036 event shifts (1,979 derivation/2,057 validation) and 39,782 comparison shifts (19,509/20,273). As a secondary validation, we also applied model coefficients to the 429 event shifts excluded due to the <4‐hour length‐of‐stay requirement.

Figure 1
Development of sampling cohort. *There are 429 event shifts excluded; see text for details. Abbreviations: DNR, do not resuscitate; ICU, intensive care unit; TCU, transitional care unit.
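The shift‐level exclusion rules described above can be sketched as a simple filter. This is an illustration only, not the study's code, and all field names are hypothetical:

```python
# Illustrative sketch of the shift-eligibility rules (hypothetical fields).

def eligible_shift(shift: dict) -> bool:
    """Return True if a 12-hour shift can enter the sampling cohort."""
    # Rule 1: patient must be on the ward/TCU for at least 4 hours of the shift
    if shift["hours_on_ward"] < 4:
        return False
    # Rule 2: no comfort-care order in place at the start of the shift
    if shift["comfort_care_at_start"]:
        return False
    # Rule 3: deaths that could never be cases are excluded
    # (death with a DNR order in place, or death in the ICU)
    if shift["died"] and (shift["dnr_at_death"] or shift["died_in_icu"]):
        return False
    return True

shifts = [
    {"hours_on_ward": 12, "comfort_care_at_start": False,
     "died": False, "dnr_at_death": False, "died_in_icu": False},
    {"hours_on_ward": 2, "comfort_care_at_start": False,
     "died": False, "dnr_at_death": False, "died_in_icu": False},
    {"hours_on_ward": 12, "comfort_care_at_start": False,
     "died": True, "dnr_at_death": True, "died_in_icu": False},
]
print([eligible_shift(s) for s in shifts])  # [True, False, False]
```

The second shift fails the 4‐hour rule and the third is an ineligible death (DNR in place), mirroring the exclusion cascade in Figure 1.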

Table 2 compares event shifts with comparison shifts. In the 24 hours preceding ICU transfer, patients who were subsequently transferred showed differences across these variables that were statistically significant, but not necessarily clinically significant. However, missing laboratory data were more common, ranging from 18% to 31% of all shifts (we did not incorporate laboratory tests for which ≥35% of the shifts had missing data for that test).

Event and Comparison Shifts
Predictor | Event Shifts | Comparison Shifts | P
  • NOTE: Code status, vital sign, and laboratory value measurements closest to the start of the shift (7 AM or 7 PM) are used. Abbreviations: COPS, COmorbidity Point Score; ICU, intensive care unit; LAPS, Laboratory Acute Physiology Score; MEWS(re), Modified Early Warning Score (retrospective electronic); SD, standard deviation.

  • LAPS; see Table 1, text, and Escobar et al4 for more details.

  • COPS; see Table 1, text, and Escobar et al4 for more details.

  • Refers to patients who had an active full code order at the start of the sampling time frame.

  • See text for explanation of sampling time frame, and how both cases and controls could have been in the ICU.

  • See text for explanation of how both cases and controls could have experienced an unplanned transfer to the ICU.

  • MEWS(re); see text and Subbe et al10 for a description of this score.

  • (Anion gap ÷ serum bicarbonate) × 100.

  • Rates of missing data for vital signs are not shown because <3% of the shifts were missing these data.

| Predictor | Event Shifts | Comparison Shifts | P |
|---|---|---|---|
| Number | 4036 | 39,782 | |
| Age (mean ± SD) | 67.19 ± 15.25 | 65.41 ± 17.40 | <0.001 |
| Male (n, %) | 2007 (49.73%) | 17,709 (44.52%) | <0.001 |
| Day shift | 1364 (33.80%) | 17,714 (44.53%) | <0.001 |
| LAPS (mean ± SD) | 27.89 ± 22.10 | 20.49 ± 20.16 | <0.001 |
| COPS (mean ± SD) | 116.33 ± 72.31 | 100.81 ± 68.44 | <0.001 |
| Full code (n, %) | 3496 (86.2%) | 32,156 (80.8%) | <0.001 |
| ICU shift during hospitalization | 3964 (98.22%) | 7197 (18.09%) | <0.001 |
| Unplanned transfer to ICU during hospitalization | 353 (8.8%) | 1466 (3.7%) | <0.001 |
| Temperature, mean (SD) | 98.15 (1.13) | 98.10 (0.85) | 0.009 |
| Heart rate, mean (SD) | 90.30 (20.48) | 79.86 (5.27) | <0.001 |
| Respiratory rate, mean (SD) | 20.36 (3.70) | 18.87 (1.79) | <0.001 |
| Systolic blood pressure, mean (SD) | 123.65 (23.26) | 126.21 (19.88) | <0.001 |
| Diastolic blood pressure, mean (SD) | 68.38 (14.49) | 69.46 (11.95) | <0.001 |
| Oxygen saturation, mean (SD) | 95.72% (3.00) | 96.47% (2.26) | <0.001 |
| MEWS(re), mean (SD) | 3.64 (2.02) | 2.34 (1.61) | <0.001 |
| % with MEWS(re) <5 | 74.86% | 92.79% | |
| % with MEWS(re) ≥5 | 25.14% | 7.21% | <0.001 |
| Proxy for measured lactate, mean (SD) | 36.85 (28.24) | 28.73 (16.74) | <0.001 |
| % missing in 24 hr before start of shift | 17.91% | 28.78% | <0.001 |
| Blood urea nitrogen, mean (SD) | 32.03 (25.39) | 22.72 (18.9) | <0.001 |
| % missing in 24 hr before start of shift | 19.67% | 20.90% | <0.001 |
| White blood cell count × 1000, mean (SD) | 12.33 (11.42) | 9.83 (6.58) | <0.001 |
| % missing in 24 hr before start of shift | 21.43% | 30.98% | <0.001 |
| Hematocrit, mean (SD) | 33.08 (6.28) | 33.07 (5.25) | 0.978 |
| % missing in 24 hr before start of shift | 19.87% | 29.55% | <0.001 |

After conducting multiple analyses using the derivation dataset, we developed 24 submodels, a compromise between our finding that primary‐condition‐specific models showed better performance and the fact that we had very few events among patients with certain primary conditions (eg, pericarditis/valvular heart disease), which forced us to create composite categories (eg, a category pooling patients with pericarditis, atherosclerosis, and peripheral vascular disease). Table 3 lists variables included in our final submodels.

Variables Included in Final Electronic Medical Record‐Based Models
Variable | Description
  • Abbreviations: COPS, COmorbidity Point Score; LAPS, Laboratory Acute Physiology Score; LOS, length of stay.

  • LAPS based on 14 laboratory test results obtained in the 72 hr preceding hospitalization. See text and Escobar et al4 for details.

  • COPS based on a patient's diagnoses in the 12 mo preceding hospitalization. See text and Escobar et al4 for details. Indicator variable (for patients in whom a COPS could not be obtained) also included in models.

  • See text and Supporting Information, Appendices, in the online version of this article for details on imputation strategy employed when values were missing. See Wrenn14 and Rocktaeschel et al16 for justification for use of the combination of anion gap and serum bicarbonate.

| Variable | Description |
|---|---|
| Directive status | Full code or not full code |
| LAPS | Admission physiologic severity of illness score (continuous variable ranging from 0 to 256). Standardized and included as LAPS and LAPS squared. |
| COPS | Comorbidity burden score (continuous variable ranging from 0 to 701). Standardized and included as COPS and COPS squared. |
| COPS status | Indicator for absent comorbidity data |
| LOS at T0 | Length of stay in the hospital (total time in hours) at T0; standardized |
| T0 time of day | 7 AM or 7 PM |
| Temperature | Worst (highest) temperature in 24 hr preceding T0; variability in temperature in 24 hr preceding T0 |
| Heart rate | Most recent heart rate in 24 hr preceding T0; variability in heart rate in 24 hr preceding T0 |
| Respiratory rate | Most recent respiratory rate in 24 hr preceding T0; worst (highest) respiratory rate in 24 hr preceding T0; variability in respiratory rate in 24 hr preceding T0 |
| Diastolic blood pressure | Most recent diastolic blood pressure in 24 hr preceding T0, transformed by subtracting 70 from the actual value and squaring the result; any value above 2000 is then set to 2000, yielding a continuous variable ranging from 0 to 2000 |
| Systolic pressure | Variability in systolic blood pressure in 24 hr preceding T0 |
| Pulse oximetry | Worst (lowest) oxygen saturation in 24 hr preceding T0; variability in oxygen saturation in 24 hr preceding T0 |
| Neurological status | Most recent neurological status check in 24 hr preceding T0 |
| Laboratory tests | Blood urea nitrogen; proxy for measured lactate = (anion gap ÷ serum bicarbonate) × 100; hematocrit; total white blood cell count |
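Two of the transformed predictors listed above lend themselves to a short sketch: the diastolic blood pressure term (centered at 70 mm Hg, squared, and capped at 2000) and the pattern of standardizing a score and entering it with its square. This is our illustrative reading of the table, not the study's code, and the mean/SD used below are made up:

```python
# Sketch of two predictor transformations from the table (illustrative only).

def dbp_term(dbp: float) -> float:
    """(DBP - 70)^2, capped at 2000: deviation in either direction adds risk."""
    return min((dbp - 70.0) ** 2, 2000.0)

def standardized_with_square(x: float, mean: float, sd: float):
    """Return (z, z^2) for a score such as LAPS or COPS (hypothetical mean/SD)."""
    z = (x - mean) / sd
    return z, z * z

print(dbp_term(70))   # 0.0 -> a 'normal' DBP contributes nothing
print(dbp_term(40))   # 900.0
print(dbp_term(130))  # 2000.0 -> capped
```

The cap keeps a single extreme reading from dominating the linear predictor, while the squared term lets risk rise for both hypotensive and hypertensive values.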

Table 4 summarizes key results in the validation dataset. Across all diagnoses, the MEWS(re) had a c‐statistic of 0.709 (95% confidence interval, 0.697–0.721) in the derivation dataset and 0.698 (0.686–0.710) in the validation dataset. In the validation dataset, the MEWS(re) performed best among patients with a set of gastrointestinal diagnoses (c = 0.792; 0.726–0.857) and worst among patients with congestive heart failure (0.541; 0.500–0.620). In contrast, across all primary conditions, the EMR‐based models had a c‐statistic of 0.845 (0.826–0.863) in the derivation dataset and 0.775 (0.753–0.797) in the validation dataset. In the validation dataset, the EMR‐based models also performed best among patients with a set of gastrointestinal diagnoses (0.841; 0.783–0.897) and worst among patients with congestive heart failure (0.683; 0.610–0.755). A negative correlation (R = −0.63) was evident between the number of event shifts in a submodel and the drop in the c‐statistic seen in the validation dataset.
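As a reminder of what these numbers mean, the c‐statistic is the probability that a randomly chosen event shift receives a higher predicted risk than a randomly chosen comparison shift. A minimal sketch computing it directly from that definition, on invented scores rather than study data:

```python
# Minimal c-statistic (AUC) from its rank definition (illustrative data).

def c_statistic(event_scores, comparison_scores):
    """Fraction of event/comparison pairs ranked concordantly (ties count 0.5)."""
    pairs = concordant = ties = 0
    for e in event_scores:
        for c in comparison_scores:
            pairs += 1
            if e > c:
                concordant += 1
            elif e == c:
                ties += 1
    return (concordant + 0.5 * ties) / pairs

events = [0.9, 0.6, 0.8]       # model probabilities for event shifts
comparisons = [0.2, 0.6, 0.3]  # model probabilities for comparison shifts
print(round(c_statistic(events, comparisons), 3))  # 0.944
```

A value of 0.5 corresponds to a coin flip; the 0.775 reported above means an event shift outranks a comparison shift about three times out of four.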

Best and Worst Performing Submodels in the Validation Dataset
Diagnosis Group | No. of Event Shifts in Validation Dataset | No. of Comparison Shifts in Validation Dataset | MEWS(re) c‐Statistic | EMR Model c‐Statistic
  • Abbreviations: EMR, electronic medical record; GI, gastrointestinal; MEWS(re), Modified Early Warning Score (retrospective electronic).

  • Specific International Classification of Diseases (ICD) codes used are detailed in the Supporting Information, Appendices, in the online version of this article.

  • MEWS(re); see text, Supporting Information, Appendices, in the online version of this article, and Subbe et al10 for more details.

  • Model based on comprehensive data from EMR; see text, Table 3, and Supporting Information, Appendices, in the online version of this article for more details.

  • This group of diagnoses includes appendicitis, cholecystitis, cholangitis, hernias, and pancreatic disorders.

  • This group of diagnoses includes: gastrointestinal hemorrhage, miscellaneous disorders affecting the stomach and duodenum, diverticulitis, abdominal symptoms, nausea with vomiting, and blood in stool.

  • This group of diagnoses includes inflammatory bowel disease, malabsorption syndromes, gastrointestinal obstruction, and enteritides.

| Diagnosis Group | Event Shifts | Comparison Shifts | MEWS(re) | EMR Model |
|---|---|---|---|---|
| Acute myocardial infarction | 36 | 169 | 0.541 | 0.572 |
| Diseases of pulmonary circulation and cardiac dysrhythmias | 40 | 329 | 0.565 | 0.645 |
| Seizure disorders | 45 | 497 | 0.594 | 0.647 |
| Rule out myocardial infarction | 77 | 727 | 0.602 | 0.648 |
| Pneumonia | 163 | 847 | 0.741 | 0.801 |
| GI diagnoses, set A | 58 | 942 | 0.755 | 0.803 |
| GI diagnoses, set B | 256 | 2,610 | 0.772 | 0.806 |
| GI diagnoses, set C | 46 | 520 | 0.792 | 0.841 |
| All diagnoses | 2,032 | 20,106 | 0.698 | 0.775 |

We also compared model performance when our datasets were restricted to 1 randomly selected observation per patient; in these analyses, the total number of event shifts was 3,647 and the number of comparison shifts was 29,052. The c‐statistic for the MEWS(re) in the derivation dataset was 0.709 (0.694–0.725); in the validation dataset, it was 0.698 (0.692–0.714). The corresponding values for the EMR‐based models were 0.856 (0.835–0.877) and 0.780 (0.756–0.804). We also tested models in which, instead of dropping shifts with missing vital signs, we imputed missing vital signs to their normal value. The c‐statistic for the EMR‐based model with imputed vital sign values was 0.842 (0.823–0.861) in the derivation dataset and 0.773 (0.752–0.794) in the validation dataset. Lastly, we applied model coefficients to a dataset consisting of 4,290 randomly selected comparison shifts plus the 429 shifts excluded because of the 4‐hour length‐of‐stay criterion. The c‐statistic for this analysis was 0.756 (0.703–0.809).

As a general rule, the EMR‐based models were more than twice as efficient as the MEWS(re). For example, a MEWS(re) threshold of 6 as the trigger for an alarm would identify 15% of all transfers to the ICU, with 34.4 false alarms for each transfer; in contrast, using the EMR‐based approach to identify 15% of all transfers, there were 14.5 false alarms for each transfer. Applied to the entire KPMCP Northern California Region, using the MEWS(re), a total of 52 patients per day would need to be evaluated, but only 22 per day using the EMR‐based approach. If one employed a MEWS(re) threshold of 4, this would lead to identification of 44% of all transfers, with a ratio of 69 false alarms for each transfer; using the EMR, the ratio would be 34 to 1. Across the entire KPMCP, a total of 276 patients per day (or about 19.5 a day per hospital) would need to be evaluated using the MEWS(re), but only 136 (or about 9.5 per hospital per day) using the EMR.
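The false‐alarms‐per‐detection arithmetic above can be illustrated with a small sketch (hypothetical scores, not study data):

```python
# Sketch of alarm-efficiency bookkeeping at a chosen alert threshold
# (hypothetical scores; not the study's calculation).

def alarm_efficiency(event_scores, comparison_scores, threshold):
    """Return (sensitivity, false alarms per detected event) at a threshold."""
    detected = sum(s >= threshold for s in event_scores)
    false_alarms = sum(s >= threshold for s in comparison_scores)
    sensitivity = detected / len(event_scores)
    ratio = false_alarms / detected if detected else float("inf")
    return sensitivity, ratio

events = [0.9, 0.8, 0.4, 0.3]
comparisons = [0.85, 0.6, 0.5, 0.2, 0.1, 0.05]
sens, ratio = alarm_efficiency(events, comparisons, threshold=0.7)
print(sens, ratio)  # 0.5 0.5 -> half the events caught, 0.5 false alarms each
```

Raising the threshold lowers both the sensitivity and the workup burden; the comparison in the text holds sensitivity fixed (15% or 44% of transfers identified) and compares the resulting false‐alarm ratios.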

DISCUSSION

Using data from a large hospital cohort, we have developed a predictive model suitable for use in non‐ICU populations cared for in integrated healthcare settings with fully automated EMRs. The overall performance of our model, which incorporates acute physiology, diagnosis, and longitudinal data, is superior to the predictive ability of a model that can be assigned manually. This is not surprising, given that scoring systems such as the MEWS make an explicit tradeoff, losing information found in multiple variables in exchange for ease of manual assignment. Currently, the model described in this report is being implemented in a simulated environment, a final safety test prior to piloting real‐time provision of probability estimates to clinicians and nurses. Though not yet ready for real‐time use, it is reasonable for our model to be tested using the KPHC shadow server, since evaluation in a simulated environment constitutes a critical step prior to deployment for clinical use. We also anticipate further refinement and revalidation to occur as more inpatient data become available in the KPMCP and elsewhere.

A number of limitations to our approach must be emphasized. In developing our models, we determined that, while modeling by clinical condition was important, the study outcome was rare for some primary conditions. In these diagnostic groups, which accounted for 12.5% of the event shifts and 10.6% of the comparison shifts, the c‐statistic in the validation dataset was <0.70. Since all 22 KPMCP hospitals are now online and will generate an additional 150,000 adult hospitalizations per year, we expect to be able to correct this problem prior to deployment of these models for clinical use. Having additional data will permit us to improve model discrimination and thus decrease the evaluation‐to‐detection ratio. In future iterations of these models, more experimentation with grouping of International Classification of Diseases (ICD) codes may be required. The problem of grouping ICD codes is not an easy one to resolve, in that diagnoses in the grouping must share common pathophysiology while having a grouping with a sufficient number of adverse events for stable statistical models.

Ideally, we would have employed a more objective measure of deterioration, since the decision to transfer a patient to the ICU is discretionary. However, we have found that key data points needed to define such a measure (eg, vital signs) are not consistently charted when a patient deteriorates; this is not surprising outside the research setting, given that nurses and physicians involved in a transfer may be focusing on caring for the patient rather than immediately charting. Given the complexities of end‐of‐life‐care decision‐making, we could not employ death as the outcome of interest. A related issue is that our model does not differentiate between reasons for needing transfer to the ICU, an issue recently discussed by Bapoje et al.18

Our model does not address an important issue raised by Bapoje et al18 and Litvak, Pronovost, and others,19, 20 namely, whether a patient should have been admitted to a non‐ICU setting in the first place. Our team is currently developing a model for doing exactly this (providing decision support for triage in the emergency department), but discussion of this methodology is outside the scope of this article.

Because of resource and data limitations, our model also does not include newborns, children, women admitted for childbirth, or patients transferred from non‐KPMCP hospitals. However, the approach described here could serve as a starting point for developing models for these other populations.

The generalizability of our model must also be considered. The Northern California KPMCP is unusual in having large electronic databases that include physiologic as well as longitudinal patient data. Many hospitals cannot take advantage of all the methods described here. However, the methods we employed could be modified for use by hospital systems in countries such as Great Britain and Canada, and entities such as the Veterans Administration Hospital System in the United States. The KPMCP population, an insured population with few barriers to access, is healthier than the general population, and some population subsets are underrepresented in our cohort. Practice patterns may also vary. Nonetheless, the model described here could serve as a good starting point for future collaborative studies, and it would be possible to develop models suitable for use by stand‐alone hospitals (eg, recalibrating so that one used a Charlson comorbidity21 score based on present on‐admission codes rather than the COPS).

The need for early detection of patient deterioration has played a major role in the development of rapid response teams, as well as scores such as the MEWS. In particular, entities such as the Institute for Healthcare Improvement have advocated the use of early warning systems.22 However, having a statistically robust model to support an early warning system is only part of the solution, and a number of new challenges must then be addressed. The first is actual electronic deployment. Existing inpatient EMRs were not designed with complex calculations in mind, and we anticipate that some degradation in performance will occur when we test our models using real‐time data capture. As Bapoje et al point out, simply having an alert may be insufficient, since not all transfers are preventable.18 Early warning systems also raise ethical issues (for example, what should be done if an alert leads a clinician to confront the fact that an end‐of‐life‐care discussion needs to occur?). From a research perspective, if one were to formally test the benefits of such models, it would be critical to define outcome measures other than death (which is strongly affected by end‐of‐life‐care decisions) or ICU transfer (which is often desirable).

In conclusion, we have developed an approach for predicting impending physiologic deterioration of hospitalized adults outside the ICU. Our approach illustrates how organizations can take maximal advantage of EMRs in a manner that exceeds meaningful use specifications.23, 24 Our study highlights the possibility of using fully automated EMR data for building and applying sophisticated statistical models in settings other than the highly monitored ICU without the need for additional equipment. It also expands the universe of severity scoring to one in which probability estimates are provided in real time and throughout an entire hospitalization. Model performance will undoubtedly improve over time, as more patient data become available. Although our approach has important limitations, it is suitable for testing using real‐time data in a simulated environment. Such testing would permit identification of unanticipated problems and quantification of the degradation of model performance due to real life factors, such as delays in vital signs charting or EMR system brownouts. It could also serve as the springboard for future collaborative studies, with a broader population base, in which the EMR becomes a tool for care, not just documentation.

Acknowledgements

We thank Ms Marla Gardner and Mr John Greene for their work in the development phase of this project. We are grateful to Brian Hoberman, Andrew Hwang, and Marc Flagg from the RIMS group; to Colin Stobbs, Sriram Thiruvenkatachari, and Sundeep Sood from KP IT, Inc; and to Dennis Andaya, Linda Gliner, and Cyndi Vasallo for their assistance with data‐quality audits. We are also grateful to Dr Philip Madvig, Dr Paul Feigenbaum, Dr Alan Whippy, Mr Gregory Adams, Ms Barbara Crawford, and Dr Marybeth Sharpe for their administrative support and encouragement; and to Dr Alan S. Go, Acting Director of the Kaiser Permanente Division of Research, for reviewing the manuscript.

Patients in general medical–surgical wards who experience unplanned transfer to the intensive care unit (ICU) have increased mortality and morbidity.1–3 Using an externally validated methodology permitting assessment of illness severity and mortality risk among all hospitalized patients,4, 5 we recently documented observed‐to‐expected mortality ratios >3.0 and excess length of stay of 10 days among patients who experienced such transfers.6

It is possible to predict adverse outcomes among monitored patients (eg, patients in the ICU or undergoing continuous electronic monitoring).7, 8 However, prediction of unplanned transfers among medical–surgical ward patients presents challenges. Data collection (vital signs and laboratory tests) is relatively infrequent. The event rate (3% of hospital admissions) is low, and the rate in narrow time periods (eg, 12 hours) is extremely low: a hospital with 4000 admissions per year might experience 1 unplanned transfer to the ICU every 3 days. Not surprisingly, the performance of models suitable for predicting ward patients' need for intensive care within narrow time frames has been disappointing.9 The Modified Early Warning Score (MEWS) has a c‐statistic (area under the receiver operating characteristic curve) of 0.67,10–12 and our own model, which incorporates 14 laboratory tests but no vital signs, has excellent performance with respect to predicting inpatient mortality but poor performance with respect to unplanned transfer.6

In this report, we describe the development and validation of a complex predictive model suitable for use with ward patients. Our objective for this work was to develop a predictive model based on clinical and physiologic data available in real time from a comprehensive electronic medical record (EMR), not a clinically intuitive, manually assigned tool. The outcome of interest was unplanned transfer from the ward to the ICU, or death on the ward in a patient who was full code. This model has been developed as part of a regional effort to decrease preventable mortality in the Northern California Kaiser Permanente Medical Care Program (KPMCP), an integrated healthcare delivery system with 22 hospitals.

MATERIALS AND METHODS

For additional details, see the Supporting Information, Appendices 1–12, in the online version of this article.

This project was approved by the KPMCP Institutional Board for the Protection of Human Subjects.

The Northern California KPMCP serves a total population of approximately 3.3 million members. All Northern California KPMCP hospitals and clinics employ the same information systems with a common medical record number and can track care covered by the plan but delivered elsewhere. Databases maintained by the KPMCP capture admission and discharge times; admission and discharge diagnoses and procedures (assigned by professional coders); bed histories permitting quantification of intra‐ and inter‐hospital transfers; and the results of all inpatient and outpatient laboratory tests. In July 2006, the KPMCP began deployment of the EMR developed by Epic Systems Corporation (www.epic.com), which has been adapted for the KPMCP and is known as KP HealthConnect (KPHC) in its hospitals. The last of these 22 hospitals went online in March 2010.

Our setting consisted of 14 hospitals in which the KPHC inpatient EMR had been running for at least 3 months (the KPMCP Antioch, Fremont, Hayward, Manteca, Modesto, Roseville, Sacramento, Santa Clara, San Francisco, Santa Rosa, South Sacramento, South San Francisco, Santa Teresa, and Walnut Creek hospitals). We have described the general characteristics of KPMCP hospitals elsewhere.4, 6 Our initial study population consisted of all patients admitted to these hospitals who met the following criteria: hospitalization began from November 1, 2006 through December 31, 2009; initial hospitalization occurred at a Northern California KPMCP hospital (ie, for inter‐hospital transfers, the first hospital stay occurred within the KPMCP); age ≥18 years; hospitalization was not for childbirth; and KPHC had been operational at the hospital for at least 3 months.

Analytic Approach

The primary outcome for this study was transfer to the ICU after admission to the hospital among patients residing either in a general medicalsurgical ward (ward) or transitional care unit (TCU), or death in the ward or TCU in a patient who was full code at the time of death (ie, had the patient survived, s/he would have been transferred to the ICU). The unit of analysis for this study was a 12‐hour patient shift, which could begin with a 7 AM T0 (henceforth, day shift) or a 7 PM T0 (night shift); in other words, we aimed to predict the occurrence of an event within 12 hours of T0 using only data available prior to T0. A shift in which a patient experienced the primary study outcome is an event shift, while one in which a patient did not experience the primary outcome is a comparison shift. Using this approach, an individual patient record could consist of both event and comparison shifts, since some patients might have multiple unplanned transfers and some patients might have none. Our basic analytic approach consisted of creating a cohort of event and comparison shifts (10 comparison shifts were randomly selected for each event shift), splitting the cohort into a derivation dataset (50%) and validation dataset (50%), developing a model using the derivation dataset, then applying the coefficients of the derivation dataset to the validation dataset. Because some event shifts were excluded due to the minimum 4‐hour length‐of‐stay requirement, we also applied model coefficients to these excluded shifts and a set of randomly selected comparison shifts.
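The sampling scheme described above (10 comparison shifts drawn at random per event shift, then a 50/50 derivation/validation split) can be sketched as follows. This is an illustration with made‐up shift identifiers, not the study's code:

```python
# Sketch of the case-control shift sampling and 50/50 split (illustrative).
import random

def build_cohort(event_shifts, comparison_pool, ratio=10, seed=42):
    """Sample `ratio` comparison shifts per event shift, label, shuffle, split."""
    rng = random.Random(seed)
    comparisons = rng.sample(comparison_pool, ratio * len(event_shifts))
    cohort = [(s, 1) for s in event_shifts] + [(s, 0) for s in comparisons]
    rng.shuffle(cohort)
    half = len(cohort) // 2
    return cohort[:half], cohort[half:]  # derivation, validation

events = list(range(100))        # stand-ins for event shift IDs
pool = list(range(100, 50_000))  # stand-ins for eligible comparison shift IDs
derivation, validation = build_cohort(events, pool)
print(len(derivation), len(validation))  # 550 550
```

Because the unit of analysis is the shift, a single patient can contribute both event and comparison shifts, exactly as described in the text.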

Since the purpose of these analyses was to develop models with maximal signal extraction from sparsely collected predictors, we did not block a time period after the T0 to allow for a reaction time to the alarm. Thus, since some events could occur immediately after the T0 (as can be seen in the Supporting Information, Appendices, in the online version of this article), our models would need to be run at intervals that are more frequent than 2 times a day.

Independent Variables

In addition to patients' age and sex, we tested the following candidate independent variables. Some of these variables are part of the KPMCP risk adjustment model4, 5 and were available electronically for all patients in the cohort. We grouped admission diagnoses into 44 broad diagnostic categories (primary conditions), and admission types into 4 groups (emergency medical, emergency surgical, elective medical, and elective surgical). We quantified patients' degree of physiologic derangement in the 72 hours preceding hospitalization with a Laboratory‐based Acute Physiology Score (LAPS) using 14 laboratory test results obtained prior to hospitalization; we also tested individual laboratory test results obtained after admission to the hospital. We quantified patients' comorbid illness burden using a COmorbidity Point Score (COPS) based on patients' preexisting diagnoses over the 12‐month period preceding hospitalization.4 We extracted temperature, heart rate, respiratory rate, systolic blood pressure, diastolic blood pressure, oxygen saturation, and neurological status from the EMR. We also tested the following variables based on specific information extracted from the EMR: shock index (heart rate divided by systolic blood pressure);13 care directive status (patients were placed into 4 groups: full code, partial code, do not resuscitate [DNR], and no care directive in place); and a proxy for measured lactate (PML; [anion gap ÷ serum bicarbonate] × 100).14–16 For comparison purposes, we also created a retrospective electronically assigned MEWS, which we refer to as the MEWS(re), and we assigned this score to patient records electronically using data from KP HealthConnect.
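Two of the derived predictors defined above, the shock index and the proxy for measured lactate, are simple ratios and can be sketched directly from their definitions in the text (the input values below are illustrative):

```python
# Two EMR-derived predictors, as defined in the text (illustrative inputs).

def shock_index(heart_rate: float, systolic_bp: float) -> float:
    """Heart rate divided by systolic blood pressure."""
    return heart_rate / systolic_bp

def proxy_measured_lactate(anion_gap: float, bicarbonate: float) -> float:
    """PML = (anion gap / serum bicarbonate) x 100."""
    return (anion_gap / bicarbonate) * 100.0

print(round(shock_index(110, 100), 2))           # 1.1
print(round(proxy_measured_lactate(18, 12), 1))  # 150.0
```

Both capture physiologic stress from routinely charted values: a shock index above ~0.9 and a high PML (wide anion gap with low bicarbonate) each suggest hypoperfusion without requiring a measured lactate.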

Statistical Methods

Analyses were performed in SAS 9.1, Stata 10, and R 2.12. Final validation was performed using SAS (SAS Institute Inc., Cary, North Carolina). Since we did not limit ourselves to traditional severity‐scoring approaches (eg, selecting the worst heart rate in a given time interval), but also included trend terms (eg, the change in heart rate over the 24 hours preceding T0), the number of potential variables to test was very large. A detailed description of the statistical strategies employed for variable selection is provided in the Supporting Information, Appendices, in the online version of this article. Once variables were selected, our basic approach was to test a series of diagnosis‐specific logistic regression submodels using a variety of predictors that included vital signs, vital sign trends (eg, the most recent heart rate minus the earliest heart rate in the preceding 24 hours), and the other above‐mentioned variables.

We assessed the ability of a submodel to correctly distinguish patients who died from survivors using the c‐statistic, as well as other metrics recommended by Cook.17 At the end of the modeling process, we pooled the results across all submodels. For vital signs, where the rate of missing data was <3%, we tested submodels in which we dropped shifts with missing data, as well as submodels in which we imputed missing vital signs to a normal value. For laboratory data, where the rate of missing data for a given shift was much greater, we employed a probabilistic imputation method that included consideration of when a laboratory test result became available.
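The diagnosis‐specific submodel idea described above can be caricatured in a few lines: one logistic model per diagnosis group, with predictions pooled across submodels for evaluation. Every coefficient and predictor name below is invented for illustration; this is not the study's model:

```python
# Sketch of diagnosis-specific logistic submodels with pooled predictions.
# All coefficients and feature names are hypothetical.
import math

SUBMODELS = {
    "pneumonia": {"intercept": -4.0, "laps": 0.04, "mews": 0.30},
    "gi_bleed":  {"intercept": -4.5, "laps": 0.03, "mews": 0.25},
}

def predict(shift: dict) -> float:
    """Probability of deterioration from the submodel for the shift's diagnosis."""
    m = SUBMODELS[shift["dx_group"]]
    logit = m["intercept"] + m["laps"] * shift["laps"] + m["mews"] * shift["mews"]
    return 1.0 / (1.0 + math.exp(-logit))

# Pool predictions from different submodels into one evaluation set
pooled = [predict(s) for s in (
    {"dx_group": "pneumonia", "laps": 50, "mews": 5},
    {"dx_group": "gi_bleed", "laps": 10, "mews": 2},
)]
print([round(p, 3) for p in pooled])
```

Pooling at the prediction stage is what allows a single overall c‐statistic to be reported across all 24 submodels, as in the Results.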

RESULTS

During the study period, a total of 102,488 patients experienced 145,335 hospitalizations at the study hospitals. We removed 66 patients with 138 hospitalizations for data quality reasons, leaving us with our initial study sample of 102,422 patients whose characteristics are summarized in Table 1. Table 1, in which the unit of analysis is an individual patient, shows that patients who experienced the primary outcome were similar to those patients described in our previous report, in terms of their characteristics on admission as well as in experiencing excess morbidity and mortality.6

Characteristics of Final Study Cohort

| Characteristic | Never Admitted to ICU | Direct Admit to ICU From ED | Unplanned Transfer to ICU* | Other ICU Admission† |
| --- | --- | --- | --- | --- |
| N | 89,269 | 5963 | 2880 | 4310 |
| Age (mean ± SD) | 61.26 ± 18.62 | 62.25 ± 18.13 | 66.12 ± 16.20 | 64.45 ± 15.91 |
| Male (n, %) | 37,228 (41.70%) | 3091 (51.84%) | 1416 (49.17%) | 2378 (55.17%) |
| LAPS‡ (mean ± SD) | 13.02 ± 15.79 | 32.72 ± 24.85 | 24.83 ± 21.53 | 11.79 ± 18.16 |
| COPS§ (mean ± SD) | 67.25 ± 51.42 | 73.88 ± 57.42 | 86.33 ± 59.33 | 78.44 ± 52.49 |
| % Predicted mortality risk (mean ± SD) | 1.93% ± 3.98% | 7.69% ± 12.59% | 5.23% ± 7.70% | 3.66% ± 6.81% |
| Survived first hospitalization to discharge∥ | 88,479 (99.12%) | 5336 (89.49%) | 2316 (80.42%) | 4063 (94.27%) |
| Care order on admission | | | | |
| Full code | 78,877 (88.36%) | 5198 (87.17%) | 2598 (90.21%) | 4097 (95.06%) |
| Partial code | 664 (0.74%) | 156 (2.62%) | 50 (1.74%) | 27 (0.63%) |
| Comfort care | 21 (0.02%) | 2 (0.03%) | 0 (0%) | 0 (0%) |
| DNR | 8227 (9.22%) | 539 (9.04%) | 219 (7.60%) | 161 (3.74%) |
| Comfort care and DNR | 229 (0.26%) | 9 (0.15%) | 2 (0.07%) | 2 (0.05%) |
| No order | 1251 (1.40%) | 59 (0.99%) | 11 (0.38%) | 23 (0.53%) |
| Admission diagnosis (n, %) | | | | |
| Pneumonia | 2385 (2.67%) | 258 (4.33%) | 242 (8.40%) | 68 (1.58%) |
| Sepsis | 5822 (6.52%) | 503 (8.44%) | 279 (9.69%) | 169 (3.92%) |
| GI bleeding | 9938 (11.13%) | 616 (10.33%) | 333 (11.56%) | 290 (6.73%) |
| Cancer | 2845 (3.19%) | 14 (0.23%) | 95 (3.30%) | 492 (11.42%) |
| Total hospital length of stay (days ± SD) | 3.08 ± 3.29 | 5.37 ± 7.50 | 12.16 ± 13.12 | 8.06 ± 9.53 |

NOTE: All overnight admissions to the study hospitals, excluding 66 patients who were removed due to incomplete data. Column categories are mutually exclusive and based on a patient's first hospitalization during the study time period. Abbreviations: COPS, COmorbidity Point Score; DNR, do not resuscitate; ED, emergency department; GI, gastrointestinal; ICU, intensive care unit; LAPS, Laboratory Acute Physiology Score; SD, standard deviation.

* This group consists of all patients who meet our case definition and includes: 1) patients who had an unplanned transfer to the ICU from the transitional care unit (TCU) or ward; and 2) patients who died on the ward without a DNR order in place at the time of death (ie, who would have been transferred to the ICU had they survived).

† This group includes patients admitted directly to the ICU from the operating room, post‐anesthesia recovery, or an unknown unit, as well as patients with a planned transfer to the ICU.

‡ LAPS: point score based on 14 laboratory test results obtained in the 72 hr preceding hospitalization. With respect to a patient's physiologic derangement, the unadjusted relationship of LAPS and inpatient mortality is as follows: a LAPS <7 is associated with a mortality risk of <1%; 7 to 30, with a risk of 1%–5%; 30 to 60, with a risk of 5%–9%; and >60, with a risk of 10% or more. See text and Escobar et al4 for more details.

§ COPS: point score based on a patient's healthcare utilization diagnoses (during the year preceding admission to the hospital); analogous to present-on-admission (POA) coding. Scores can range from 0 to a theoretical maximum of 701, but scores >200 are rare. With respect to a patient's preexisting comorbidity burden, the unadjusted relationship of COPS and inpatient mortality is as follows: a COPS <50 is associated with a mortality risk of <1%; 50 to 100, with a risk of 1%–5%; 100 to 145, with a risk of 5%–10%; and >145, with a risk of 10% or more. See text and Escobar et al4 for more details.

∥ Numbers for patients who survived the last hospitalization to discharge are available upon request.
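The unadjusted mortality bands described in the Table 1 footnote can be expressed as a simple lookup. This sketch is illustrative only; in particular, the handling of exact band boundaries (eg, a LAPS of exactly 30) is an assumption, since the footnote does not specify it.

```python
# Illustrative lookup of the unadjusted inpatient mortality bands from the
# Table 1 footnote. Boundary handling at band edges is an assumption.

def laps_mortality_band(laps):
    """Map a LAPS (0-256) to its unadjusted mortality-risk band."""
    if laps < 7:
        return "<1%"
    if laps <= 30:
        return "1%-5%"
    if laps <= 60:
        return "5%-9%"
    return ">=10%"

def cops_mortality_band(cops):
    """Map a COPS (0-701) to its unadjusted mortality-risk band."""
    if cops < 50:
        return "<1%"
    if cops <= 100:
        return "1%-5%"
    if cops <= 145:
        return "5%-10%"
    return ">=10%"

# Mean LAPS and COPS for event shifts (Table 2) fall in the middle bands:
print(laps_mortality_band(32.72), cops_mortality_band(116.33))  # → 5%-9% 5%-10%
```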

Figure 1 shows how we developed the analysis cohort: we removed patients with a comfort‐care‐only order placed within 4 hours after admission (369 patients/744 hospitalizations) and patients who were never admitted to the ward or TCU (7,220 patients/10,574 hospitalizations). This left a cohort of 94,833 patients who experienced 133,879 hospitalizations spanning a total of 1,079,062 shifts. We then removed shifts in which: 1) a patient was not on the ward at the start of a shift, or was on the ward for <4 hours of a shift; 2) the patient had a comfort‐care order in place at the start of the shift; and 3) the patient died and was ineligible to be a case (the patient had a DNR order in place or died in the ICU). The final cohort eligible for sampling consisted of 846,907 shifts, involving a total of 92,797 patients and 130,627 hospitalizations. There were a total of 4,036 event shifts: 3,224 in which a patient was transferred from the ward to the ICU, 717 from the TCU to the ICU, and 95 in which a patient died on the ward or TCU without a DNR order in place. We then randomly selected 39,782 comparison shifts. Thus, our final cohort for analysis included 4,036 event shifts (1,979 derivation/2,057 validation) and 39,782 comparison shifts (19,509 derivation/20,273 validation). As a secondary validation, we also applied model coefficients to the 429 event shifts excluded due to the <4‐hour length‐of‐stay requirement.

Figure 1
Development of sampling cohort. *There are 429 event shifts excluded; see text for details. Abbreviations: DNR, do not resuscitate; ICU, intensive care unit; TCU, transitional care unit.

Table 2 compares event shifts with comparison shifts. In the 24 hours preceding ICU transfer, patients who were subsequently transferred had statistically significant, but not necessarily clinically significant, differences across these variables. However, missing laboratory data were more common than missing vital signs, ranging from 18% to 31% of all shifts (we did not incorporate laboratory tests for which 35% or more of the shifts had missing data).

Event and Comparison Shifts

| Predictor | Event Shifts | Comparison Shifts | P |
| --- | --- | --- | --- |
| Number | 4036 | 39,782 | |
| Age (mean ± SD) | 67.19 ± 15.25 | 65.41 ± 17.40 | <0.001 |
| Male (n, %) | 2007 (49.73%) | 17,709 (44.52%) | <0.001 |
| Day shift | 1364 (33.80%) | 17,714 (44.53%) | <0.001 |
| LAPS* (mean ± SD) | 27.89 ± 22.10 | 20.49 ± 20.16 | <0.001 |
| COPS† (mean ± SD) | 116.33 ± 72.31 | 100.81 ± 68.44 | <0.001 |
| Full code (n, %)‡ | 3496 (86.2%) | 32,156 (80.8%) | <0.001 |
| ICU shift during hospitalization§ | 3964 (98.22%) | 7197 (18.09%) | <0.001 |
| Unplanned transfer to ICU during hospitalization∥ | 353 (8.8%) | 1466 (3.7%) | <0.001 |
| Temperature (mean ± SD) | 98.15 (1.13) | 98.10 (0.85) | 0.009 |
| Heart rate (mean ± SD) | 90.30 (20.48) | 79.86 (5.27) | <0.001 |
| Respiratory rate (mean ± SD) | 20.36 (3.70) | 18.87 (1.79) | <0.001 |
| Systolic blood pressure (mean ± SD) | 123.65 (23.26) | 126.21 (19.88) | <0.001 |
| Diastolic blood pressure (mean ± SD) | 68.38 (14.49) | 69.46 (11.95) | <0.001 |
| Oxygen saturation (mean ± SD) | 95.72% (3.00) | 96.47% (2.26) | <0.001 |
| MEWS(re)¶ (mean ± SD) | 3.64 (2.02) | 2.34 (1.61) | <0.001 |
| % <5 | 74.86% | 92.79% | |
| % ≥5 | 25.14% | 7.21% | <0.001 |
| Proxy for measured lactate# (mean ± SD) | 36.85 (28.24) | 28.73 (16.74) | <0.001 |
| % Missing in 24 hr before start of shift** | 17.91% | 28.78% | <0.001 |
| Blood urea nitrogen (mean ± SD) | 32.03 (25.39) | 22.72 (18.9) | <0.001 |
| % Missing in 24 hr before start of shift | 19.67% | 20.90% | <0.001 |
| White blood cell count × 1000 (mean ± SD) | 12.33 (11.42) | 9.83 (6.58) | <0.001 |
| % Missing in 24 hr before start of shift | 21.43% | 30.98% | <0.001 |
| Hematocrit (mean ± SD) | 33.08 (6.28) | 33.07 (5.25) | 0.978 |
| % Missing in 24 hr before start of shift | 19.87% | 29.55% | <0.001 |

NOTE: Code status, vital sign, and laboratory value measures closest to the start of the shift (7 AM or 7 PM) are used. Abbreviations: COPS, COmorbidity Point Score; ICU, intensive care unit; LAPS, Laboratory Acute Physiology Score; MEWS(re), Modified Early Warning Score (retrospective electronic); SD, standard deviation.

* LAPS; see Table 1, text, and Escobar et al4 for more details.

† COPS; see Table 1, text, and Escobar et al4 for more details.

‡ Refers to patients who had an active full code order at the start of the sampling time frame.

§ See text for explanation of the sampling time frame and of how both cases and controls could have been in the ICU.

∥ See text for explanation of how both cases and controls could have experienced an unplanned transfer to the ICU.

¶ MEWS(re); see text and Subbe et al10 for a description of this score.

# (Anion gap/serum bicarbonate) × 100.

** Rates of missing data for vital signs are not shown because <3% of the shifts were missing these data.
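For readers unfamiliar with the comparator score, a MEWS-style calculation can be sketched as follows. The cutoffs below follow the commonly published MEWS bands (Subbe et al10), but they are an approximation: the exact MEWS(re) implementation used in this study may differ in details, so treat this as illustrative rather than definitive.

```python
# Illustrative MEWS-style score from charted values, in the spirit of the
# retrospective electronic MEWS(re). Cutoffs follow the commonly published
# MEWS bands (Subbe et al.) and are approximate, not the study's exact code.

def mews(sbp, hr, rr, temp_c, avpu):
    score = 0
    # Systolic blood pressure (mm Hg)
    if sbp <= 70:
        score += 3
    elif sbp <= 80:
        score += 2
    elif sbp <= 100:
        score += 1
    elif sbp >= 200:
        score += 2
    # Heart rate (beats/min)
    if hr < 40:
        score += 2
    elif hr <= 50:
        score += 1
    elif hr <= 100:
        score += 0
    elif hr <= 110:
        score += 1
    elif hr <= 129:
        score += 2
    else:
        score += 3
    # Respiratory rate (breaths/min)
    if rr < 9:
        score += 2
    elif rr <= 14:
        score += 0
    elif rr <= 20:
        score += 1
    elif rr <= 29:
        score += 2
    else:
        score += 3
    # Temperature (deg C)
    if temp_c < 35.0 or temp_c >= 38.5:
        score += 2
    # Neurological status (AVPU scale)
    score += {"alert": 0, "voice": 1, "pain": 2, "unresponsive": 3}[avpu]
    return score

print(mews(sbp=124, hr=90, rr=20, temp_c=36.8, avpu="alert"))  # → 1
```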

After conducting multiple analyses using the derivation dataset, we developed 24 submodels, a compromise between our finding that primary‐condition‐specific models showed better performance and the fact that we had very few events among patients with certain primary conditions (eg, pericarditis/valvular heart disease), which forced us to create composite categories (eg, a category pooling patients with pericarditis, atherosclerosis, and peripheral vascular disease). Table 3 lists variables included in our final submodels.

Variables Included in Final Electronic Medical Record‐Based Models

| Variable | Description |
| --- | --- |
| Directive status | Full code or not full code |
| LAPS* | Admission physiologic severity of illness score (continuous variable ranging from 0 to 256). Standardized and included as LAPS and LAPS squared. |
| COPS† | Comorbidity burden score (continuous variable ranging from 0 to 701). Standardized and included as COPS and COPS squared. |
| COPS status | Indicator for absent comorbidity data |
| LOS at T0 | Length of stay in the hospital (total time in hours) at T0; standardized. |
| T0 time of day | 7 AM or 7 PM |
| Temperature | Worst (highest) temperature in the 24 hr preceding T0; variability in temperature in the 24 hr preceding T0. |
| Heart rate | Most recent heart rate in the 24 hr preceding T0; variability in heart rate in the 24 hr preceding T0. |
| Respiratory rate | Most recent respiratory rate in the 24 hr preceding T0; worst (highest) respiratory rate in the 24 hr preceding T0; variability in respiratory rate in the 24 hr preceding T0. |
| Diastolic blood pressure | Most recent diastolic blood pressure in the 24 hr preceding T0, transformed by subtracting 70 from the actual value and squaring the result; any value above 2000 is then set to 2000, yielding a continuous variable ranging from 0 to 2000. |
| Systolic pressure | Variability in systolic blood pressure in the 24 hr preceding T0. |
| Pulse oximetry | Worst (lowest) oxygen saturation in the 24 hr preceding T0; variability in oxygen saturation in the 24 hr preceding T0. |
| Neurological status | Most recent neurological status check in the 24 hr preceding T0. |
| Laboratory tests‡ | Blood urea nitrogen; proxy for measured lactate = (anion gap/serum bicarbonate) × 100; hematocrit; total white blood cell count |

Abbreviations: COPS, COmorbidity Point Score; LAPS, Laboratory Acute Physiology Score; LOS, length of stay.

* LAPS based on 14 laboratory test results obtained in the 72 hr preceding hospitalization. See text and Escobar et al4 for details.

† COPS based on a patient's diagnoses in the 12 mo preceding hospitalization. See text and Escobar et al4 for details. An indicator variable (for patients in whom a COPS could not be obtained) was also included in the models.

‡ See text and Supporting Information, Appendices, in the online version of this article for details on the imputation strategy employed when values were missing. See Wrenn14 and Rocktaeschel et al16 for justification for use of the combination of anion gap and serum bicarbonate.
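Two of the derived predictors in Table 3 involve explicit transformations. A short sketch of both follows; it is illustrative only (not the study's code), and the anion gap formula used here (sodium minus chloride plus bicarbonate) is a common convention assumed for the example.

```python
# Illustrative sketch of two derived predictors from Table 3: the squared,
# capped diastolic blood pressure term and the proxy for measured lactate.

def diastolic_transform(dbp):
    """(DBP - 70)^2, capped at 2000: risk rises as DBP departs from ~70 mm Hg
    in either direction, yielding a continuous variable from 0 to 2000."""
    return min((dbp - 70) ** 2, 2000)

def lactate_proxy(sodium, chloride, bicarbonate):
    """(Anion gap / serum bicarbonate) x 100, with a conventional anion gap
    of Na - (Cl + HCO3); the gap formula is an assumption for illustration."""
    anion_gap = sodium - (chloride + bicarbonate)
    return anion_gap / bicarbonate * 100

print(diastolic_transform(68), diastolic_transform(130))  # → 4 2000
print(lactate_proxy(140, 104, 24))  # → 50.0
```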

Table 4 summarizes key results in the validation dataset. Across all diagnoses, the MEWS(re) had a c‐statistic of 0.709 (95% confidence interval, 0.697–0.721) in the derivation dataset and 0.698 (0.686–0.710) in the validation dataset. In the validation dataset, the MEWS(re) performed best among patients with a set of gastrointestinal diagnoses (c = 0.792; 0.726–0.857) and worst among patients with congestive heart failure (0.541; 0.500–0.620). In contrast, across all primary conditions, the EMR‐based models had a c‐statistic of 0.845 (0.826–0.863) in the derivation dataset and 0.775 (0.753–0.797) in the validation dataset. In the validation dataset, the EMR‐based models also performed best among patients with a set of gastrointestinal diagnoses (0.841; 0.783–0.897) and worst among patients with congestive heart failure (0.683; 0.610–0.755). A negative correlation (R = −0.63) was evident between the number of event shifts in a submodel and the drop in the c‐statistic seen in the validation dataset.

Best and Worst Performing Submodels in the Validation Dataset

| Diagnoses Group* | Event Shifts | Comparison Shifts | MEWS(re) c‐Statistic† | EMR Model c‐Statistic‡ |
| --- | --- | --- | --- | --- |
| Acute myocardial infarction | 36 | 169 | 0.541 | 0.572 |
| Diseases of pulmonary circulation and cardiac dysrhythmias | 40 | 329 | 0.565 | 0.645 |
| Seizure disorders | 45 | 497 | 0.594 | 0.647 |
| Rule out myocardial infarction | 77 | 727 | 0.602 | 0.648 |
| Pneumonia | 163 | 847 | 0.741 | 0.801 |
| GI diagnoses, set A§ | 58 | 942 | 0.755 | 0.803 |
| GI diagnoses, set B∥ | 256 | 2,610 | 0.772 | 0.806 |
| GI diagnoses, set C¶ | 46 | 520 | 0.792 | 0.841 |
| All diagnoses | 2,032 | 20,106 | 0.698 | 0.775 |

NOTE: Shift counts are from the validation dataset. Abbreviations: EMR, electronic medical record; GI, gastrointestinal; MEWS(re), Modified Early Warning Score (retrospective electronic).

* Specific International Classification of Diseases (ICD) codes used are detailed in the Supporting Information, Appendices, in the online version of this article.

† MEWS(re); see text, Supporting Information, Appendices, in the online version of this article, and Subbe et al10 for more details.

‡ Model based on comprehensive data from the EMR; see text, Table 3, and Supporting Information, Appendices, in the online version of this article for more details.

§ This group of diagnoses includes appendicitis, cholecystitis, cholangitis, hernias, and pancreatic disorders.

∥ This group of diagnoses includes gastrointestinal hemorrhage, miscellaneous disorders affecting the stomach and duodenum, diverticulitis, abdominal symptoms, nausea with vomiting, and blood in stool.

¶ This group of diagnoses includes inflammatory bowel disease, malabsorption syndromes, gastrointestinal obstruction, and enteritides.

We also compared model performance when our datasets were restricted to 1 randomly selected observation per patient; in these analyses, the total number of event shifts was 3,647 and the number of comparison shifts was 29,052. The c‐statistic for the MEWS(re) in the derivation dataset was 0.709 (0.694–0.725); in the validation dataset, it was 0.698 (0.692–0.714). The corresponding values for the EMR‐based models were 0.856 (0.835–0.877) and 0.780 (0.756–0.804). We also tested models in which, instead of dropping shifts with missing vital signs, we imputed missing vital signs to their normal value. The c‐statistic for the EMR‐based model with imputed vital sign values was 0.842 (0.823–0.861) in the derivation dataset and 0.773 (0.752–0.794) in the validation dataset. Lastly, we applied model coefficients to a dataset consisting of 4,290 randomly selected comparison shifts plus the 429 shifts excluded because of the 4‐hour length‐of‐stay criterion. The c‐statistic for this analysis was 0.756 (0.703–0.809).

As a general rule, the EMR‐based models were more than twice as efficient as the MEWS(re). For example, using a MEWS(re) threshold of 6 as the trigger for an alarm would identify 15% of all transfers to the ICU, with 34.4 false alarms for each transfer; in contrast, using the EMR‐based approach to identify 15% of all transfers, there were 14.5 false alarms for each transfer. Applied to the entire KPMCP Northern California region, the MEWS(re) would require evaluating a total of 52 patients per day, versus only 22 per day using the EMR‐based approach. If one employed a MEWS(re) threshold of 4, this would identify 44% of all transfers, with 69 false alarms for each transfer; using the EMR, the ratio would be 34 to 1. Across the entire KPMCP, a total of 276 patients per day (about 19.5 per hospital per day) would need to be evaluated using the MEWS(re), but only 136 (about 9.5 per hospital per day) using the EMR.
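The workload comparison above reduces to simple arithmetic: the total number of patients evaluated equals the true transfers captured plus the false alarms generated. A sketch using the figures quoted in the text:

```python
# Illustrative workload arithmetic for alarm thresholds, reusing the
# false-alarms-per-transfer ratios quoted in the text.

def patients_evaluated(true_transfers_flagged, false_alarms_per_transfer):
    """Total alerts requiring evaluation = true positives + false positives."""
    return true_transfers_flagged * (1 + false_alarms_per_transfer)

# At matched sensitivity (15% of transfers captured), the text reports
# 34.4 false alarms per transfer for MEWS(re) vs 14.5 for the EMR-based models.
mews_workload = patients_evaluated(1, 34.4)  # evaluations per captured transfer
emr_workload = patients_evaluated(1, 14.5)
print(mews_workload / emr_workload)  # ratio > 2, i.e., "more than twice as efficient"
```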

DISCUSSION

Using data from a large hospital cohort, we have developed a predictive model suitable for use in non‐ICU populations cared for in integrated healthcare settings with fully automated EMRs. The overall performance of our model, which incorporates acute physiology, diagnosis, and longitudinal data, is superior to that of a model that can be assigned manually. This is not surprising, given that scoring systems such as the MEWS make an explicit tradeoff: they lose information found in multiple variables in exchange for ease of manual assignment. Currently, the model described in this report is being implemented in a simulated environment, a final safety test prior to piloting real‐time provision of probability estimates to clinicians and nurses. Though not yet ready for real‐time use, our model can reasonably be tested using the KPHC shadow server, since evaluation in a simulated environment is a critical step prior to deployment for clinical use. We also anticipate further refinement and revalidation as more inpatient data become available in the KPMCP and elsewhere.

A number of limitations to our approach must be emphasized. In developing our models, we determined that, while modeling by clinical condition was important, the study outcome was rare for some primary conditions. In these diagnostic groups, which accounted for 12.5% of the event shifts and 10.6% of the comparison shifts, the c‐statistic in the validation dataset was <0.70. Since all 22 KPMCP hospitals are now online and will generate an additional 150,000 adult hospitalizations per year, we expect to be able to correct this problem prior to deployment of these models for clinical use. Having additional data will permit us to improve model discrimination and thus decrease the evaluation‐to‐detection ratio. In future iterations of these models, more experimentation with the grouping of International Classification of Diseases (ICD) codes may be required. The problem of grouping ICD codes is not an easy one to resolve: the diagnoses in a grouping must share common pathophysiology, while the grouping must retain a sufficient number of adverse events for stable statistical models.

Ideally, it would have been desirable to employ a more objective measure of deterioration, since the decision to transfer a patient to the ICU is discretionary. However, we have found that key data points needed to define such a measure (eg, vital signs) are not consistently charted when a patient deteriorates; this is not surprising outside the research setting, given that nurses and physicians involved in a transfer may be focused on caring for the patient rather than on immediate charting. Given the complexities of end‐of‐life‐care decision‐making, we could not employ death as the outcome of interest. A related issue is that our model does not differentiate between reasons for needing transfer to the ICU, an issue recently discussed by Bapoje et al.18

Our model does not address an important issue raised by Bapoje et al18 and Litvak, Pronovost, and others,19, 20 namely, whether a patient should have been admitted to a non‐ICU setting in the first place. Our team is currently developing a model for doing exactly this (providing decision support for triage in the emergency department), but discussion of this methodology is outside the scope of this article.

Because of resource and data limitations, our model also does not include newborns, children, women admitted for childbirth, or patients transferred from non‐KPMCP hospitals. However, the approach described here could serve as a starting point for developing models for these other populations.

The generalizability of our model must also be considered. The Northern California KPMCP is unusual in having large electronic databases that include physiologic as well as longitudinal patient data. Many hospitals cannot take advantage of all the methods described here. However, the methods we employed could be modified for use by hospital systems in countries such as Great Britain and Canada, and entities such as the Veterans Administration Hospital System in the United States. The KPMCP population, an insured population with few barriers to access, is healthier than the general population, and some population subsets are underrepresented in our cohort. Practice patterns may also vary. Nonetheless, the model described here could serve as a good starting point for future collaborative studies, and it would be possible to develop models suitable for use by stand‐alone hospitals (eg, recalibrating so that one used a Charlson comorbidity21 score based on present on‐admission codes rather than the COPS).

The need for early detection of patient deterioration has played a major role in the development of rapid response teams, as well as scores such as the MEWS. In particular, entities such as the Institute for Healthcare Improvement have advocated the use of early warning systems.22 However, having a statistically robust model to support an early warning system is only part of the solution, and a number of new challenges must then be addressed. The first is actual electronic deployment. Existing inpatient EMRs were not designed with complex calculations in mind, and we anticipate that some degradation in performance will occur when we test our models using real‐time data capture. As Bapoje et al point out, simply having an alert may be insufficient, since not all transfers are preventable.18 Early warning systems also raise ethical issues (for example, what should be done if an alert leads a clinician to confront the fact that an end‐of‐life‐care discussion needs to occur?). From a research perspective, if one were to formally test the benefits of such models, it would be critical to define outcome measures other than death (which is strongly affected by end‐of‐life‐care decisions) or ICU transfer (which is often desirable).

In conclusion, we have developed an approach for predicting impending physiologic deterioration of hospitalized adults outside the ICU. Our approach illustrates how organizations can take maximal advantage of EMRs in a manner that exceeds meaningful use specifications.23, 24 Our study highlights the possibility of using fully automated EMR data for building and applying sophisticated statistical models in settings other than the highly monitored ICU without the need for additional equipment. It also expands the universe of severity scoring to one in which probability estimates are provided in real time and throughout an entire hospitalization. Model performance will undoubtedly improve over time, as more patient data become available. Although our approach has important limitations, it is suitable for testing using real‐time data in a simulated environment. Such testing would permit identification of unanticipated problems and quantification of the degradation of model performance due to real life factors, such as delays in vital signs charting or EMR system brownouts. It could also serve as the springboard for future collaborative studies, with a broader population base, in which the EMR becomes a tool for care, not just documentation.

Acknowledgements

We thank Ms Marla Gardner and Mr John Greene for their work in the development phase of this project. We are grateful to Brian Hoberman, Andrew Hwang, and Marc Flagg from the RIMS group; to Colin Stobbs, Sriram Thiruvenkatachari, and Sundeep Sood from KP IT, Inc; and to Dennis Andaya, Linda Gliner, and Cyndi Vasallo for their assistance with data‐quality audits. We are also grateful to Dr Philip Madvig, Dr Paul Feigenbaum, Dr Alan Whippy, Mr Gregory Adams, Ms Barbara Crawford, and Dr Marybeth Sharpe for their administrative support and encouragement; and to Dr Alan S. Go, Acting Director of the Kaiser Permanente Division of Research, for reviewing the manuscript.

References
  1. Barnett MJ, Kaboli PJ, Sirio CA, Rosenthal GE. Day of the week of intensive care admission and patient outcomes: a multisite regional evaluation. Med Care. 2002;40(6):530–539.
  2. Ensminger SA, Morales IJ, Peters SG, et al. The hospital mortality of patients admitted to the ICU on weekends. Chest. 2004;126(4):1292–1298.
  3. Luyt CE, Combes A, Aegerter P, et al. Mortality among patients admitted to intensive care units during weekday day shifts compared with "off" hours. Crit Care Med. 2007;35(1):3–11.
  4. Escobar G, Greene J, Scheirer P, Gardner M, Draper D, Kipnis P. Risk adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232–239.
  5. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63(7):798–803.
  6. Escobar GJ, Greene JD, Gardner MN, Marelich GP, Quick B, Kipnis P. Intra‐hospital transfers to a higher level of care: contribution to total hospital and intensive care unit (ICU) mortality and length of stay (LOS). J Hosp Med. 2011;6(2):74–80.
  7. Chambrin MC, Ravaux P, Calvelo‐Aros D, Jaborska A, Chopin C, Boniface B. Multicentric study of monitoring alarms in the adult intensive care unit (ICU): a descriptive analysis. Intensive Care Med. 1999;25(12):1360–1366.
  8. Saria S, Rajani AK, Gould J, Koller D, Penn AA. Integration of early physiological responses predicts later illness severity in preterm infants. Sci Transl Med. 2010;2(48):48ra65.
  9. Subbe CP, Gao H, Harrison DA. Reproducibility of physiological track‐and‐trigger warning systems for identifying at‐risk patients on the ward. Intensive Care Med. 2007;33(4):619–624.
  10. Subbe CP, Kruger M, Rutherford P, Gemmel L. Validation of a Modified Early Warning Score in medical admissions. Q J Med. 2001;94:521–526.
  11. Subbe CP, Davies RG, Williams E, Rutherford P, Gemmell L. Effect of introducing the Modified Early Warning score on clinical outcomes, cardio‐pulmonary arrests and intensive care utilisation in acute medical admissions. Anaesthesia. 2003;58(8):797–802.
  12. MERIT Study Investigators. Introduction of the medical emergency team (MET) system: a cluster‐randomized controlled trial. Lancet. 2005;365(9477):2091–2097.
  13. Keller AS, Kirkland LL, Rajasekaran SY, Cha S, Rady MY, Huddleston JM. Unplanned transfers to the intensive care unit: the role of the shock index. J Hosp Med. 2010;5(8):460–465.
  14. Wrenn K. The delta (Δ) gap: an approach to mixed acid‐base disorders. Ann Emerg Med. 1990;19(11):1310–1313.
  15. Williamson JC. Acid‐base disorders: classification and management strategies. Am Fam Physician. 1995;52(2):584–590.
  16. Rocktaeschel J, Morimatsu H, Uchino S, Bellomo R. Unmeasured anions in critically ill patients: can they predict mortality? Crit Care Med. 2003;31(8):2131–2136.
  17. Cook NR. Use and misuse of the receiver operating characteristic curve in risk prediction. Circulation. 2007;115(7):928–935.
  18. Bapoje SR, Gaudiani JL, Narayanan V, Albert RK. Unplanned transfers to a medical intensive care unit: causes and relationship to preventable errors in care. J Hosp Med. 2011;6(2):68–72.
  19. Litvak E, Pronovost PJ. Rethinking rapid response teams. JAMA. 2010;304(12):1375–1376.
  20. Winters BD, Pham J, Pronovost PJ. Rapid response teams—walk, don't run. JAMA. 2006;296(13):1645–1647.
  21. Charlson ME, Pompei P, Ales KL, MacKenzie CR. A new method of classifying prognostic comorbidity in longitudinal studies: development and validation. J Chronic Dis. 1987;40:373–383.
  22. Institute for Healthcare Improvement. Early Warning Systems: The Next Level of Rapid Response. 2011. http://www.ihi.org/IHI/Programs/AudioAndWebPrograms/ExpeditionEarlyWarningSystemsTheNextLevelofRapidResponse.htm?player=wmp. Accessed April 6, 2011.
  23. Bowes WA. Assessing readiness for meeting meaningful use: identifying electronic health record functionality and measuring levels of adoption. AMIA Annu Symp Proc. 2010;2010:66–70.
  24. Medicare and Medicaid Programs; Electronic Health Record Incentive Program. Final Rule. Fed Regist. 2010;75(144):44313–44588.
Issue
Journal of Hospital Medicine - 7(5)
Page Number
388-395
Display Headline
Early detection of impending physiologic deterioration among patients who are not in intensive care: Development of predictive models using data from an automated electronic medical record
Copyright © 2012 Society of Hospital Medicine

Correspondence Location
Division of Research, Kaiser Permanente Medical Care Program, 2000 Broadway Ave, Oakland, CA 94612