The risk‐outcome‐experience triad: Mortality risk and the Hospital Consumer Assessment of Healthcare Providers and Systems survey

Mark E. Cowen, MD, SM
Quality Institute, St. Joseph Mercy Hospital
mark.cowen@stjoeshealth.org

Few today deny the importance of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey.[1, 2] The Centers for Medicare and Medicaid Services' (CMS) Value Based Purchasing incentive, sympathy for the ill, and relationships between the patient experience and quality of care provide sufficient justification.[3, 4] How to improve the experience scores is not well understood. The national scores have improved only modestly over the past 3 years.[5, 6]

Clinicians may not typically compartmentalize what they do to improve outcomes versus the patient experience. A possible source of new improvement strategies is to understand the types of patients in whom both adverse outcomes and suboptimal experiences are likely to occur, and then to redesign the multidisciplinary care processes to address both concurrently.[7] Previous studies support a relationship between a higher mortality risk on admission and subsequent worse outcomes, as well as a relationship between worse outcomes and lower HCAHPS scores.[8, 9, 10, 11, 12, 13] We hypothesized that the mortality risk on admission, patient experience, and outcomes might share a triad relationship (Figure 1). In this article we explore the third edge of this triangle: the association between the mortality risk on admission and the subsequent patient experience.

Figure 1
Conceptual relationships between patients' severity of illness, experience of care (Hospital Consumer Assessment of Healthcare Providers and Systems Survey), and clinical outcomes. The absence of directional arrows between apices signifies associations without implying causality. We propose the admission severity of illness triggers stratum‐based interventions designed to improve both the clinical outcomes and the experience of care.

METHODS

We studied HCAHPS results from 5 midwestern US hospitals (113, 136, 304, 443, and 537 licensed beds) affiliated with the same multistate healthcare system. HCAHPS telephone surveys were administered via a vendor to a random sample of inpatients 18 years of age or older discharged from January 1, 2012 through June 30, 2014. Per CMS guidelines, surveyed patients must have been discharged alive after a hospital stay of at least 1 night.[14] Patients ineligible to be surveyed included those discharged to skilled nursing facilities or hospice care.[14] Because not all study hospitals provided obstetrical services, we restricted the analyses to medical and surgical respondents. With the permission of the local institutional review board, subjects' survey responses were linked confidentially to their clinical data.

We focused on the 8 dimensions of the care experience used in the CMS Value Based Purchasing program: communication with doctors, communication with nurses, responsiveness of hospital staff, pain management, communication about medicines, discharge information, hospital environment, and an overall rating of the hospital.[2] Following the scoring convention for publicly reported results, we dichotomized the 4‐level Likert scales into the most favorable response possible (always) versus all other responses.[15] Similarly, we dichotomized the 0‐to‐10 hospital rating scale at 9 or above for the most favorable response.

Our unit of analysis was an individual hospitalization. Our primary outcome of interest was whether or not the respondent provided the most favorable response for all questions answered within a given domain. For example, for the physician communication domain, the patient must have answered always to each of the 3 questions answered within the domain. This approach is appropriate for learning which patient‐level factors influence the survey responses, but differs from that used for the publicly reported domain scores, for which the relative performance of hospitals is the focus.[16] For the latter, the hospital is the unit of analysis, and the domain score is essentially the average of the percentages of top box scores for the questions within a domain. For example, if 90% of respondents from a hospital provided a top box response for courtesy, 80% for listening, and 70% for explanation, the hospital's physician communication score would be (90 + 80 + 70)/3 = 80%.[17]
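The two scoring conventions can be made concrete with a short sketch (the function names are ours, chosen for illustration): the first function is the patient‐level outcome used in this study, and the second is the hospital‐level averaging used for public reporting.

```python
def patient_top_box(responses):
    """Study outcome: did the respondent give the most favorable
    answer ("always") to every question answered within a domain?"""
    return all(r == "always" for r in responses)

def domain_score(top_box_pcts):
    """Publicly reported domain score: the mean of the per-question
    top-box percentages for one hospital."""
    return sum(top_box_pcts) / len(top_box_pcts)
```

For the worked example in the text, domain_score([90, 80, 70]) returns 80.0; a patient answering always, always, usually to the 3 physician communication questions would not count as a top box response at the patient level.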

Our primary explanatory variable was a binary high versus low mortality‐risk status of the respondent on admission, based on age, gender, prior hospitalizations, clinical laboratory values, and diagnoses present on admission.[12] The calculated mortality risk was dichotomized before analysis at a predicted probability of dying of 0.07 or higher, corresponding roughly to the top quintile of predicted risk found in prior studies.[12, 13] During the study period, only 2 of the hospitals had the capability of generating mortality scores in real time, so for this study the mortality risk was calculated retrospectively, using information deemed present on admission.[12]
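As a minimal illustration of the stratification rule (the function name is ours), a predicted 30‐day mortality probability of 0.07 or higher places a patient in the high‐risk stratum:

```python
def risk_stratum(p_death, cutoff=0.07):
    """Dichotomize the predicted 30-day mortality probability on
    admission; 0.07 or higher defines the high-risk stratum."""
    return "high" if p_death >= cutoff else "low"
```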

To estimate the sample size, we assumed that the high‐risk stratum contained approximately 13% of respondents and that the true percentage of top box responses from patients in the lower‐risk stratum was approximately 80% for each domain. A meaningful difference in the proportion of most favorable responses was defined as an odds ratio (OR) of 0.75 for high risk versus low risk. A significance level of P < 0.003 was set to control study‐wide type I error due to multiple comparisons. Under these assumptions, we determined that for each dimension, approximately 8583 survey responses would be required for low‐risk patients and approximately 1116 responses for high‐risk patients to achieve 80% power. We were able to accrue the target number of surveys for all but 3 domains (pain management, communication about medicines, and hospital environment) because of data availability and because patients are allowed to skip questions that do not apply.

Univariate relationships were examined with χ², t, and Fisher exact tests where indicated. Generalized linear mixed regression models with a logit link were fit to determine the association between patient mortality risk and the top box experience for each of the HCAHPS domains and for the overall rating. The patient's hospital was modeled as a random intercept to account for the patient‐hospital hierarchy and unmeasured hospital‐specific practices. The multivariable models controlled for gender plus the HCAHPS patient‐mix adjustment variables of age, education, self‐rated health, language spoken at home, service line, and the number of days elapsed between the date of discharge and the date of the survey.[18, 19, 20, 21] In keeping with the industry analyses, a second‐order interaction between surgical service and age was included.[19] We considered the potential collinearity among mortality‐risk status, age, and patient self‐reported health; because the variance inflation factors were small, we drew inference from the full multivariable model.
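The stated design targets can be roughly reproduced with a standard two‐proportion calculation. The sketch below uses the arcsine (Cohen's h) effect size with a normal approximation, which is our assumption rather than the authors' documented method; with these inputs it returns roughly 1140 high‐risk responses, close to the reported target of approximately 1116.

```python
from math import asin, sqrt
from statistics import NormalDist

def high_risk_n(p_low=0.80, odds_ratio=0.75, alpha=0.003,
                power=0.80, ratio=8583 / 1116):
    """Approximate high-risk sample size to detect OR 0.75 between
    two proportions (arcsine effect size, two-sided test).
    `ratio` is the low-risk n divided by the high-risk n."""
    odds = odds_ratio * p_low / (1 - p_low)
    p_high = odds / (1 + odds)                           # 0.75 here
    h = 2 * asin(sqrt(p_low)) - 2 * asin(sqrt(p_high))   # Cohen's h
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return (z_a + z_b) ** 2 / h ** 2 * (1 + 1 / ratio)
```

The small discrepancy from 1116 is expected, since the authors may have used an exact or chi‐square‐based method rather than this approximation.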

We also performed a post hoc sensitivity analysis to determine if our conclusions were biased due to missing patient responses for the risk‐adjustment variables. Accordingly, we imputed the response level most negatively associated with most HCAHPS domains as previously reported and reran the multivariable models.[19] We did not find a meaningful change in our conclusions (see Supporting Figure 1 in the online version of this article).

RESULTS

The hospitals discharged 152,333 patients during the study period, 39,905 of whom (26.2%) had a predicted 30‐day mortality risk of 0.07 or higher (Table 1). Of the 36,280 high‐risk patients discharged alive, 5901 (16.3%) died in the ensuing 30 days, and 7951 (22%) were readmitted.

Table 1. Characteristics and HCAHPS Results
Characteristic Low‐Risk Stratum, No./Discharged (%) or Mean (SD) High‐Risk Stratum, No./Discharged (%) or Mean (SD) P Value*
  • NOTE: Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems Survey; SD, standard deviation. *A χ² test evaluated categorical variables, whereas a t test evaluated continuous variables. Variables evaluated as continuous. Most favorable response. Sixty‐eight records have missing gender information.

Total discharges (row percent) 112,428/152,333 (74) 39,905/152,333 (26) <0.001
Total alive discharges (row percent) 111,600/147,880 (75) 36,280/147,880 (25) <0.001
No. of respondents (row percent) 14,996/17,509 (86) 2,513/17,509 (14)
HCAHPS surveys completed 14,996/111,600 (13) 2,513/36,280 (7) < 0.001
Readmissions within 30 days (total discharges) 12,311/112,428 (11) 7,951/39,905 (20) <0.001
Readmissions within 30 days (alive discharges) 12,311/111,600 (11) 7,951/36,280 (22) <0.001
Readmissions within 30 days (respondents) 1,220/14,996 (8) 424/2,513 (17) <0.001
Mean predicted probability of 30‐day mortality (total discharges) 0.022 (0.018) 0.200 (0.151) <0.001
Mean predicted probability of 30‐day mortality (alive discharges) 0.022 (0.018) 0.187 (0.136) <0.001
Mean predicted probability of 30‐day mortality (respondents) 0.020 (0.017) 0.151 (0.098) <0.001
In‐hospital death (total discharges) 828/112,428 (0.74) 3,625/39,905 (9) <0.001
Mortality within 30 days (total discharges) 2,455/112,428 (2) 9,526/39,905 (24) <0.001
Mortality within 30 days (alive discharges) 1,627/111,600 (1.5) 5,901/36,280 (16) <0.001
Mortality within 30 days (respondents) 9/14,996 (0.06) 16/2,513 (0.64) <0.001
Female (total discharges) 62,681/112,368 (56) 21,058/39,897 (53) <0.001
Female (alive discharges) 62,216/111,540 (56) 19,164/36,272 (53) <0.001
Female (respondents) 8,684/14,996 (58) 1,318/2,513 (52) <0.001
Age (total discharges) 61.3 (16.8) 78.3 (12.5) <0.001
Age (alive discharges) 61.2 (16.8) 78.4 (12.5) <0.001
Age (respondents) 63.1 (15.2) 76.6 (11.5) <0.001
Highest education attained
8th grade or less 297/14,996 (2) 98/2,513 (4)
Some high school 1,190/14,996 (8) 267/2,513 (11)
High school grad 4,648/14,996 (31) 930/2,513 (37) <0.001
Some college 6,338/14,996 (42) 768/2,513 (31)
4‐year college grad 1,502/14,996 (10) 183/2,513 (7)
Missing response 1,021/14,996 (7) 267/2,513 (11)
Language spoken at home
English 13,763/14,996 (92) 2,208/2,513 (88)
Spanish 56/14,996 (0.37) 8/2,513 (0.32) 0.47
Chinese 153/14,996 (1) 31/2,513 (1)
Missing response 1,024/14,996 (7) 266/2,513 (11)
Self‐rated health
Excellent 1,399/14,996 (9) 114/2,513 (5)
Very good 3,916/14,996 (26) 405/2,513 (16)
Good 4,861/14,996 (32) 713/2,513 (28)
Fair 2,900/14,996 (19) 652/2,513 (26) <0.001
Poor 1,065/14,996 (7) 396/2,513 (16)
Missing response 855/14,996 (6) 233/2,513 (9)
Length of hospitalization, d (respondents) 3.5 (2.8) 4.6 (3.6) <0.001
Consulting specialties (respondents) 1.7 (1.0) 2.2 (1.3) <0.001
Service line
Surgical 6,380/14,996 (43) 346/2,513 (14) <0.001
Medical 8,616/14,996 (57) 2,167/2,513 (86)
HCAHPS
Domain 1: Communication With Doctors 9,564/14,731 (65) 1,339/2,462 (54) <0.001
Domain 2: Communication With Nurses 10,097/14,991 (67) 1,531/2,511 (61) <0.001
Domain 3: Responsiveness of Hospital Staff 7,813/12,964 (60) 1,158/2,277 (51) <0.001
Domain 4: Pain Management 6,565/10,424 (63) 786/1,328 (59) 0.007
Domain 5: Communication About Medicines 3,769/8,088 (47) 456/1,143 (40) <0.001
Domain 6: Discharge Information 11,331/14,033 (81) 1,767/2,230 (79) 0.09
Domain 7: Hospital Environment 6,981/14,687 (48) 1,093/2,451 (45) 0.007
Overall rating 10,708/14,996 (71) 1,695/2,513 (67) <0.001

The high‐risk subset was underrepresented among survey completers: 7% (2,513/36,280) of high‐risk patients completed surveys compared with 13% (14,996/111,600) of low‐risk patients (P < 0.0001). Moreover, compared with high‐risk patients who were alive at discharge but did not complete surveys, high‐risk survey respondents were less likely to die within 30 days (16/2,513 = 0.64% vs 5,885/33,767 = 17.4%, P < 0.0001) and less likely to be readmitted (424/2,513 = 16.9% vs 7,527/33,767 = 22.3%, P < 0.0001).
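The response‐rate gap can be checked with a pooled two‐proportion z‐test, the normal‐approximation counterpart of the χ² comparison reported here (the function name is ours):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test; returns (z, two-sided P)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                  # pooled proportion
    z = (p1 - p2) / sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return z, 2 * NormalDist().cdf(-abs(z))
```

With the response counts above, two_proportion_z(2513, 36280, 14996, 111600) gives a z statistic near -33 and a P value far below 0.0001, consistent with the reported comparison.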

On average, high‐risk respondents (compared with low risk) were slightly less likely to be female (52.4% vs 57.9%), less educated (30.6% with some college vs 42.3%), less likely to have been on a surgical service (13.8% vs 42.5%), and less likely to report good or better health (49.0% vs 68.0%, all P < 0.0001). High‐risk respondents were also older (76.6 vs 63.1 years), stayed in the hospital longer (4.6 vs 3.5 days), and received care from more specialties (2.2 vs 1.7 specialties) (all P < 0.0001). High‐risk respondents experienced more 30‐day readmissions (16.9% vs 8.1%) and deaths within 30 days (0.6% vs 0.1%, all P < 0.0001) than their low‐risk counterparts.

High‐risk respondents were less likely to provide the most favorable response (unadjusted) for all HCAHPS domains compared to low‐risk respondents, although the difference was not significant for discharge information (Table 1, Figure 2A). The gradient between high‐risk and low‐risk patients was seen for all domains within each hospital except for pain management, hospital environment, and overall rating (Figure 3).

Figure 2
Odds ratios for a high‐risk patient reporting a top box experience (relative to a low‐risk patient) as a single explanatory variable (A) and when controlling for hospital and Hospital Consumer Assessment of Healthcare Providers and Systems Survey risk‐adjustment factors (B).
Figure 3
Unadjusted differences in the percentage of top box responses between low‐risk patients (green column) and high‐risk (red column) for each study hospital for domains 1 to 4 (A) and domains 5 to 7 and overall (B). Each green‐red dyad represents the responses within a study hospital. The general pattern is lower scores for high‐risk (red) patients across domains per hospital.

The multivariable regression models examined whether the mortality risk on admission simply represented older medical patients and/or those who considered themselves unhealthy (Figure 2B) (see Supporting Table 1 in the online version of this article). Accounting for hospital, age, gender, language, self‐reported health, educational level, service line, and days elapsed from discharge, respondents in the high‐mortality‐risk stratum were still less likely to report an always experience for doctor communication (OR: 0.85; 95% confidence interval [CI]: 0.77‐0.94) and responsiveness of hospital staff (OR: 0.77; 95% CI: 0.69‐0.85). Higher‐risk patients also tended to have less favorable experiences with nursing communication, although the CI crossed 1 (OR: 0.91; 95% CI: 0.82‐1.01). In contrast, higher‐risk patients were more likely to provide top box responses for having received discharge information (OR: 1.30; 95% CI: 1.14‐1.48). We did not find independent associations between mortality risk and the other domains when the patient risk‐adjustment factors were considered.[18, 19, 20, 21]
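The odds ratios and confidence intervals quoted above are simple transforms of the model's logit‐scale coefficients. A minimal sketch (our function, not the authors' code) of the Wald conversion:

```python
from math import exp, log

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logit-scale coefficient and its standard error to
    an odds ratio with a Wald 95% confidence interval."""
    return exp(beta), exp(beta - z * se), exp(beta + z * se)
```

For doctor communication, for example, a coefficient near log(0.85) with a standard error near 0.051 reproduces the reported OR of 0.85 (95% CI: 0.77‐0.94).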

DISCUSSION

The high‐mortality‐risk stratum on admission contained a subset of patients who provided less favorable responses for almost all incentivized HCAHPS domains when other risk‐adjustment variables were not taken into consideration (Figure 2A). These univariate relationships weakened when we controlled for gender, the standard HCAHPS risk‐adjustment variables, and individual hospital influences (Figure 2B).[18, 19, 20, 21] After multivariable adjustment, survey respondents in the high‐risk category remained less likely to report their physicians always communicated well and to experience hospital staff responding quickly, but were more likely to report receiving discharge information. We did not find an independent association between the underlying mortality risk and the other incentivized HCAHPS domains after risk adjustment.

We are cautious with initial interpretations of our findings in light of the relatively small number of hospitals studied and the substantial survey response bias toward healthier patients. Undoubtedly, the CMS exclusions of patients discharged to hospice or skilled nursing facilities provide a partial explanation for the selection bias, but the experience of those at high risk who did not complete surveys remains a matter of conjecture.[14] Previous evidence suggests sicker patients and those with worse experiences are less likely to respond to the HCAHPS survey.[18, 22] On the other hand, it is possible that high‐risk nonrespondents who died could have received better communication and staff responsiveness.[23, 24] We were unable to find a previous patient‐level study that explicitly tested the association between the admission mortality risk and the subsequent patient experience, yet our findings are consistent with a previous single‐site study of a surgical population showing lower overall ratings from patients with higher Injury Severity Scores.[25]

Our findings provide evidence of complex relationships among admission mortality risk, the 3 domains of the patient experience, and adverse outcomes, at least within the study hospitals (Figure 1). The developing field of palliative care has found very ill patients have special communication needs regarding goals of care, as well as physical symptoms, anxiety, and depression that might prompt more calls for help.[26] If these needs were more important for high‐risk compared to low‐risk patients, and were either not recognized or adequately addressed by the clinical teams at the study hospitals, then the high‐risk patients may have been less likely to perceive their physicians listened and explained things well, or that staff responded promptly to their requests for help.[27] On the other hand, the higher ratings for discharge information suggest the needs of the high‐risk patients were relatively easier to address by current practices at these hospitals. The lack of association between the mortality risk and the other HCAHPS domains may reflect the relatively stronger influence of age, gender, educational level, provider variability, and other unmeasured influences within the study sites, or that the level of patient need was similar among high‐risk and low‐risk patients within these domains.[27]

There are several possible confounders of our observed relationship between mortality risk and HCAHPS scores. The first category of confounders represents patient level variables that might impact the communication scores, some of which are part of the formula of our mortality prediction rule, for example, cognitive impairment and emergent admission.[18, 22, 27] The effect of the mortality risk could also be confounded by unmeasured patient‐level factors such as lower socioeconomic status.[28] A second category of confounders pertains to clinical outcomes and processes of care associated with serious illness irrespective of the risk of dying. More physicians involved in the care of the seriously ill (Table 1) may impact the communication scores, due to the larger opportunity for conflicting or confusing information presented to patients and their families.[29] The longer hospital stays, readmissions, and adverse events of the seriously ill may also underlie the apparent association between mortality risk and HCAHPS scores.[8, 9, 10]

Even if we do not understand precisely if and how the mortality risk might be associated with suboptimal physician communication and staff responsiveness, there may still be some value in considering how these possible relationships could be leveraged to improve patient care. We recall Berwick's insight that every system is perfectly designed to achieve the results it achieves.[7] We have previously argued for the use of mortality‐risk strata to initiate concurrent, multidisciplinary care processes to reduce adverse outcomes.[12, 13] Others have used risk‐based approaches for anticipating clinical deterioration of surgical patients, and determining the intensity of individualized case management services.[30, 31] In this framework, all patients receive a standard set of care processes, but higher‐risk patients receive additional efforts to promote better outcomes. An efficient extension of this approach is to assume patients at risk for adverse outcomes also have additional needs for communication, coordination of specialty care, and timely response to the call button. The admission mortality risk could be used as a determinant for the level of nurse staffing to reduce deaths plus shorten response time to the call button.[32, 33] Hospitalists and specialists could work together on a standard way to conference among themselves for high‐risk patients above that needed for less‐complex cases. Patients in the high‐risk strata could be screened early to see if they might benefit from the involvement of the palliative care team.[26]

Our study has limitations in addition to those already noted. First, our use of the top box as the formulation of the outcome of interest could be challenged. We chose this to be relevant to the Value‐Based Purchasing environment, but other formulations or use of other survey instruments may be needed to tease out the complex relationships we hypothesize. Next, we do not know the extent to which the patients and care processes reflected in our study represent other settings. The literature indicates some hospitals are more effective in providing care for certain subgroups of patients than for others, and that there is substantial regional variation in care intensity that is in turn associated with the patient experience.[29, 34] The mortality‐risk experience relationship for nonstudy hospitals could be weaker or stronger than what we found. Third, many hospitals may not have the capability to generate mortality scores on admission, although more hospitals may be able to do so in the future.[35] Explicit risk strata have the benefit of providing members of the multidisciplinary team with a quick preview of the clinical needs and prognoses of patients in much the way that the term baroque alerts the audience to the genre of music. Still, clinicians in any hospital could attempt to improve outcomes and experience through the use of informal risk assessment during interdisciplinary care rounds or by simply asking the team if they would be surprised if this patient died in the next year.[30, 36] Finally, we do not know if awareness of an experience risk will identify remediable practices that actually improve the experience. Clearly, future studies are needed to answer all of these concerns.

We have provided evidence that a group of patients who were at elevated risk for dying at the time of admission were more likely to have issues with physician communication and staff responsiveness than their lower‐risk counterparts. While we await future studies to confirm these findings, clinical teams can consider whether or not their patients' HCAHPS scores reflect how their system of care addresses the needs of these vulnerable people.

Acknowledgements

The authors thank Steven Lewis for assistance in the interpretation of the HCAHPS scores, Bonita Singal, MD, PhD, for initial statistical consultation, and Frank Smith, MD, for reviewing an earlier version of the manuscript. The authors acknowledge the input of the peer reviewers.

Disclosures: Dr. Cowen and Mr. Kabara had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: all authors. Acquisition, analysis or interpretation of data: all authors. Drafting of the manuscript: Dr. Cowen and Mr. Kabara. Critical revision of the manuscript for important intellectual content: all authors. Statistical analysis: Dr. Cowen and Mr. Kabara. Administrative, technical or material support: Ms. Czerwinski. Study supervision: Dr. Cowen and Ms. Czerwinski. Funding/support: internal. Conflicts of interest disclosures: no potential conflicts reported.


References
  1. Goldstein E, Farquhar M, Crofton C, Darby C, Garfinkel S. Measuring hospital care from the patients' perspective: an overview of the CAHPS hospital survey development process. Health Serv Res. 2005;40(6 pt 2):1977–1995.
  2. Centers for Medicare 79(163):49854–50449.
  3. Isaac T, Zaslavsky AM, Cleary PD, Landon BE. The relationship between patients' perception of care and measures of hospital quality and safety. Health Serv Res. 2010;45(4):1024–1040.
  4. Centers for Medicare 312(7031):619–622.
  5. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41–48.
  6. Iannuzzi JC, Kahn SA, Zhang L, Gestring ML, Noyes K, Monson JRT. Getting satisfaction: drivers of surgical Hospital Consumer Assessment of Healthcare Providers and Systems survey scores. J Surg Res. 2015;197(1):155–161.
  7. Tsai TC, Orav EJ, Jha AK. Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2–8.
  8. Kennedy GD, Tevis SE, Kent KC. Is there a relationship between patient satisfaction and favorable outcomes? Ann Surg. 2014;260(4):592–598; discussion 598–600.
  9. Cowen ME, Strawderman RL, Czerwinski JL, Smith MJ, Halasyamani LK. Mortality predictions on admission as a context for organizing care activities. J Hosp Med. 2013;8(5):229–235.
  10. Cowen ME, Czerwinski JL, Posa PJ, et al. Implementation of a mortality prediction rule for real‐time decision making: feasibility and validity. J Hosp Med. 2014;9(11):720–726.
  11. Centers for Medicare 40(6 pt 2):2078–2095.
  12. Centers for Medicare 44(2 pt 1):501–518.
  13. Patient‐mix coefficients for October 2015 (1Q14 through 4Q14 discharges) publicly reported HCAHPS results. Available at: http://www.hcahpsonline.org/Files/October_2015_PMA_Web_Document_a.pdf. Published July 2, 2015. Accessed August 4, 2015.
  14. O'Malley AJ, Zaslavsky AM, Elliott MN, Zaborski L, Cleary PD. Case‐mix adjustment of the CAHPS hospital survey. Health Serv Res. 2005;40(6):2162–2181.
  15. Elliott MN, Lehrman WG, Beckett MK, et al. Gender differences in patients' perceptions of inpatient care. Health Serv Res. 2012;47(4):1482–1501.
  16. Elliott MN, Edwards C, Angeles J, et al. Patterns of unit and item nonresponse in the CAHPS hospital survey. Health Serv Res. 2005;40(6 pt 2):2096–2119.
  17. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405–411.
  18. Elliott MN, Haviland AM, Cleary PD, et al. Care experiences of managed care Medicare enrollees near the end of life. J Am Geriatr Soc. 2013;61(3):407–412.
  19. Kahn SA, Iannuzzi JC, Stassen NA, Bankey PE, Gestring M. Measuring satisfaction: factors that drive hospital consumer assessment of healthcare providers and systems survey responses in a trauma and acute care surgery population. Am Surg. 2015;81(5):537–543.
  20. Kelley AS, Morrison RS. Palliative care for the seriously ill. N Engl J Med. 2015;373(8):747–755.
  21. Elliott MN, Kanouse DE, Edwards CA, et al. Components of care vary in importance for overall patient‐reported experience by type of hospitalization. Med Care. 2009;47(8):842–849.
  22. Stringhini S, Berkman L, Dugravot A, et al. Socioeconomic status, structural and functional measures of social support, and mortality: the British Whitehall II cohort study, 1985–2009. Am J Epidemiol. 2012;175(12):1275–1283.
  23. Wennberg JE, Bronner K, Skinner JS, et al. Inpatient care intensity and patients' ratings of their hospital experiences. Health Aff (Millwood). 2009;28(1):103–112.
  24. Ravikumar TS, Sharma C, Marini C, et al. A validated value‐based model to improve hospital‐wide perioperative outcomes. Ann Surg. 2010;252(3):486–498.
  25. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998–1005.
  26. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921–1931.
  27. Needleman J, Buerhaus P, Pankratz S, Leibson CL, Stevens SR, Harris M. Nurse staffing and inpatient hospital mortality. N Engl J Med. 2011;364(11):1037–1045.
  28. Elliott MN, Lehrman WG, Goldstein E, et al. Do hospitals rank differently on HCAHPS for different patient subgroups? Med Care Res Rev. 2010;67(1):56–73.
  29. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk‐adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232–239.
  30. Moss AH, Ganjoo J, Sharma S, et al. Utility of the “surprise” question to identify dialysis patients with high mortality. Clin J Am Soc Nephrol. 2008;3(5):1379–1384.
Journal of Hospital Medicine - 11(9), pages 628–635

Few today deny the importance of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey.[1, 2] The Centers for Medicare and Medicaid Services' (CMS) Value Based Purchasing incentive, sympathy for the ill, and relationships between the patient experience and quality of care provide sufficient justification.[3, 4] How to improve the experience scores is not well understood. The national scores have improved only modestly over the past 3 years.[5, 6]

Clinicians may not typically compartmentalize what they do to improve outcomes versus the patient experience. A possible source for new improvement strategies is to understand the types of patients in which both adverse outcomes and suboptimal experiences are likely to occur, then redesign the multidisciplinary care processes to address both concurrently.[7] Previous studies support the existence of a relationship between a higher mortality risk on admission and subsequent worse outcomes, as well as a relationship between worse outcomes and lower HCAHPS scores.[8, 9, 10, 11, 12, 13] We hypothesized the mortality risk on admission, patient experience, and outcomes might share a triad relationship (Figure 1). In this article we explore the third edge of this triangle, the association between the mortality risk on admission and the subsequent patient experience.

Figure 1
Conceptual relationships between patients' severity of illness, experience of care (Hospital Consumer Assessment of Healthcare Providers and Systems Survey), and clinical outcomes. The absence of directional arrows between apices signifies associations without implying causality. We propose the admission severity of illness triggers stratum‐based interventions designed to improve both the clinical outcomes and the experience of care.

METHODS

We studied HCAHPS from 5 midwestern US hospitals having 113, 136, 304, 443, and 537 licensed beds, affiliated with the same multistate healthcare system. HCAHPS telephone surveys were administered via a vendor to a random sample of inpatients 18 years of age or older discharged from January 1, 2012 through June 30, 2014. Per CMS guidelines, surveyed patients must have been discharged alive after a hospital stay of at least 1 night.[14] Patients ineligible to be surveyed included those discharged to skilled nursing facilities or hospice care.[14] Because not all study hospitals provided obstetrical services, we restricted the analyses to medical and surgical respondents. With the permission of the local institutional review board, subjects' survey responses were linked confidentially to their clinical data.

We focused on the 8 dimensions of the care experience used in the CMS Value Based Purchasing program: communication with doctors, communication with nurses, responsiveness of hospital staff, pain management, communication about medicines, discharge information, hospital environment, and an overall rating of the hospital.[2] Following the scoring convention for publicly reported results, we dichotomized the 4‐level Likert scales into the most favorable response possible (always) versus all other responses.[15] Similarly, we dichotomized the hospital rating scale at 9 and above for the most favorable response.
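The dichotomization described above amounts to a simple recoding, sketched below; the function names are illustrative, not the study's actual code.

```python
def top_box_likert(response: str) -> int:
    """1 if the 4-level Likert answer is the most favorable ('always'), else 0."""
    return 1 if response == "always" else 0

def top_box_overall(rating: int) -> int:
    """1 if the 0-10 overall hospital rating is 9 or above, else 0."""
    return 1 if rating >= 9 else 0
```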

Our unit of analysis was an individual hospitalization. Our primary outcome of interest was whether or not the respondent provided the most favorable response for all questions answered within a given domain. For example, for the physician communication domain, the patient must have answered always to each of the 3 questions answered within the domain. This approach is appropriate for learning which patient‐level factors influence the survey responses, but differs from that used for the publicly reported domain scores, for which the relative performance of hospitals is the focus.[16] For the latter, the hospital is the unit of analysis, and the domain score is essentially the average of the percentages of top box scores for the questions within a domain. For example, if 90% of respondents from a hospital provided a top box response for courtesy, 80% for listening, and 70% for explanation, the hospital's physician communication score would be (90 + 80 + 70)/3 = 80%.[17]
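The two scoring approaches can be contrasted in a short sketch (helper names are hypothetical); the second function reproduces the worked example in the text.

```python
def patient_domain_top_box(answers):
    """Patient-level outcome used in this study: 1 only if every
    question answered within the domain received the top-box response."""
    return 1 if answers and all(a == 1 for a in answers) else 0

def hospital_domain_score(question_pcts):
    """Publicly reported convention: average the per-question
    percentages of top-box responses across the domain."""
    return sum(question_pcts) / len(question_pcts)

print(hospital_domain_score([90, 80, 70]))  # 80.0, as in the worked example
print(patient_domain_top_box([1, 1, 0]))    # 0: one answer was not top box
```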

Our primary explanatory variable was a binary high‐ versus low‐mortality‐risk status of the respondent on admission, based on age, gender, prior hospitalizations, clinical laboratory values, and diagnoses present on admission.[12] The calculated mortality risk was then dichotomized, prior to the analysis, at a probability of dying of 0.07 or higher. This corresponded roughly to the top quintile of predicted risk found in prior studies.[12, 13] During the study period, only 2 of the hospitals had the capability of generating mortality scores in real time, so for this study the mortality risk was calculated retrospectively, using information deemed present on admission.[12]
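The dichotomization of the predicted risk is a one-line rule, sketched here with a hypothetical function name:

```python
def mortality_risk_stratum(p_death: float, cutoff: float = 0.07) -> str:
    """Assign the high-risk stratum when the predicted 30-day mortality
    probability is 0.07 or higher (roughly the top quintile of predicted
    risk in the prior studies cited in the text)."""
    return "high" if p_death >= cutoff else "low"
```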

To estimate the sample size, we assumed that the high‐risk stratum contained approximately 13% of respondents, and that the true percentage of top box responses from patients in the lower‐risk stratum was approximately 80% for each domain. A meaningful difference in the proportion of most favorable responses was considered to be an odds ratio (OR) of 0.75 for high risk versus low risk. A significance level of P < 0.003 was set to control study‐wide type I error due to multiple comparisons. We determined that for each dimension, approximately 8583 survey responses would be required for low‐risk patients and approximately 1116 responses for high‐risk patients to achieve 80% power under these assumptions. We were able to accrue the target number of surveys for all but 3 domains (pain management, communication about medicines, and hospital environment) because of data availability, and because patients are allowed to skip questions that do not apply. Univariate relationships were examined with χ², t, and Fisher exact tests where indicated. Generalized linear mixed regression models with a logit link were fit to determine the association between patient mortality risk and the top box experience for each of the HCAHPS domains and for the overall rating. The patient's hospital was treated as a random intercept to account for the patient‐hospital hierarchy and for unmeasured hospital‐specific practices. The multivariable models controlled for gender plus the HCAHPS patient‐mix adjustment variables of age, education, self‐rated health, language spoken at home, service line, and the number of days elapsed between the date of discharge and the date of the survey.[18, 19, 20, 21] In keeping with industry analyses, a second‐order interaction term between surgical service and age was included.[19] We considered the potential collinearity between mortality‐risk status, age, and patient self‐reported health. Because the variance inflation factors were small, we drew inference from the full multivariable model.
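As a rough check of these figures, the two-proportion power calculation can be sketched with a standard normal approximation and the arcsine effect size. The authors' exact software and formula are not reported, so this is an illustrative approximation only.

```python
import math
from statistics import NormalDist

def high_risk_sample_size(p_low=0.80, odds_ratio=0.75,
                          alpha=0.003, power=0.80, ratio=8583 / 1116):
    """Approximate number of high-risk respondents needed per domain.

    p_low      -- assumed top-box proportion in the low-risk stratum
    odds_ratio -- minimum meaningful OR (high vs low risk)
    ratio      -- low-risk respondents per high-risk respondent
    """
    # Top-box proportion in the high-risk stratum implied by the OR
    odds_high = p_low / (1 - p_low) * odds_ratio
    p_high = odds_high / (1 + odds_high)  # 0.75 under these assumptions
    # Cohen's arcsine effect size for two proportions
    h = 2 * (math.asin(math.sqrt(p_low)) - math.asin(math.sqrt(p_high)))
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    # Two-sample normal approximation with unequal group sizes:
    # the variance term is proportional to (1 + 1/ratio) / n_high
    return (z_alpha + z_beta) ** 2 * (1 + 1 / ratio) / h ** 2

# Yields a figure on the order of the ~1116 high-risk responses reported.
```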

We also performed a post hoc sensitivity analysis to determine if our conclusions were biased due to missing patient responses for the risk‐adjustment variables. Accordingly, we imputed the response level most negatively associated with most HCAHPS domains as previously reported and reran the multivariable models.[19] We did not find a meaningful change in our conclusions (see Supporting Figure 1 in the online version of this article).

RESULTS

The hospitals discharged 152,333 patients during the study period, 39,905 of whom (26.2%) had a predicted 30‐day mortality risk greater than or equal to 0.07 (Table 1). Of the 36,280 high‐risk patients discharged alive, 5901 (16.3%) died in the ensuing 30 days, and 7951 (22%) were readmitted.

Characteristics and HCAHPS Results
Characteristic Low‐Risk Stratum, No./Discharged (%) or Mean (SD) High‐Risk Stratum, No./Discharged (%) or Mean (SD) P Value*
  • NOTE: Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems Survey; SD, standard deviation. *A χ² test evaluated categorical variables, whereas a t test evaluated continuous variables. Variables evaluated as continuous. Most favorable response. Sixty‐eight records have missing gender information.

Total discharges (row percent) 112,428/152,333 (74) 39,905/152,333 (26) <0.001
Total alive discharges (row percent) 111,600/147,880 (75) 36,280/147,880 (25) <0.001
No. of respondents (row percent) 14,996/17,509 (86) 2,513/17,509 (14)
HCAHPS surveys completed 14,996/111,600 (13) 2,513/36,280 (7) <0.001
Readmissions within 30 days (total discharges) 12,311/112,428 (11) 7,951/39,905 (20) <0.001
Readmissions within 30 days (alive discharges) 12,311/111,600 (11) 7,951/36,280 (22) <0.001
Readmissions within 30 days (respondents) 1,220/14,996 (8) 424/2,513 (17) <0.001
Mean predicted probability of 30‐day mortality (total discharges) 0.022 (0.018) 0.200 (0.151) <0.001
Mean predicted probability of 30‐day mortality (alive discharges) 0.022 (0.018) 0.187 (0.136) <0.001
Mean predicted probability of 30‐day mortality (respondents) 0.020 (0.017) 0.151 (0.098) <0.001
In‐hospital death (total discharges) 828/112,428 (0.74) 3,625/39,905 (9) <0.001
Mortality within 30 days (total discharges) 2,455/112,428 (2) 9,526/39,905 (24) <0.001
Mortality within 30 days (alive discharges) 1,627/111,600 (1.5) 5,901/36,280 (16) <0.001
Mortality within 30 days (respondents) 9/14,996 (0.06) 16/2,513 (0.64) <0.001
Female (total discharges) 62,681/112,368 (56) 21,058/39,897 (53) <0.001
Female (alive discharges) 62,216/111,540 (56) 19,164/36,272 (53) <0.001
Female (respondents) 8,684/14,996 (58) 1,318/2,513 (52) <0.001
Age (total discharges) 61.3 (16.8) 78.3 (12.5) <0.001
Age (alive discharges) 61.2 (16.8) 78.4 (12.5) <0.001
Age (respondents) 63.1 (15.2) 76.6 (11.5) <0.001
Highest education attained
8th grade or less 297/14,996 (2) 98/2,513 (4)
Some high school 1,190/14,996 (8) 267/2,513 (11)
High school grad 4,648/14,996 (31) 930/2,513 (37) <0.001
Some college 6,338/14,996 (42) 768/2,513 (31)
4‐year college grad 1,502/14,996 (10) 183/2,513 (7)
Missing response 1,021/14,996 (7) 267/2,513 (11)
Language spoken at home
English 13,763/14,996 (92) 2,208/2,513 (88)
Spanish 56/14,996 (0.37) 8/2,513 (0.32) 0.47
Chinese 153/14,996 (1) 31/2,513 (1)
Missing response 1,024/14,996 (7) 266/2,513 (11)
Self‐rated health
Excellent 1,399/14,996 (9) 114/2,513 (5)
Very good 3,916/14,996 (26) 405/2,513 (16)
Good 4,861/14,996 (32) 713/2,513 (28)
Fair 2,900/14,996 (19) 652/2,513 (26) <0.001
Poor 1,065/14,996 (7) 396/2,513 (16)
Missing response 855/14,996 (6) 233/2,513 (9)
Length of hospitalization, d (respondents) 3.5 (2.8) 4.6 (3.6) <0.001
Consulting specialties (respondents) 1.7 (1.0) 2.2 (1.3) <0.001
Service line
Surgical 6,380/14,996 (43) 346/2,513 (14) <0.001
Medical 8,616/14,996 (57) 2,167/2,513 (86)
HCAHPS
Domain 1: Communication With Doctors 9,564/14,731 (65) 1,339/2,462 (54) <0.001
Domain 2: Communication With Nurses 10,097/14,991 (67) 1,531/2,511 (61) <0.001
Domain 3: Responsiveness of Hospital Staff 7,813/12,964 (60) 1,158/2,277 (51) <0.001
Domain 4: Pain Management 6,565/10,424 (63) 786/1,328 (59) 0.007
Domain 5: Communication About Medicines 3,769/8,088 (47) 456/1,143 (40) <0.001
Domain 6: Discharge Information 11,331/14,033 (81) 1,767/2,230 (79) 0.09
Domain 7: Hospital Environment 6,981/14,687 (48) 1,093/2,451 (45) 0.007
Overall rating 10,708/14,996 (71) 1,695/2,513 (67) <0.001

The high‐risk subset was under‐represented among those who completed the HCAHPS survey: 7% (2513/36,280) completed surveys, compared with 13% of low‐risk patients (14,996/111,600) (P < 0.0001). Moreover, compared with high‐risk patients who were alive at discharge but did not complete surveys, high‐risk survey respondents were less likely to die within 30 days (16/2513 = 0.64% vs 5885/33,767 = 17.4%, P < 0.0001) and less likely to be readmitted (424/2513 = 16.9% vs 7527/33,767 = 22.3%, P < 0.0001).

On average, high‐risk respondents (compared to low risk) were slightly less likely to be female (52.4% vs 57.9%), less educated (30.6% with some college vs 42.3%), less likely to have been on a surgical service (13.8% vs 42.5%), and less likely to report good or better health (49.0% vs 68.0%, all P < 0.0001). High‐risk respondents were also older (76.6 vs 63.1 years), stayed in the hospital longer (4.6 vs 3.5 days), and received care from more specialties (2.2 vs 1.7 specialties) (all P < 0.0001). High‐risk respondents experienced more 30‐day readmissions (16.9% vs 8.1%) and deaths within 30 days (0.6% vs 0.1%, all P < 0.0001) than their low‐risk counterparts.

High‐risk respondents were less likely to provide the most favorable response (unadjusted) for all HCAHPS domains compared to low‐risk respondents, although the difference was not significant for discharge information (Table 1, Figure 2A). The gradient between high‐risk and low‐risk patients was seen for all domains within each hospital except for pain management, hospital environment, and overall rating (Figure 3).

Figure 2
Odds ratios for a high‐risk patient reporting a top box experience (relative to a low‐risk patient) as a single explanatory variable (A) and when controlling for hospital and Hospital Consumer Assessment of Healthcare Providers and Systems Survey risk‐adjustment factors (B).
Figure 3
Unadjusted differences in the percentage of top box responses between low‐risk patients (green column) and high‐risk (red column) for each study hospital for domains 1 to 4 (A) and domains 5 to 7 and overall (B). Each green‐red dyad represents the responses within a study hospital. The general pattern is lower scores for high‐risk (red) patients across domains per hospital.

The multivariable regression models examined whether the mortality risk on admission simply represented older medical patients and/or those who considered themselves unhealthy (Figure 2B) (see Supporting Table 1 in the online version of this article). Accounting for hospital, age, gender, language, self‐reported health, educational level, service line, and days elapsed from discharge, respondents in the high‐mortality‐risk stratum were still less likely to report an always experience for doctor communication (OR: 0.85; 95% confidence interval [CI]: 0.77‐0.94) and responsiveness of hospital staff (OR: 0.77; 95% CI: 0.69‐0.85). Higher‐risk patients also tended to have less favorable experiences with nursing communication, although the CI crossed 1 (OR: 0.91; 95% CI: 0.82‐1.01). In contrast, higher‐risk patients were more likely to provide top box responses for having received discharge information (OR: 1.30; 95% CI: 1.14‐1.48). We did not find independent associations between mortality risk and the other domains when the patient risk‐adjustment factors were considered.[18, 19, 20, 21]

DISCUSSION

The high‐mortality‐risk stratum on admission contained a subset of patients who provided less favorable responses for almost all incentivized HCAHPS domains when other risk‐adjustment variables were not taken into consideration (Figure 2A). These univariate relationships weakened when we controlled for gender, the standard HCAHPS risk‐adjustment variables, and individual hospital influences (Figure 2B).[18, 19, 20, 21] After multivariable adjustment, survey respondents in the high‐risk category remained less likely to report their physicians always communicated well and to experience hospital staff responding quickly, but were more likely to report receiving discharge information. We did not find an independent association between the underlying mortality risk and the other incentivized HCAHPS domains after risk adjustment.

We are cautious with initial interpretations of our findings in light of the relatively small number of hospitals studied and the substantial survey response bias of healthier patients. Undoubtedly, the CMS exclusions of patients discharged to hospice or skilled nursing facilities provide a partial explanation for the selection bias, but the experience of those at high risk who did not complete surveys remains conjecture at this point.[14] Previous evidence suggests sicker patients and those with worse experiences are less likely to respond to the HCAHPS survey.[18, 22] On the other hand, it is possible that high‐risk nonrespondents who died could have received better communication and staff responsiveness.[23, 24] We were unable to find a previous, patient‐level study that explicitly tested the association between the admission mortality risk and the subsequent patient experience, yet our findings are consistent with a previous single‐site study of a surgical population showing lower overall ratings from patients with higher Injury Severity Scores.[25]

Our findings provide evidence of complex relationships among admission mortality risk, the 3 domains of the patient experience, and adverse outcomes, at least within the study hospitals (Figure 1). The developing field of palliative care has found very ill patients have special communication needs regarding goals of care, as well as physical symptoms, anxiety, and depression that might prompt more calls for help.[26] If these needs were more important for high‐risk compared to low‐risk patients, and were either not recognized or adequately addressed by the clinical teams at the study hospitals, then the high‐risk patients may have been less likely to perceive their physicians listened and explained things well, or that staff responded promptly to their requests for help.[27] On the other hand, the higher ratings for discharge information suggest the needs of the high‐risk patients were relatively easier to address by current practices at these hospitals. The lack of association between the mortality risk and the other HCAHPS domains may reflect the relatively stronger influence of age, gender, educational level, provider variability, and other unmeasured influences within the study sites, or that the level of patient need was similar among high‐risk and low‐risk patients within these domains.[27]

There are several possible confounders of our observed relationship between mortality risk and HCAHPS scores. The first category of confounders represents patient‐level variables that might impact the communication scores, some of which are part of the formula of our mortality prediction rule, for example, cognitive impairment and emergent admission.[18, 22, 27] The effect of the mortality risk could also be confounded by unmeasured patient‐level factors such as lower socioeconomic status.[28] A second category of confounders pertains to clinical outcomes and processes of care associated with serious illness irrespective of the risk of dying. More physicians involved in the care of the seriously ill (Table 1) may impact the communication scores, due to the larger opportunity for conflicting or confusing information presented to patients and their families.[29] The longer hospital stays, readmissions, and adverse events of the seriously ill may also underlie the apparent association between mortality risk and HCAHPS scores.[8, 9, 10]

Even if we do not understand precisely if and how the mortality risk might be associated with suboptimal physician communication and staff responsiveness, there may still be some value in considering how these possible relationships could be leveraged to improve patient care. We recall Berwick's insight that every system is perfectly designed to achieve the results it achieves.[7] We have previously argued for the use of mortality‐risk strata to initiate concurrent, multidisciplinary care processes to reduce adverse outcomes.[12, 13] Others have used risk‐based approaches for anticipating clinical deterioration of surgical patients, and for determining the intensity of individualized case management services.[30, 31] In this framework, all patients receive a standard set of care processes, but higher‐risk patients receive additional efforts to promote better outcomes. An efficient extension of this approach is to assume patients at risk for adverse outcomes also have additional needs for communication, coordination of specialty care, and timely response to the call button. The admission mortality risk could be used as a determinant for the level of nurse staffing, both to reduce deaths and to shorten response time to the call button.[32, 33] Hospitalists and specialists could work together on a standard way to conference among themselves for high‐risk patients above that needed for less‐complex cases. Patients in the high‐risk strata could be screened early to see if they might benefit from the involvement of the palliative care team.[26]
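The stratum-triggered framework described above can be illustrated schematically; the intervention lists below are hypothetical examples drawn from the text, not a validated protocol.

```python
STANDARD_CARE = [
    "standard nurse staffing",
    "routine discharge instructions",
]

# Possible high-risk additions suggested in the text (illustrative only):
HIGH_RISK_ADDITIONS = [
    "augmented nurse staffing / faster call-button response",
    "structured hospitalist-specialist conferencing",
    "early palliative care screening",
]

def care_bundle(stratum: str) -> list:
    """Return the care processes triggered for a given mortality-risk stratum:
    everyone gets the standard set; high-risk patients get additional efforts."""
    bundle = list(STANDARD_CARE)
    if stratum == "high":
        bundle.extend(HIGH_RISK_ADDITIONS)
    return bundle
```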

Our study has limitations in addition to those already noted. First, our use of the top box as the formulation of the outcome of interest could be challenged. We chose this to be relevant to the Value‐Based Purchasing environment, but other formulations or use of other survey instruments may be needed to tease out the complex relationships we hypothesize. Next, we do not know the extent to which the patients and care processes reflected in our study represent other settings. The literature indicates some hospitals are more effective in providing care for certain subgroups of patients than for others, and that there is substantial regional variation in care intensity that is in turn associated with the patient experience.[29, 34] The mortality‐risk experience relationship for nonstudy hospitals could be weaker or stronger than what we found. Third, many hospitals may not have the capability to generate mortality scores on admission, although more hospitals may be able to do so in the future.[35] Explicit risk strata have the benefit of providing members of the multidisciplinary team with a quick preview of the clinical needs and prognoses of patients in much the way that the term baroque alerts the audience to the genre of music. Still, clinicians in any hospital could attempt to improve outcomes and experience through the use of informal risk assessment during interdisciplinary care rounds or by simply asking the team if they would be surprised if this patient died in the next year.[30, 36] Finally, we do not know if awareness of an experience risk will identify remediable practices that actually improve the experience. Clearly, future studies are needed to answer all of these concerns.

We have provided evidence that a group of patients who were at elevated risk for dying at the time of admission were more likely to have issues with physician communication and staff responsiveness than their lower‐risk counterparts. While we await future studies to confirm these findings, clinical teams can consider whether or not their patients' HCAHPS scores reflect how their system of care addresses the needs of these vulnerable people.

Acknowledgements

The authors thank Steven Lewis for assistance in the interpretation of the HCAHPS scores, Bonita Singal, MD, PhD, for initial statistical consultation, and Frank Smith, MD, for reviewing an earlier version of the manuscript. The authors acknowledge the input of the peer reviewers.

Disclosures: Dr. Cowen and Mr. Kabara had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: all authors. Acquisition, analysis or interpretation of data: all authors. Drafting of the manuscript: Dr. Cowen and Mr. Kabara. Critical revision of the manuscript for important intellectual content: all authors. Statistical analysis: Dr. Cowen and Mr. Kabara. Administrative, technical or material support: Ms. Czerwinski. Study supervision: Dr. Cowen and Ms. Czerwinski. Funding/support: internal. Conflicts of interest disclosures: no potential conflicts reported.

Few today deny the importance of the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey.[1, 2] The Centers for Medicare and Medicaid Services' (CMS) Value Based Purchasing incentive, sympathy for the ill, and relationships between the patient experience and quality of care provide sufficient justification.[3, 4] How to improve the experience scores is not well understood. The national scores have improved only modestly over the past 3 years.[5, 6]

Clinicians may not typically compartmentalize what they do to improve outcomes versus the patient experience. A possible source for new improvement strategies is to understand the types of patients in which both adverse outcomes and suboptimal experiences are likely to occur, then redesign the multidisciplinary care processes to address both concurrently.[7] Previous studies support the existence of a relationship between a higher mortality risk on admission and subsequent worse outcomes, as well as a relationship between worse outcomes and lower HCAHPS scores.[8, 9, 10, 11, 12, 13] We hypothesized the mortality risk on admission, patient experience, and outcomes might share a triad relationship (Figure 1). In this article we explore the third edge of this triangle, the association between the mortality risk on admission and the subsequent patient experience.

Figure 1
Conceptual relationships between patients' severity of illness, experience of care (Hospital Consumer Assessment of Healthcare Providers and Systems Survey), and clinical outcomes. The absence of directional arrows between apices signifies associations without implying causality. We propose the admission severity of illness triggers stratum‐based interventions designed to improve both the clinical outcomes and the experience of care.

METHODS

We studied HCAHPS from 5 midwestern US hospitals having 113, 136, 304, 443, and 537 licensed beds, affiliated with the same multistate healthcare system. HCAHPS telephone surveys were administered via a vendor to a random sample of inpatients 18 years of age or older discharged from January 1, 2012 through June 30, 2014. Per CMS guidelines, surveyed patients must have been discharged alive after a hospital stay of at least 1 night.[14] Patients ineligible to be surveyed included those discharged to skilled nursing facilities or hospice care.[14] Because not all study hospitals provided obstetrical services, we restricted the analyses to medical and surgical respondents. With the permission of the local institutional review board, subjects' survey responses were linked confidentially to their clinical data.

We focused on the 8 dimensions of the care experience used in the CMS Value Based Purchasing program: communication with doctors, communication with nurses, responsiveness of hospital staff, pain management, communication about medicines, discharge information, hospital environment, and an overall rating of the hospital.[2] Following the scoring convention for publicly reported results, we dichotomized the 4‐level Likert scales into the most favorable response possible (always) versus all other responses.[15] Similarly we dichotomized the hospital rating scale at 9 and above for the most favorable response.

Our unit of analysis was an individual hospitalization. Our primary outcome of interest was whether or not the respondent provided the most favorable response for all questions answered within a given domain. For example, for the physician communication domain, the patient must have answered always to each of the 3 questions answered within the domain. This approach is appropriate for learning which patient‐level factors influence the survey responses, but differs from that used for the publically reported domain scores for which the relative performance of hospitals is the focus.[16] For the latter, the hospital was the unit of analysis, and the domain score was basically the average of the percentages of top box scores for the questions within a domain. For example, if 90% respondents from a hospital provided a top box response for courtesy, 80% for listening, and 70% for explanation, the hospital's physician communication score would be (90 + 80 + 70)/3 = 80%.[17]

Our primary explanatory variable was a binary high versus low mortality‐risk status of the respondent on admission based on age, gender, prior hospitalizations, clinical laboratory values, and diagnoses present on admission.[12] The calculated mortality risk was then dichotomized prior to the analysis at a probability of dying equal to 0.07 or higher. This corresponded roughly to the top quintile of predicted risk found in prior studies.[12, 13] During the study period, only 2 of the hospitals had the capability of generating mortality scores in real time, so for this study the mortality risk was calculated retrospectively, using information deemed present on admission.[12]

To estimate the sample size, we assumed that the high‐risk strata contained approximately 13% of respondents, and that the true percent of top box responses from patients in the lower‐risk stratum was approximately 80% for each domain. A meaningful difference in the proportion of most favorable responses was considered as an odds ratio (OR) of 0.75 for high risk versus low risk. A significance level of P < 0.003 was set to control study‐wide type I error due to multiple comparisons. We determined that for each dimension, approximately 8583 survey responses would be required for low‐risk patients and approximately 1116 responses for high‐risk patients to achieve 80% power under these assumptions. We were able to accrue the target number of surveys for all but 3 domains (pain management, communication about medicines, and hospital environment) because of data availability, and because patients are allowed to skip questions that do not apply. Univariate relationships were examined with 2, t test, and Fisher exact tests where indicated. Generalized linear mixed regression models with a logit link were fit to determine the association between patient mortality risk and the top box experience for each of the HCAHPS domains and for the overall rating. The patient's hospital was considered a random intercept to account for the patient‐hospital hierarchy and the unmeasured hospital‐specific practices. The multivariable models controlled for gender plus the HCAHPS patient‐mix adjustment variables of age, education, self‐rated health, language spoken at home, service line, and the number of days elapsed between the date of discharge and date of the survey.[18, 19, 20, 21] In keeping with the industry analyses, a second order interaction variable was included between surgery patients and age.[19] We considered the potential collinearity between the mortality risk status, age, and patient self‐reported health. 
We found the variance inflation factors were small, so we drew inference from the full multivariable model.

We also performed a post hoc sensitivity analysis to determine if our conclusions were biased due to missing patient responses for the risk‐adjustment variables. Accordingly, we imputed the response level most negatively associated with most HCAHPS domains as previously reported and reran the multivariable models.[19] We did not find a meaningful change in our conclusions (see Supporting Figure 1 in the online version of this article).

RESULTS

The hospitals discharged 152,333 patients during the study period, 39,905 of whom (26.2 %) had a predicted 30‐day mortality risk greater or equal to 0.07 (Table 1). Of the 36,280 high‐risk patients discharged alive, 5901 (16.3%) died in the ensuing 30 days, and 7951 (22%) were readmitted.

Characteristics and HCAHPS Results
Characteristic Low‐Risk Stratum, No./Discharged (%) or Mean (SD) High‐Risk Stratum, No./Discharged (%) or Mean (SD) P Value*
  • NOTE: Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems Survey; SD, standard deviation. *A 2 test evaluated categorical variables, whereas a t test evaluated continuous variables. Variables evaluated as continuous. Most favorable response. Sixty‐eight records have missing gender information.

Total discharges (row percent) 112,428/152,333 (74) 39,905/152,333 (26) <0.001
Total alive discharges (row percent) 111,600/147,880 (75) 36,280/147,880 (25) <0.001
No. of respondents (row percent) 14,996/17,509 (86) 2,513/17,509 (14)
HCAHPS surveys completed 14,996/111,600 (13) 2,513/36,280 (7) < 0.001
Readmissions within 30 days (total discharges) 12,311/112,428 (11) 7,951/39,905 (20) <0.001
Readmissions within 30 days (alive discharges) 12,311/111,600 (11) 7,951/36,280 (22) <0.001
Readmissions within 30 days (respondents) 1,220/14,996 (8) 424/2,513 (17) <0.001
Mean predicted probability of 30‐day mortality (total discharges) 0.022 (0.018) 0.200 (0.151) <0.001
Mean predicted probability of 30‐day mortality (alive discharges) 0.022 (0.018) 0.187 (0.136) <0.001
Mean predicted probability of 30‐day mortality (respondents) 0.020 (0.017) 0.151 (0.098) <0.001
In‐hospital death (total discharges) 828/112,428 (0.74) 3,625/39,905 (9) <0.001
Mortality within 30 days (total discharges) 2,455/112,428 (2) 9,526/39,905 (24) <0.001
Mortality within 30 days (alive discharges) 1,627/111,600 (1.5) 5,901/36,280 (16) <0.001
Mortality within 30 days (respondents) 9/14,996 (0.06) 16/2,513 (0.64) <0.001
Female (total discharges) 62,681/112,368 (56) 21,058/39,897 (53) <0.001
Female (alive discharges) 62,216/111,540 (56) 19,164/36,272 (53) <0.001
Female (respondents) 8,684/14,996 (58) 1,318/2,513 (52) <0.001
Age (total discharges) 61.3 (16.8) 78.3 (12.5) <0.001
Age (alive discharges) 61.2 (16.8) 78.4 (12.5) <0.001
Age (respondents) 63.1 (15.2) 76.6 (11.5) <0.001
Highest education attained
8th grade or less 297/14,996 (2) 98/2,513 (4)
Some high school 1,190/14,996 (8) 267/2,513 (11)
High school grad 4,648/14,996 (31) 930/2,513 (37) <0.001
Some college 6,338/14,996 (42) 768/2,513 (31)
4‐year college grad 1,502/14,996 (10) 183/2,513 (7)
Missing response 1,021/14,996 (7) 267/2,513 (11)
Language spoken at home
English 13,763/14,996 (92) 2,208/2,513 (88)
Spanish 56/14,996 (0.37) 8/2,513 (0.32) 0.47
Chinese 153/14,996 (1) 31/2,513 (1)
Missing response 1,024/14,996 (7) 266/2,513 (11)
Self‐rated health
Excellent 1,399/14,996 (9) 114/2,513 (5)
Very good 3,916/14,996 (26) 405/2,513 (16)
Good 4,861/14,996 (32) 713/2,513 (28)
Fair 2,900/14,996 (19) 652/2,513 (26) <0.001
Poor 1,065/14,996 (7) 396/2,513 (16)
Missing response 855/14,996 (6) 233/2,513 (9)
Length of hospitalization, d (respondents) 3.5 (2.8) 4.6 (3.6) <0.001
Consulting specialties (respondents) 1.7 (1.0) 2.2 (1.3) <0.001
Service line
Surgical 6,380/14,996 (43) 346/2,513 (14) <0.001
Medical 8,616/14,996 (57) 2,167/2,513 (86)
HCAHPS
Domain 1: Communication With Doctors 9,564/14,731 (65) 1,339/2,462 (54) <0.001
Domain 2: Communication With Nurses 10,097/14,991 (67) 1,531/2,511 (61) <0.001
Domain 3: Responsiveness of Hospital Staff 7,813/12,964 (60) 1,158/2,277 (51) <0.001
Domain 4: Pain Management 6,565/10,424 (63) 786/1,328 (59) 0.007
Domain 5: Communication About Medicines 3,769/8,088 (47) 456/1,143 (40) <0.001
Domain 6: Discharge Information 11,331/14,033 (81) 1,767/2,230 (79) 0.09
Domain 7: Hospital Environment 6,981/14,687 (48) 1,093/2,451 (45) 0.007
Overall rating 10,708/14,996 (71) 1,695/2,513 (67) <0.001

The high‐risk subset was under‐represented among those who completed the HCAHPS survey: 7% (2,513/36,280) completed surveys compared to 13% of low‐risk patients (14,996/111,600) (P < 0.0001). Moreover, compared to high‐risk patients who were alive at discharge but did not complete surveys, high‐risk survey respondents were less likely to die within 30 days (16/2,513 = 0.64% vs 5,885/33,767 = 17.4%, P < 0.0001) and less likely to be readmitted (424/2,513 = 16.9% vs 7,527/33,767 = 22.3%, P < 0.0001).
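The reported response rates and the mortality contrast between high‐risk respondents and nonrespondents follow directly from the counts above; a quick check (counts taken from the text):

```python
# Survey response rates by risk stratum (counts from the text)
low_risk_rate = 14_996 / 111_600    # low-risk respondents / low-risk alive discharges
high_risk_rate = 2_513 / 36_280     # high-risk respondents / high-risk alive discharges

print(f"low-risk response rate:  {low_risk_rate:.1%}")   # ~13.4%
print(f"high-risk response rate: {high_risk_rate:.1%}")  # ~6.9%

# 30-day mortality among high-risk nonrespondents vs respondents
death_nonresp = 5_885 / 33_767   # ~17.4%
death_resp = 16 / 2_513          # ~0.64%
print(f"mortality ratio (nonrespondents/respondents): {death_nonresp / death_resp:.0f}x")
```

The roughly 27‐fold mortality difference between high‐risk nonrespondents and respondents illustrates the magnitude of the survivor/response bias discussed later.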

On average, high‐risk respondents (compared to low risk) were slightly less likely to be female (52.4% vs 57.9%), less educated (30.6% with some college vs 42.3%), less likely to have been on a surgical service (13.8% vs 42.5%), and less likely to report good or better health (49.0% vs 68.0%; all P < 0.0001). High‐risk respondents were also older (76.6 vs 63.1 years), stayed in the hospital longer (4.6 vs 3.5 days), and received care from more specialties (2.2 vs 1.7 specialties) (all P < 0.0001). High‐risk respondents experienced more 30‐day readmissions (16.9% vs 8.1%) and deaths within 30 days (0.6% vs 0.1%; all P < 0.0001) than their low‐risk counterparts.

High‐risk respondents were less likely to provide the most favorable response (unadjusted) for all HCAHPS domains compared to low‐risk respondents, although the difference was not significant for discharge information (Table 1, Figure 2A). The gradient between high‐risk and low‐risk patients was seen for all domains within each hospital except for pain management, hospital environment, and overall rating (Figure 3).

Figure 2
Odds ratios for a high‐risk patient reporting a top box experience (relative to a low‐risk patient) as a single explanatory variable (A) and when controlling for hospital and Hospital Consumer Assessment of Healthcare Providers and Systems Survey risk‐adjustment factors (B).
Figure 3
Unadjusted differences in the percentage of top box responses between low‐risk patients (green column) and high‐risk (red column) for each study hospital for domains 1 to 4 (A) and domains 5 to 7 and overall (B). Each green‐red dyad represents the responses within a study hospital. The general pattern is lower scores for high‐risk (red) patients across domains per hospital.

The multivariable regression models examined whether the mortality risk on admission simply represented older medical patients and/or those who considered themselves unhealthy (Figure 2B) (see Supporting Table 1 in the online version of this article). Accounting for hospital, age, gender, language, self‐reported health, educational level, service line, and days elapsed from discharge, respondents in the high‐mortality‐risk stratum were still less likely to report an "always" experience for doctor communication (odds ratio [OR]: 0.85; 95% confidence interval [CI]: 0.77‐0.94) and responsiveness of hospital staff (OR: 0.77; 95% CI: 0.69‐0.85). Higher‐risk patients also tended to have less favorable experiences with nursing communication, although the CI crossed 1 (OR: 0.91; 95% CI: 0.82‐1.01). In contrast, higher‐risk patients were more likely to provide top box responses for having received discharge information (OR: 1.30; 95% CI: 1.14‐1.48). We did not find independent associations between mortality risk and the other domains when the patient risk‐adjustment factors were considered.[18, 19, 20, 21]
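As background on how adjusted odds ratios of this form are reported, the sketch below converts a logistic‐regression coefficient (log‐odds scale) and its standard error into an OR with a 95% Wald interval. The coefficient and standard error here are illustrative values, not the study's estimates:

```python
import math

def odds_ratio_ci(beta: float, se: float, z: float = 1.96):
    """Odds ratio and 95% Wald CI from a logistic-regression
    coefficient (log-odds scale) and its standard error."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for a high-risk indicator, chosen only to
# reproduce an OR of similar size to those reported in the text
or_est, ci_lo, ci_hi = odds_ratio_ci(beta=-0.163, se=0.051)
print(f"OR {or_est:.2f} (95% CI {ci_lo:.2f}-{ci_hi:.2f})")  # OR 0.85 (95% CI 0.77-0.94)
```

An OR below 1 with a CI excluding 1 (as for doctor communication and staff responsiveness) indicates a lower adjusted odds of a top box response; a CI crossing 1 (as for nursing communication) does not.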

DISCUSSION

The high‐mortality‐risk stratum on admission contained a subset of patients who provided less favorable responses for almost all incentivized HCAHPS domains when other risk‐adjustment variables were not taken into consideration (Figure 2A). These univariate relationships weakened when we controlled for gender, the standard HCAHPS risk‐adjustment variables, and individual hospital influences (Figure 2B).[18, 19, 20, 21] After multivariable adjustment, survey respondents in the high‐risk category remained less likely to report that their physicians always communicated well and that hospital staff responded quickly, but more likely to report receiving discharge information. We did not find an independent association between the underlying mortality risk and the other incentivized HCAHPS domains after risk adjustment.

We are cautious with initial interpretations of our findings in light of the relatively small number of hospitals studied and the substantial survey response bias of healthier patients. Undoubtedly, the CMS exclusions of patients discharged to hospice or skilled nursing facilities provide a partial explanation for the selection bias, but the experience of those at high risk who did not complete surveys remains conjecture at this point.[14] Previous evidence suggests sicker patients and those with worse experiences are less likely to respond to the HCAHPS survey.[18, 22] On the other hand, it is possible that high‐risk nonrespondents who died could have received better communication and staff responsiveness.[23, 24] We were unable to find a previous, patient‐level study that explicitly tested the association between the admission mortality risk and the subsequent patient experience, yet our findings are consistent with a previous single‐site study of a surgical population showing lower overall ratings from patients with higher Injury Severity Scores.[25]

Our findings provide evidence of complex relationships among admission mortality risk, the 3 domains of the patient experience, and adverse outcomes, at least within the study hospitals (Figure 1). The developing field of palliative care has found very ill patients have special communication needs regarding goals of care, as well as physical symptoms, anxiety, and depression that might prompt more calls for help.[26] If these needs were more important for high‐risk compared to low‐risk patients, and were either not recognized or adequately addressed by the clinical teams at the study hospitals, then the high‐risk patients may have been less likely to perceive their physicians listened and explained things well, or that staff responded promptly to their requests for help.[27] On the other hand, the higher ratings for discharge information suggest the needs of the high‐risk patients were relatively easier to address by current practices at these hospitals. The lack of association between the mortality risk and the other HCAHPS domains may reflect the relatively stronger influence of age, gender, educational level, provider variability, and other unmeasured influences within the study sites, or that the level of patient need was similar among high‐risk and low‐risk patients within these domains.[27]

There are several possible confounders of our observed relationship between mortality risk and HCAHPS scores. The first category of confounders represents patient‐level variables that might impact the communication scores, some of which are part of the formula of our mortality prediction rule, for example, cognitive impairment and emergent admission.[18, 22, 27] The effect of the mortality risk could also be confounded by unmeasured patient‐level factors such as lower socioeconomic status.[28] A second category of confounders pertains to clinical outcomes and processes of care associated with serious illness irrespective of the risk of dying. More physicians involved in the care of the seriously ill (Table 1) may impact the communication scores, due to the larger opportunity for conflicting or confusing information presented to patients and their families.[29] The longer hospital stays, readmissions, and adverse events of the seriously ill may also underlie the apparent association between mortality risk and HCAHPS scores.[8, 9, 10]

Even if we do not understand precisely if and how the mortality risk might be associated with suboptimal physician communication and staff responsiveness, there may still be some value in considering how these possible relationships could be leveraged to improve patient care. We recall Berwick's insight that every system is perfectly designed to achieve the results it achieves.[7] We have previously argued for the use of mortality‐risk strata to initiate concurrent, multidisciplinary care processes to reduce adverse outcomes.[12, 13] Others have used risk‐based approaches for anticipating clinical deterioration of surgical patients and for determining the intensity of individualized case management services.[30, 31] In this framework, all patients receive a standard set of care processes, but higher‐risk patients receive additional efforts to promote better outcomes. An efficient extension of this approach is to assume patients at risk for adverse outcomes also have additional needs for communication, coordination of specialty care, and timely response to the call button. The admission mortality risk could be used as a determinant of the level of nurse staffing, both to reduce deaths and to shorten response times to the call button.[32, 33] Hospitalists and specialists could work together on a standard way to conference among themselves for high‐risk patients beyond that needed for less‐complex cases. Patients in the high‐risk strata could be screened early to see if they might benefit from the involvement of the palliative care team.[26]

Our study has limitations in addition to those already noted. First, our use of the top box as the formulation of the outcome of interest could be challenged. We chose this to be relevant to the Value‐Based Purchasing environment, but other formulations or use of other survey instruments may be needed to tease out the complex relationships we hypothesize. Next, we do not know the extent to which the patients and care processes reflected in our study represent other settings. The literature indicates some hospitals are more effective in providing care for certain subgroups of patients than for others, and that there is substantial regional variation in care intensity that is in turn associated with the patient experience.[29, 34] The mortality risk–experience relationship for nonstudy hospitals could be weaker or stronger than what we found. Third, many hospitals may not have the capability to generate mortality scores on admission, although more hospitals may be able to do so in the future.[35] Explicit risk strata have the benefit of providing members of the multidisciplinary team with a quick preview of the clinical needs and prognoses of patients, much the way the term baroque alerts the audience to the genre of music. Still, clinicians in any hospital could attempt to improve outcomes and experience through the use of informal risk assessment during interdisciplinary care rounds, or by simply asking the team if they would be surprised if this patient died in the next year.[30, 36] Finally, we do not know if awareness of an experience risk will identify remediable practices that actually improve the experience. Clearly, future studies are needed to address these concerns.

We have provided evidence that a group of patients who were at elevated risk for dying at the time of admission were more likely to have issues with physician communication and staff responsiveness than their lower‐risk counterparts. While we await future studies to confirm these findings, clinical teams can consider whether or not their patients' HCAHPS scores reflect how their system of care addresses the needs of these vulnerable people.

Acknowledgements

The authors thank Steven Lewis for assistance in the interpretation of the HCAHPS scores, Bonita Singal, MD, PhD, for initial statistical consultation, and Frank Smith, MD, for reviewing an earlier version of the manuscript. The authors acknowledge the input of the peer reviewers.

Disclosures: Dr. Cowen and Mr. Kabara had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. Study concept and design: all authors. Acquisition, analysis or interpretation of data: all authors. Drafting of the manuscript: Dr. Cowen and Mr. Kabara. Critical revision of the manuscript for important intellectual content: all authors. Statistical analysis: Dr. Cowen and Mr. Kabara. Administrative, technical or material support: Ms. Czerwinski. Study supervision: Dr. Cowen and Ms. Czerwinski. Funding/support: internal. Conflicts of interest disclosures: no potential conflicts reported.


References
  1. Goldstein E, Farquhar M, Crofton C, Darby C, Garfinkel S. Measuring hospital care from the patients' perspective: an overview of the CAHPS hospital survey development process. Health Serv Res. 2005;40(6 pt 2):1977–1995.
  2. Centers for Medicare 79(163):49854–50449.
  3. Isaac T, Zaslavsky AM, Cleary PD, Landon BE. The relationship between patients' perception of care and measures of hospital quality and safety. Health Serv Res. 2010;45(4):1024–1040.
  4. Centers for Medicare 312(7031):619–622.
  5. Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41–48.
  6. Iannuzzi JC, Kahn SA, Zhang L, Gestring ML, Noyes K, Monson JRT. Getting satisfaction: drivers of surgical Hospital Consumer Assessment of Healthcare Providers and Systems survey scores. J Surg Res. 2015;197(1):155–161.
  7. Tsai TC, Orav EJ, Jha AK. Patient satisfaction and quality of surgical care in US hospitals. Ann Surg. 2015;261(1):2–8.
  8. Kennedy GD, Tevis SE, Kent KC. Is there a relationship between patient satisfaction and favorable outcomes? Ann Surg. 2014;260(4):592–598; discussion 598–600.
  9. Cowen ME, Strawderman RL, Czerwinski JL, Smith MJ, Halasyamani LK. Mortality predictions on admission as a context for organizing care activities. J Hosp Med. 2013;8(5):229–235.
  10. Cowen ME, Czerwinski JL, Posa PJ, et al. Implementation of a mortality prediction rule for real‐time decision making: feasibility and validity. J Hosp Med. 2014;9(11):720–726.
  11. Centers for Medicare 40(6 pt 2):2078–2095.
  12. Centers for Medicare 44(2 pt 1):501–518.
  13. Patient‐mix coefficients for October 2015 (1Q14 through 4Q14 discharges) publicly reported HCAHPS results. Available at: http://www.hcahpsonline.org/Files/October_2015_PMA_Web_Document_a.pdf. Published July 2, 2015. Accessed August 4, 2015.
  14. O'Malley AJ, Zaslavsky AM, Elliott MN, Zaborski L, Cleary PD. Case‐mix adjustment of the CAHPS hospital survey. Health Serv Res. 2005;40(6):2162–2181.
  15. Elliott MN, Lehrman WG, Beckett MK, et al. Gender differences in patients' perceptions of inpatient care. Health Serv Res. 2012;47(4):1482–1501.
  16. Elliott MN, Edwards C, Angeles J, et al. Patterns of unit and item nonresponse in the CAHPS hospital survey. Health Serv Res. 2005;40(6 pt 2):2096–2119.
  17. Fenton JJ, Jerant AF, Bertakis KD, Franks P. The cost of satisfaction: a national study of patient satisfaction, health care utilization, expenditures, and mortality. Arch Intern Med. 2012;172(5):405–411.
  18. Elliott MN, Haviland AM, Cleary PD, et al. Care experiences of managed care Medicare enrollees near the end of life. J Am Geriatr Soc. 2013;61(3):407–412.
  19. Kahn SA, Iannuzzi JC, Stassen NA, Bankey PE, Gestring M. Measuring satisfaction: factors that drive Hospital Consumer Assessment of Healthcare Providers and Systems survey responses in a trauma and acute care surgery population. Am Surg. 2015;81(5):537–543.
  20. Kelley AS, Morrison RS. Palliative care for the seriously ill. N Engl J Med. 2015;373(8):747–755.
  21. Elliott MN, Kanouse DE, Edwards CA, et al. Components of care vary in importance for overall patient‐reported experience by type of hospitalization. Med Care. 2009;47(8):842–849.
  22. Stringhini S, Berkman L, Dugravot A, et al. Socioeconomic status, structural and functional measures of social support, and mortality: the British Whitehall II cohort study, 1985–2009. Am J Epidemiol. 2012;175(12):1275–1283.
  23. Wennberg JE, Bronner K, Skinner JS, et al. Inpatient care intensity and patients' ratings of their hospital experiences. Health Aff (Millwood). 2009;28(1):103–112.
  24. Ravikumar TS, Sharma C, Marini C, et al. A validated value‐based model to improve hospital‐wide perioperative outcomes. Ann Surg. 2010;252(3):486–498.
  25. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22(12):998–1005.
  26. Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921–1931.
  27. Needleman J, Buerhaus P, Pankratz S, Leibson CL, Stevens SR, Harris M. Nurse staffing and inpatient hospital mortality. N Engl J Med. 2011;364(11):1037–1045.
  28. Elliott MN, Lehrman WG, Goldstein E, et al. Do hospitals rank differently on HCAHPS for different patient subgroups? Med Care Res Rev. 2010;67(1):56–73.
  29. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk‐adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46(3):232–239.
  30. Moss AH, Ganjoo J, Sharma S, et al. Utility of the "surprise" question to identify dialysis patients with high mortality. Clin J Am Soc Nephrol. 2008;3(5):1379–1384.
Issue
Journal of Hospital Medicine - 11(9)
Page Number
628-635
Display Headline
The risk‐outcome‐experience triad: Mortality risk and the hospital consumer assessment of healthcare providers and systems survey
Article Source
© 2016 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Mark E. Cowen, MD, Quality Institute, St. Joseph Mercy Health System, Suite 400, 3075 Clark Road, Ypsilanti, MI 48197; Telephone: 734‐712‐8776; Fax: 734‐712‐8651; E‐mail: mark.cowen@stjoeshealth.org

Generating Mortality Predictions

Display Headline
Implementation of a mortality prediction rule for real‐time decision making: Feasibility and validity

The systematic deployment of prediction rules within health systems remains a challenge, although such decision aids have been available for decades.[1, 2] We previously developed and validated a prediction rule for 30‐day mortality in a retrospective cohort, noting that the mortality risk is associated with a number of other clinical events.[3] These relationships suggest risk strata, defined by the predicted probability of 30‐day mortality, and could trigger a number of coordinated care processes proportional to the level of risk.[4] For example, patients within the higher‐risk strata could be considered for placement into an intermediate or intensive care unit (ICU), be monitored more closely by physician and nurse team members for clinical deterioration, be seen by a physician within a few days of hospital discharge, and be considered for advance care planning discussions.[3, 4, 5, 6, 7] Patients within the lower‐risk strata might not need the same intensity of these processes routinely unless some other indication were present.

However attractive this conceptual framework may be, its realization is dependent on the willingness of clinical staff to generate predictions consistently on a substantial portion of the patient population, and on the accuracy of the predictions when the risk factors are determined with some level of uncertainty at the beginning of the hospitalization.[2, 8] Skepticism is justified, because the work involved in completing the prediction rule might be incompatible with existing workflow. A patient might not be scored if the emergency physician lacks time or if technical issues arise with the information system and computation process.[9] There is also a generic concern that the predictions will prove to be less accurate outside of the original study population.[8, 9, 10] A more specific concern for our rule is how well present on admission diagnoses can be determined during the relatively short emergency department or presurgery evaluation period. For example, a final diagnosis of heart failure might not be established until later in the hospitalization, after the results of diagnostic testing and clinical response to treatment are known. Moreover, our retrospective prediction rule requires an assessment of the presence or absence of sepsis and respiratory failure. These diagnoses appear to be susceptible to secular trends in medical record coding practices, suggesting the rule's accuracy might not be stable over time.[11]

We report the feasibility of having emergency physicians and the surgical preparation center team generate mortality predictions before an inpatient bed is assigned. We evaluate and report the accuracy of these prospective predictions.

METHODS

The study population consisted of all patients at least 18 and less than 100 years of age who were admitted from the emergency department or assigned an inpatient bed following elective surgery at a tertiary, community teaching hospital in the Midwestern United States from September 1, 2012 through February 15, 2013. Although patients entering the hospital from these 2 pathways would be expected to have different levels of mortality risk, we used the original prediction rule for both because such distinctions were not made in its derivation and validation. Patients were not considered if they were admitted for childbirth or other obstetrical reasons, or admitted directly from physician offices, the cardiac catheterization laboratory, the hemodialysis unit, or another hospital. The site institutional review board approved this study.

The implementation process began with presentations to the administrative and medical staff leadership on the accuracy of the retrospectively generated mortality predictions and risk of other adverse events.[3] The chief medical and nursing officers became project champions, secured internal funding for the technical components, and arranged to have 2 project comanagers available. A multidisciplinary task force endorsed the implementation details at biweekly meetings throughout the planning year. The leadership of the emergency department and surgical preparation center committed their colleagues to generate the predictions. The support of the emergency leadership was contingent on the completion of the entire prediction generating process in a very short time (within the time a physician could hold his/her breath). The chief medical officer, with the support of the leadership of the hospitalists and emergency physicians, made the administrative decision that a prediction must be generated prior to the assignment of a hospital room.

During the consensus‐building phase, a Web‐based application was developed to generate the predictions. Emergency physicians and surgical preparation staff were trained on the definitions of the risk factors (see Supporting Information, Appendix, in the online version of this article) and how to use the Web application. Three supporting databases were created. Each midnight, a past medical history database was updated, identifying those who had been discharged from the study hospital in the previous 365 days, and whether or not their diagnoses included atrial fibrillation, leukemia/lymphoma, metastatic cancer, cancer other than leukemia/lymphoma, cognitive disorder, or other neurological conditions (eg, Parkinson's, multiple sclerosis, epilepsy, coma, and stupor). Similarly, a clinical laboratory results database was created and updated in real time through an HL7 (Health Level Seven, a standard data exchange format[12]) interface with the laboratory information system for the following tests performed in the preceding 30 days at a hospital‐affiliated facility: hemoglobin, platelet count, white blood count, serum troponin, blood urea nitrogen, serum albumin, serum lactate, arterial pH, and arterial partial pressure of oxygen. The third database, admission‐discharge‐transfer, was created and updated every 15 minutes to identify patients currently in the emergency room or scheduled for surgery. When a patient registration event was added to this database, the Web application created a record, retrieved all relevant data, and displayed the patient name for scoring. When the decision for hospitalization was made, the clinician selected the patient's name and reviewed the pre‐populated medical diagnoses of interest, which could be overwritten based on his/her own assessment (Figure 1A,B).
The clinician then indicated (yes, no, or unknown) if the patient currently had or was being treated for each of the following: injury, heart failure, sepsis, respiratory failure, and whether or not the admitting service would be medicine (ie, nonsurgical, nonobstetrical). We considered unknown status to indicate the patient did not have the condition. When laboratory values were not available, a normal value was imputed using a previously developed algorithm.[3] Two additional questions, not used in the current prediction process, were answered to provide data for a future analysis: 1 concerning the change in the patient's condition while in the emergency department and the other concerning the presence of abnormal vital signs. The probability of 30‐day mortality was calculated via the Web application using the risk information supplied and the scoring weights (ie, parameter estimates) provided in the Appendices of our original publication.[3] Predictions were updated every minute as new laboratory values became available, and flagged with an alert if a more severe score resulted.
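The final step of this pipeline, combining the entered risk factors with published scoring weights to produce a predicted probability, follows the usual logistic form. A minimal sketch with entirely hypothetical weights (the actual parameter estimates are in the Appendices of the original publication[3]):

```python
import math

# Hypothetical log-odds scoring weights for illustration only;
# the real estimates come from the original prediction rule's appendices.
WEIGHTS = {
    "intercept": -4.0,
    "medical_service": 0.6,
    "heart_failure": 0.5,
    "sepsis": 1.2,
    "respiratory_failure": 1.0,
}

def predicted_mortality(risk_factors: dict) -> float:
    """Predicted 30-day mortality via the logistic function.
    Factors answered 'unknown' are treated as absent, as in the study."""
    logit = WEIGHTS["intercept"] + sum(
        WEIGHTS[name] for name, present in risk_factors.items() if present
    )
    return 1 / (1 + math.exp(-logit))

low = predicted_mortality({"medical_service": True})
high = predicted_mortality({"medical_service": True, "sepsis": True,
                            "respiratory_failure": True})
assert 0 < low < high < 1  # more risk factors -> higher predicted probability
```

The same structure also accommodates the application's behavior of recomputing the score each minute: as a new abnormal laboratory value replaces an imputed normal one, the corresponding weight enters the sum and the predicted probability is refreshed.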

Figure 1
Screen shots of the Web application used to generate predictions (A) Patient list. The clinician in the emergency department or surgical preparation center selects the patient to be scored. (B) Diagnosis‐based risk factors to be entered. The clinician provides an answer to each question and/or reviews information that has been prepopulated from the past medical history database. Clinical laboratory values and demographic information are electronically provided. After the diagnosis information has been supplied, the clinician presses the “Generate Score” button to obtain the predicted 30‐day mortality.

For the analyses of this study, the last prediction viewed by emergency department personnel, a hospital bed manager, or surgical suite staff prior to the patient's arrival on the nursing unit is the one referred to as prospective. Once the patient had been discharged from the hospital, we generated a second mortality prediction by applying the previously published parameter estimates to risk factor data ascertained retrospectively, as was done in the original article[3]; we subsequently refer to this prediction as retrospective. We report on the group of patients who had both prospective and retrospective scores (1 patient had a prospective but not a retrospective score available).

The prediction scores were made available to the clinical teams gradually during the study period. All scores were viewable by the midpoint of the study for emergency department admissions and near the end of the study for elective‐surgery patients. Only 2 changes in care processes based on level of risk were introduced during the study period. The first required initial placement of patients having a probability of dying of 0.3 or greater into an intensive or intermediate care unit unless the patient or family requested a less aggressive approach. The second occurred in the final 2 months of the study when a large multispecialty practice began routinely arranging for high‐risk patients to be seen within 3 or 7 days of hospital discharge.

Statistical Analyses

SAS version 9.3 (SAS Institute Inc., Cary, NC) was used to build the datasets and perform the analyses. Feasibility was evaluated by the number of patients who were candidates for prospective scoring with a score available at the time of admission. The validity was assessed with the primary outcome of death within 30 days from the date of hospital admission, as determined from hospital administrative data and the Social Security Death Index. The primary statistical metric is the area under the receiver operating characteristic curve (AROC) and the corresponding 95% Wald confidence limits. We needed some context for understanding the performance of the prospective predictions, assuming the accuracy could deteriorate due to the instability of the prediction rule over time and/or due to imperfect clinical information at the time the risk factors were determined. Accordingly, we also calculated an AROC based on retrospectively derived covariates (but using the same set of parameter estimates) as done in our original publication so we could gauge the stability of the original prediction rule. However, the motivation was not to determine whether retrospective versus prospective predictions were more accurate, given that only prospective predictions are useful in the context of developing real‐time care processes. Rather, we wanted to know if the prospective predictions would be sufficiently accurate for use in clinical practice. A priori, we assumed the prospective predictions should have an AROC of approximately 0.80. 
Therefore, a target sample size of 8660 hospitalizations was determined to be adequate to assess validity, assuming a 30‐day mortality rate of 5%, a desired lower 95% confidence boundary for the area under the prospective curve at or above 0.80, with a total confidence interval width of 0.07.[13] Calibration was assessed by comparing the actual proportion of patients dying (with 95% binomial confidence intervals) with the mean predicted mortality level within 5 percentile increments of predicted risk.
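The calibration check can be sketched as follows. The binning into 5-percentile increments follows the text; the normal-approximation binomial interval is an assumption, since the study reports binomial confidence intervals without naming the method.

```python
import math

def calibration_bins(pred, died, n_bins=20):
    """Group patients into equal-size bins of increasing predicted risk
    (5-percentile increments when n_bins=20) and compare the mean predicted
    risk in each bin with the observed death rate and its 95% binomial
    confidence interval (normal approximation assumed here).

    Returns a list of (mean_predicted, observed, ci_low, ci_high) tuples.
    """
    pairs = sorted(zip(pred, died))
    size = len(pairs) // n_bins
    rows = []
    for i in range(n_bins):
        chunk = pairs[i * size:(i + 1) * size] if i < n_bins - 1 else pairs[i * size:]
        n = len(chunk)
        mean_pred = sum(p for p, _ in chunk) / n
        observed = sum(d for _, d in chunk) / n
        half = 1.96 * math.sqrt(observed * (1 - observed) / n)
        rows.append((mean_pred, observed, max(0.0, observed - half), observed + half))
    return rows
```

A rule is well calibrated when, as in Figure 2, the mean predicted risk falls inside the observed interval for nearly all bins.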

Risk Strata

We categorize the probability of 30‐day mortality into strata, with the understanding that the thresholds for defining these are a work in progress. Our hospital currently has 5 strata ranging from level 1 (highest mortality risk) to level 5 (lowest risk). The corresponding thresholds (at probabilities of death of 0.005, 0.02, 0.07, 0.20) were determined by visual inspection of the event rates and slope of curves displayed in Figure 1 of the original publication.[3]
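As a minimal sketch, the published cut points imply a mapping like the following. Whether each boundary belongs to the higher- or lower-risk stratum is not stated in the text, so inclusive lower bounds are assumed here.

```python
def risk_stratum(p_death):
    """Map a predicted probability of 30-day death to the hospital's five
    strata using the cut points reported in the text (0.005, 0.02, 0.07,
    0.20). Level 1 is the highest mortality risk, level 5 the lowest.
    Boundaries are treated as inclusive lower bounds (an assumption)."""
    for level, cut in ((1, 0.20), (2, 0.07), (3, 0.02), (4, 0.005)):
        if p_death >= cut:
            return level
    return 5
```

Under this mapping, for example, the care process that placed patients with a predicted mortality of 0.3 or greater into intensive or intermediate care would apply to stratum 1.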

Relationship to Secondary Clinical Outcomes of Interest

The choice of clinical care processes triggered per level of risk may be informed by understanding the frequency of events that increase with the mortality risk. We therefore examined the AROC from logistic regression models for the following outcomes using the prospectively generated probability as an explanatory variable: unplanned transfer to an ICU within the first 24 hours for patients not admitted to an ICU initially, ICU use at some point during the hospitalization, the development of a condition not present on admission (complication), receipt of palliative care by the end of the hospitalization, death during the hospitalization, 30‐day readmission, and death within 180 days. The definition of these outcomes and statistical approach used has been previously reported.[3]

RESULTS

Mortality predictions were generated on demand for 7291 out of 7777 (93.8%) eligible patients admitted from the emergency department, and for 2021 out of 2250 (89.8%) eligible elective surgical cases, for a total of 9312 predictions generated out of a possible 10,027 hospitalizations (92.9%). Table 1 displays the characteristics of the study population. The mean age was 65.2 years and 53.8% were women. The most common risk factors were atrial fibrillation (16.4%) and cancer (14.6%). Orders for a comfort care approach (rather than curative) were entered within 4 hours of admission for 32/9312 patients (0.34%), and 9/9312 (0.1%) were hospice patients on admission.

Table 1. Risk Factors Used in the Prediction Rule and Outcomes of Interest

Risk Factors | No. | Without Imputation | No. | With Imputation
NOTE: Data are presented as mean (standard deviation) or number (%). Abbreviations: ICU, intensive care unit.

Clinical laboratory values within preceding 30 days
Maximum serum blood urea nitrogen, mg/dL | 8,484 | 22.7 (17.7) | 9,312 | 22.3 (16.9)
Minimum hemoglobin, g/dL | 8,750 | 12.5 (2.4) | 9,312 | 12.4 (2.4)
Minimum platelet count, 1,000/UL | 8,737 | 224.1 (87.4) | 9,312 | 225.2 (84.7)
Maximum white blood count, 1,000/UL | 8,750 | 10.3 (5.8) | 9,312 | 10.3 (5.6)
Maximum serum lactate, mEq/L | 1,749 | 2.2 (1.8) | 9,312 | 0.7 (1.1)
Minimum serum albumin, g/dL | 4,057 | 3.4 (0.7) | 9,312 | 3.2 (0.5)
Minimum arterial pH | 509 | 7.36 (0.10) | 9,312 | 7.36 (0.02)
Minimum arterial pO2, mm Hg | 509 | 73.6 (25.2) | 9,312 | 98.6 (8.4)
Maximum serum troponin, ng/mL | 3,217 | 0.5 (9.3) | 9,312 | 0.2 (5.4)

Demographics and diagnoses
Age, y | 9,312 | 65.2 (17.0)
Female sex | 9,312 | 5,006 (53.8%)
Previous hospitalization within past 365 days | 9,312 | 3,995 (42.9%)
Emergent admission | 9,312 | 7,288 (78.3%)
Admitted to a medicine service | 9,312 | 5,840 (62.7%)
Current or past atrial fibrillation | 9,312 | 1,526 (16.4%)
Current or past cancer without metastases, excluding leukemia or lymphoma | 9,312 | 1,356 (14.6%)
Current or past history of leukemia or lymphoma | 9,312 | 145 (1.6%)
Current or past metastatic cancer | 9,312 | 363 (3.9%)
Current or past cognitive deficiency | 9,312 | 844 (9.1%)
Current or past history of other neurological conditions (eg, Parkinson's disease, multiple sclerosis, epilepsy, coma, stupor, brain damage) | 9,312 | 952 (10.2%)
Injury such as fractures or trauma at the time of admission | 9,312 | 656 (7.0%)
Sepsis at the time of admission | 9,312 | 406 (4.4%)
Heart failure at the time of admission | 9,312 | 776 (8.3%)
Respiratory failure on admission | 9,312 | 557 (6.0%)

Outcomes of interest
Unplanned transfer to an ICU (for those not admitted to an ICU) within 24 hours of admission | 8,377 | 86 (1.0%)
Ever in an ICU during the hospitalization | 9,312 | 1,267 (13.6%)
Development of a condition not present on admission (complication) | 9,312 | 834 (9.0%)
Within‐hospital mortality | 9,312 | 188 (2.0%)
Mortality within 30 days of admission | 9,312 | 466 (5.0%)
Mortality within 180 days of admission | 9,312 | 1,070 (11.5%)
Receipt of palliative care by the end of the hospitalization | 9,312 | 314 (3.4%)
Readmitted to the hospital within 30 days of discharge (patients alive at discharge) | 9,124 | 1,302 (14.3%)
Readmitted to the hospital within 30 days of discharge (patients alive on admission) | 9,312 | 1,302 (14.0%)

Evaluation of Prediction Accuracy

The AROC for 30‐day mortality was 0.850 (95% confidence interval [CI]: 0.833‐0.866) for prospectively collected covariates, and 0.870 (95% CI: 0.855‐0.885) for retrospectively determined risk factors. These AROCs are not substantively different from each other, demonstrating comparable prediction performance. Calibration was excellent, as indicated in Figure 2, in which the predicted level of risk lay within the 95% confidence limits of the actual 30‐day mortality for 19 out of 20 intervals of 5 percentile increments.
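The AROC has a direct probabilistic reading: it is the Mann-Whitney probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A minimal sketch of that computation:

```python
def aroc(event_scores, nonevent_scores):
    """Area under the ROC curve computed as the Mann-Whitney probability:
    the chance that a randomly chosen patient who died received a higher
    predicted risk than a randomly chosen survivor (ties count one half)."""
    concordant = 0.0
    for e in event_scores:
        for s in nonevent_scores:
            if e > s:
                concordant += 1.0
            elif e == s:
                concordant += 0.5
    return concordant / (len(event_scores) * len(nonevent_scores))
```

On this reading, the prospective AROC of 0.850 means that roughly 85% of decedent-survivor pairs were ordered correctly by the prediction rule.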

Figure 2
Calibration of the prediction rule. The horizontal axis displays intervals of 5 percentile increments of the predicted risk of dying within 30 days of admission (prospectively collected covariates). The vertical axis indicates the proportion of patients who actually died. The red dash marks represent the mean predicted mortality risk (and corresponding 95% confidence limits) for patients within the interval. The blue solid dot represents the actual proportion of patients within the interval who died, with the blue vertical hash marks indicating the 95% confidence limits for the proportion dying.

Relationship to Secondary Clinical Outcomes of Interest

The relationship between the prospectively generated probability of dying within 30 days and other events is quantified by the AROC displayed in Table 2. The 30‐day mortality risk has a strong association with the receipt of palliative care by hospital discharge, in‐hospital mortality, and 180‐day mortality, a fair association with the risk for 30‐day readmissions and unplanned transfers to intensive care, and weak associations with receipt of intensive unit care ever within the hospitalization or the development of a new diagnosis that was not present on admission (complication). The frequency of these events per mortality risk strata is shown in Table 3. The level 1 stratum contains a higher frequency of these events, whereas the level 5 stratum contains relatively few, reflecting the Pareto principle by which a relatively small proportion of patients contribute a disproportionate frequency of the events of interest.
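The Pareto pattern can be made concrete with the 30-day mortality counts from Table 3: the two highest-risk strata hold roughly a fifth of the patients but over two-thirds of the deaths.

```python
# 30-day mortality counts per risk stratum, taken from Table 3
# (level 1 = highest predicted mortality risk).
deaths = {1: 155, 2: 166, 3: 117, 4: 24, 5: 4}
cases = {1: 501, 2: 1316, 3: 2977, 4: 3350, 5: 1168}

share_patients = (cases[1] + cases[2]) / sum(cases.values())   # 1,817 / 9,312
share_deaths = (deaths[1] + deaths[2]) / sum(deaths.values())  # 321 / 466
print(f"Strata 1-2: {share_patients:.1%} of patients, {share_deaths:.1%} of 30-day deaths")
```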

Table 2. Area Under the Receiver Operating Characteristic Curve for Secondary Outcomes of Interest Associated With 30‐Day Mortality Risk
NOTE: Data are presented as Mann-Whitney statistics (95% Wald confidence limits) using the calculated probability of dying within 30 days and its logarithm as the explanatory variable. Abbreviations: ICU, intensive care unit.

In‐hospital mortality | 0.841 (0.814-0.869)
180‐day mortality | 0.836 (0.825-0.848)
Receipt of palliative care by discharge | 0.875 (0.858-0.891)
30‐day readmission (patients alive at discharge) | 0.649 (0.634-0.664)
Unplanned transfer to an ICU (for those not admitted to an ICU) within 24 hours | 0.643 (0.590-0.696)
Ever in an ICU during the hospitalization | 0.605 (0.588-0.621)
Development of a condition not present on admission (complication) | 0.555 (0.535-0.575)
Table 3. Events Occurring Within Strata Defined by Risk of 30‐Day Mortality
NOTE: Data are presented as count/cases (%). Risk stratum 1 carries the highest predicted mortality risk and stratum 5 the lowest. Abbreviations: ICU, intensive care unit.

Risk Strata | 30‐Day Mortality | Unplanned Transfer to ICU Within 24 Hours | Diagnosis Not Present on Admission (Complication) | Palliative Status at Discharge | Death in Hospital
1 | 155/501 (30.9%) | 6/358 (1.7%) | 58/501 (11.6%) | 110/501 (22.0%) | 72/501 (14.4%)
2 | 166/1,316 (12.6%) | 22/1,166 (1.9%) | 148/1,316 (11.3%) | 121/1,316 (9.2%) | 58/1,316 (4.4%)
3 | 117/2,977 (3.9%) | 35/2,701 (1.3%) | 271/2,977 (9.1%) | 75/2,977 (2.5%) | 43/2,977 (1.4%)
4 | 24/3,350 (0.7%) | 20/3,042 (0.7%) | 293/3,350 (8.8%) | 6/3,350 (0.2%) | 13/3,350 (0.4%)
5 | 4/1,168 (0.3%) | 3/1,110 (0.3%) | 64/1,168 (5.5%) | 2/1,168 (0.2%) | 2/1,168 (0.2%)
Total | 466/9,312 (5.0%) | 86/8,377 (1.0%) | 834/9,312 (9.0%) | 314/9,312 (3.4%) | 188/9,312 (2.0%)

Risk Strata | Ever in ICU | 30‐Day Readmission | Death or Readmission Within 30 Days | 180‐Day Mortality
1 | 165/501 (32.9%) | 106/429 (24.7%) | 243/501 (48.5%) | 240/501 (47.9%)
2 | 213/1,316 (16.2%) | 275/1,258 (21.9%) | 418/1,316 (31.8%) | 403/1,316 (30.6%)
3 | 412/2,977 (13.8%) | 521/2,934 (17.8%) | 612/2,977 (20.6%) | 344/2,977 (11.6%)
4 | 406/3,350 (12.1%) | 348/3,337 (10.4%) | 368/3,350 (11.0%) | 77/3,350 (2.3%)
5 | 71/1,168 (6.1%) | 52/1,166 (4.5%) | 56/1,168 (4.8%) | 6/1,168 (0.5%)
Total | 1,267/9,312 (13.6%) | 1,302/9,124 (14.3%) | 1,697/9,312 (18.2%) | 1,070/9,312 (11.5%)

DISCUSSION

Emergency physicians and surgical preparation center nurses generated predictions by the time of hospital admission for over 90% of the target population during usual workflow, without the addition of staff or resources. The discrimination of the prospectively generated predictions was very good to excellent, with an AROC of 0.850 (95% CI: 0.833‐0.866), similar to that obtained from the retrospective version. Calibration was excellent. The prospectively calculated mortality risk was associated with a number of other events. As shown in Table 3, the differing frequencies of events within the risk strata support the development of differing intensities of multidisciplinary strategies according to the level of risk.[5] Our study provides useful experience for others who anticipate generating real‐time predictions. We consider the key reasons for success to be the considerable time spent achieving consensus, the technical development of the Web application, the brief clinician time required for scoring, the leadership of the chief medical and nursing officers, and the requirement that a prediction be generated before assignment of a hospital room.

Our study has a number of limitations, some of which were noted in our original publication; although still relevant, they will not be repeated here for space considerations. First, this is a single‐site study that used a prediction rule developed at the same site, albeit on a patient population from 4 to 5 years earlier. It is not known how well the specific rule might perform in other hospital populations; any such use should therefore be accompanied by independent validation studies prior to implementation. Our successful experience should motivate such studies. Second, because the prognoses of patients scored from the emergency department are likely to be worse than those of elective surgery patients, our rule should be recalibrated for each subgroup separately. We plan to do this in the near future, as well as consider additional risk factors. Third, the other events of interest might be predicted more accurately if rules specifically developed for each were deployed. The mortality risk by itself is unlikely to be a sufficiently accurate predictor, particularly for complications and intensive care use, for reasons outlined in our original publication.[3] However, the varying levels of events within the higher versus lower strata should be noted by clinical teams as they design their team‐based processes. A follow‐up visit with a physician within a few days of discharge could address the concurrent risks of dying and readmission, for example. Finally, it is too early to determine whether the availability of mortality predictions from this rule will benefit patients.[2, 8, 10] During the study period, we implemented only 2 new care processes based on the level of risk. This paucity of interventions allowed us to evaluate the prediction accuracy with minimal additional confounding, but at the expense of not yet knowing the clinical impact of this work.
After the study period, we implemented a number of other interventions and plan on evaluating their effectiveness in the future. We are also considering an evaluation of the potential information gained by updating the predictions throughout the course of the hospitalization.[14]

In conclusion, it is feasible to have a reasonably accurate prediction of mortality risk for most adult patients at the beginning of their hospitalizations. The availability of this prognostic information provides an opportunity to develop proactive care plans for high‐ and low‐risk subsets of patients.

Acknowledgements

The authors acknowledge the technical assistance of Nehal Sanghvi and Ben Sutton in the development of the Web application and related databases, and the support of the Chief Nursing Officer, Joyce Young, RN, PhD, the emergency department medical staff, Mohammad Salameh, MD, David Vandenberg, MD, and the surgical preparation center staff.

Disclosure: Nothing to report.

References
  1. Goldman L, Caldera DL, Nussbaum SR, et al. Multifactorial index of cardiac risk in noncardiac surgical procedures. N Engl J Med. 1977;297:845-850.
  2. Stiell IG, Wells GA. Methodological standards for the development of clinical decision rules in emergency medicine. Ann Emerg Med. 1999;33:437-447.
  3. Cowen ME, Strawderman RL, Czerwinski JL, Smith MJ, Halasyamani LK. Mortality predictions on admission as a context for organizing care activities. J Hosp Med. 2013;8:229-235.
  4. Kellett J, Deane B. The simple clinical score predicts mortality for 30 days after admission to an acute medical unit. QJM. 2006;99:771-781.
  5. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real-time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22:998-1005.
  6. Burke RE, Coleman EA. Interventions to decrease hospital readmissions: keys for cost-effectiveness. JAMA Intern Med. 2013;173:695-698.
  7. Ravikumar TS, Sharma C, Marini C, et al. A validated value-based model to improve hospital-wide perioperative outcomes. Ann Surg. 2010;252:486-498.
  8. Grady D, Berkowitz SA. Why is a good clinical prediction rule so hard to find? Arch Intern Med. 2011;171:1701-1702.
  9. Escobar GJ, LaGuardia JC, Turk BJ, Ragins A, Kipnis P, Draper D. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med. 2012;7(5):388-395.
  10. Siontis GCM, Tzoulaki I, Ioannidis JPA. Predicting death: an empirical evaluation of predictive tools for mortality. Arch Intern Med. 2011;171:1721-1726.
  11. Lindenauer PK, Lagu T, Shieh M-S, Pekow PS, Rothberg MB. Association of diagnostic coding with trends in hospitalizations and mortality of patients with pneumonia, 2003-2009. JAMA. 2012;307:1405-1413.
  12. Health Level Seven International website. Available at: http://www.hl7.org/. Accessed June 21, 2014.
  13. Blume JD. Bounding sample size projections for the area under a ROC curve. J Stat Plan Inference. 2009;139:711-721.
  14. Wong J, Taljaard M, Forster AJ, Escobar GJ, van Walraven C. Derivation and validation of a model to predict daily risk of death in hospital. Med Care. 2011;49:734-743.
Issue
Journal of Hospital Medicine - 9(11)
Page Number
720-726

The systematic deployment of prediction rules within health systems remains a challenge, although such decision aids have been available for decades.[1, 2] We previously developed and validated a prediction rule for 30‐day mortality in a retrospective cohort, noting that the mortality risk is associated with a number of other clinical events.[3] These relationships suggest risk strata, defined by the predicted probability of 30‐day mortality, and could trigger a number of coordinated care processes proportional to the level of risk.[4] For example, patients within the higher‐risk strata could be considered for placement into an intermediate or intensive care unit (ICU), be monitored more closely by physician and nurse team members for clinical deterioration, be seen by a physician within a few days of hospital discharge, and be considered for advance care planning discussions.[3, 4, 5, 6, 7] Patients within the lower‐risk strata might not need the same intensity of these processes routinely unless some other indication were present.

However attractive this conceptual framework may be, its realization is dependent on the willingness of clinical staff to generate predictions consistently on a substantial portion of the patient population, and on the accuracy of the predictions when the risk factors are determined with some level of uncertainty at the beginning of the hospitalization.[2, 8] Skepticism is justified, because the work involved in completing the prediction rule might be incompatible with existing workflow. A patient might not be scored if the emergency physician lacks time or if technical issues arise with the information system and computation process.[9] There is also a generic concern that the predictions will prove to be less accurate outside of the original study population.[8, 9, 10] A more specific concern for our rule is how well present‐on‐admission diagnoses can be determined during the relatively short emergency department or presurgery evaluation period. For example, a final diagnosis of heart failure might not be established until later in the hospitalization, after the results of diagnostic testing and clinical response to treatment are known. Moreover, our retrospective prediction rule requires an assessment of the presence or absence of sepsis and respiratory failure. These diagnoses appear to be susceptible to secular trends in medical record coding practices, suggesting the rule's accuracy might not be stable over time.[11]

We report the feasibility of having emergency physicians and the surgical preparation center team generate mortality predictions before an inpatient bed is assigned. We evaluate and report the accuracy of these prospective predictions.

METHODS

The study population consisted of all patients 18 years of age or older and less than 100 years of age who were admitted from the emergency department or assigned an inpatient bed following elective surgery at a tertiary, community teaching hospital in the Midwestern United States from September 1, 2012 through February 15, 2013. Although patients entering the hospital from these 2 pathways would be expected to have different levels of mortality risk, we used the original prediction rule for both because such distinctions were not made in its derivation and validation. Patients were not considered if they were admitted for childbirth or other obstetrical reasons, or admitted directly from physician offices, the cardiac catheterization laboratory, the hemodialysis unit, or another hospital. The site institutional review board approved this study.

The implementation process began with presentations to the administrative and medical staff leadership on the accuracy of the retrospectively generated mortality predictions and risk of other adverse events.[3] The chief medical and nursing officers became project champions, secured internal funding for the technical components, and arranged to have 2 project comanagers available. A multidisciplinary task force endorsed the implementation details at biweekly meetings throughout the planning year. The leadership of the emergency department and surgical preparation center committed their colleagues to generate the predictions. The support of the emergency leadership was contingent on the completion of the entire prediction generating process in a very short time (within the time a physician could hold his/her breath). The chief medical officer, with the support of the leadership of the hospitalists and emergency physicians, made the administrative decision that a prediction must be generated prior to the assignment of a hospital room.

Current or past history of other neurological conditions (eg, Parkinson's disease, multiple sclerosis, epilepsy, coma, stupor, brain damage)9,312952 (10.2%)
Injury such as fractures or trauma at the time of admission9,312656 (7.0%)
Sepsis at the time of admission9,312406 (4.4%)
Heart failure at the time of admission9,312776 (8.3%)
Respiratory failure on admission9,312557 (6.0%)
Outcomes of interest
Unplanned transfer to an ICU (for those not admitted to an ICU) within 24 hours of admission8,37786 (1.0%)
Ever in an ICU during the hospitalization9,3121,267 (13.6%)
Development of a condition not present on admission (complication)9,312834 (9.0%)
Within hospital mortality9,312188 (2.0%)
Mortality within 30 days of admission9,312466 (5.0%)
Mortality within 180 days of admission9,3121,070 (11.5%)
Receipt of palliative care by the end of the hospitalization9,312314 (3.4%)
Readmitted to the hospital within 30 days of discharge (patients alive at discharge)9,1241,302 (14.3%)
Readmitted to the hospital within 30 days of discharge (patients alive on admission)9,3121,302 (14.0%)

Evaluation of Prediction Accuracy

The AROC for 30‐day mortality was 0.850 (95% confidence interval [CI]: 0.833‐0.866) for prospectively collected covariates, and 0.870 (95% CI: 0.855‐0.885) for retrospectively determined risk factors. These AROCs are not substantively different from each other, demonstrating comparable prediction performance. Calibration was excellent, as indicated in Figure 2, in which the predicted level of risk lay within the 95% confidence limits of the actual 30‐day mortality for 19 out of 20 intervals of 5 percentile increments.

Figure 2
Calibration of the prediction rule. The horizontal axis displays intervals of 5 percentile increments of the predicted risk of dying within 30 days of admission (prospectively collected covariates). The vertical axis indicates the proportion of patients who actually died. The red dash marks represent the mean predicted mortality risk (and corresponding 95% confidence limits) for patients within the interval. The blue solid dot represents the actual proportion of patients within the interval who died, with the blue vertical hash marks indicating the 95% confidence limits for the proportion dying.

Relationship to Secondary Clinical Outcomes of Interest

The relationship between the prospectively generated probability of dying within 30 days and other events is quantified by the AROC displayed in Table 2. The 30‐day mortality risk has a strong association with the receipt of palliative care by hospital discharge, in‐hospital mortality, and 180‐day mortality, a fair association with the risk for 30‐day readmissions and unplanned transfers to intensive care, and weak associations with receipt of intensive unit care ever within the hospitalization or the development of a new diagnosis that was not present on admission (complication). The frequency of these events per mortality risk strata is shown in Table 3. The level 1 stratum contains a higher frequency of these events, whereas the level 5 stratum contains relatively few, reflecting the Pareto principle by which a relatively small proportion of patients contribute a disproportionate frequency of the events of interest.

Area Under the Receiver Operating Characteristic Curve Secondary Outcomes of Interest Associated With 30‐Day Mortality Risk
  • NOTE: Data are presented as MannWhitney (95% Wald confidence limits) using the calculated probability of dying within 30 days and its logarithm as the explanatory variable. Abbreviations: ICU, intensive care unit.

In‐hospital mortality0.841 (0.8140.869)
180day mortality0.836 (0.8250.848)
Receipt of palliative care by discharge0.875 (0.8580.891)
30day readmission (patients alive at discharge)0.649 (0.6340.664)
Unplanned transfer to an ICU (for those not admitted to an ICU) within 24 hours0.643 (0.5900.696)
Ever in an ICU during the hospitalization0.605 (0.5880.621)
Development of a condition not present on admission (complication)0.555 (0.5350.575)
Events Occurring Within Strata Defined by Risk of 30‐Day Mortality
Risk Strata30‐Day Mortality, Count/Cases (%)Unplanned Transfers to ICU Within 24 Hours, Count/Cases (%)Diagnosis Not Present on Admission, Complication, Count/Cases (%)Palliative Status at Discharge, Count/Cases (%)Death in Hospital, Count/Cases (%)
Risk StrataEver in ICU, Count/Cases (%)30‐Day Readmission, Count/Cases (%)Death or Readmission Within 30 Days, Count/Cases (%)180‐Day Mortality, Count/Cases (%)
  • NOTE: Abbreviations: ICU, intensive care unit.

1155/501 (30.9%)6/358 (1.7%)58/501 (11.6%)110/501 (22.0%)72/501 (14.4%)
2166/1,316 (12.6%)22/1,166 (1.9%)148/1,316 (11.3%)121/1,316 (9.2%)58/1,316 (4.4%)
3117/2,977 (3.9%)35/2,701 (1.3%)271/2,977 (9.1%)75/2,977 (2.5%)43/2,977 (1.4%)
424/3,350 (0.7%)20/3,042 (0.7%)293/3,350 (8.8%)6/3,350 (0.2%)13/3,350 (0.4%)
54/1,168 (0.3%)3/1,110 (0.3%)64/1,168 (5.5%)2/1,168 (0.2%)2/1,168 (0.2%)
Total466/9,312 (5.0%)86/8,377 (1.0%)834/9,312 (9.0%)314/9,312 (3.4%)188/9,312 (2.0%)
1165/501 (32.9%)106/429 (24.7%)243/501 (48.5%)240/501 (47.9%)
2213/1,316 (16.2%)275/1,258 (21.9%)418/1,316 (31.8%)403/1,316 (30.6%)
3412/2,977 (13.8%)521/2,934 (17.8%)612/2,977 (20.6%)344/2,977 (11.6%)
4406/3,350 (12.1%)348/3,337 (10.4%)368/3,350 (11.0%)77/3,350 (2.3%)
571/1,168 (6.1%)52/1,166 (4.5%)56/1,168 (4.8%)6/1,168 (0.5%)
Total1,267/9,312 (13.6%)1,302/9,124 (14.3%)1,697/9,312 (18.2%)1,070/9,312 (11.5%)

DISCUSSION

Emergency physicians and surgical preparation center nurses generated predictions by the time of hospital admission for over 90% of the target population during usual workflow, without the addition of staff or resources. The discrimination of the prospectively generated predictions was very good to excellent, with an AROC of 0.850 (95% CI: 0.833‐0.866), similar to that obtained from the retrospective version. Calibration was excellent. The prospectively calculated mortality risk was associated with a number of other events. As shown in Table 3, the differing frequency of events within the risk strata support the development of differing intensities of multidisciplinary strategies according to the level of risk.[5] Our study provides useful experience for others who anticipate generating real‐time predictions. We consider the key reasons for success to be the considerable time spent achieving consensus, the technical development of the Web application, the brief clinician time required for the scoring process, the leadership of the chief medical and nursing officers, and the requirement that a prediction be generated before assignment of a hospital room.

Our study has a number of limitations, some of which were noted in our original publication, and although still relevant, will not be repeated here for space considerations. This is a single‐site study that used a prediction rule developed by the same site, albeit on a patient population 4 to 5 years earlier. It is not known how well the specific rule might perform in other hospital populations; any such use should therefore be accompanied by independent validation studies prior to implementation. Our successful experience should motivate future validation studies. Second, because the prognoses of patients scored from the emergency department are likely to be worse than those of elective surgery patients, our rule should be recalibrated for each subgroup separately. We plan to do this in the near future, as well as consider additional risk factors. Third, the other events of interest might be predicted more accurately if rules specifically developed for each were deployed. The mortality risk by itself is unlikely to be a sufficiently accurate predictor, particularly for complications and intensive care use, for reasons outlined in our original publication.[3] However, the varying levels of events within the higher versus lower strata should be noted by the clinical team as they design their team‐based processes. A follow‐up visit with a physician within a few days of discharge could address the concurrent risk of dying as well as readmission, for example. Finally, it is too early to determine if the availability of mortality predictions from this rule will benefit patients.[2, 8, 10] During the study period, we implemented only 2 new care processes based on the level of risk. This lack of interventions allowed us to evaluate the prediction accuracy with minimal additional confounding, but at the expense of not yet knowing the clinical impact of this work. 
The systematic deployment of prediction rules within health systems remains a challenge, although such decision aids have been available for decades.[1, 2] We previously developed and validated a prediction rule for 30‐day mortality in a retrospective cohort, noting that the mortality risk is associated with a number of other clinical events.[3] These relationships suggest risk strata, defined by the predicted probability of 30‐day mortality, and could trigger a number of coordinated care processes proportional to the level of risk.[4] For example, patients within the higher‐risk strata could be considered for placement into an intermediate or intensive care unit (ICU), be monitored more closely by physician and nurse team members for clinical deterioration, be seen by a physician within a few days of hospital discharge, and be considered for advance care planning discussions.[3, 4, 5, 6, 7] Patients within the lower‐risk strata might not need the same intensity of these processes routinely unless some other indication were present.

However attractive this conceptual framework may be, its realization depends on the willingness of clinical staff to generate predictions consistently on a substantial portion of the patient population, and on the accuracy of the predictions when the risk factors are determined with some level of uncertainty at the beginning of the hospitalization.[2, 8] Skepticism is justified because the work involved in completing the prediction rule might be incompatible with existing workflow. A patient might not be scored if the emergency physician lacks time or if technical issues arise with the information system and computation process.[9] There is also a generic concern that the predictions will prove less accurate outside of the original study population.[8, 9, 10] A more specific concern for our rule is how well present‐on‐admission diagnoses can be determined during the relatively short emergency department or presurgery evaluation period. For example, a final diagnosis of heart failure might not be established until later in the hospitalization, after the results of diagnostic testing and clinical response to treatment are known. Moreover, our retrospective prediction rule requires an assessment of the presence or absence of sepsis and respiratory failure. These diagnoses appear susceptible to secular trends in medical record coding practices, suggesting the rule's accuracy might not be stable over time.[11]

We report the feasibility of having emergency physicians and the surgical preparation center team generate mortality predictions before an inpatient bed is assigned. We evaluate and report the accuracy of these prospective predictions.

METHODS

The study population consisted of all patients at least 18 and less than 100 years of age who were admitted from the emergency department or assigned an inpatient bed following elective surgery at a tertiary, community teaching hospital in the Midwestern United States from September 1, 2012 through February 15, 2013. Although patients entering the hospital through these 2 pathways would be expected to have different levels of mortality risk, we used the original prediction rule for both because such distinctions were not made in its derivation and validation. Patients were excluded if they were admitted for childbirth or other obstetrical reasons, or admitted directly from physician offices, the cardiac catheterization laboratory, the hemodialysis unit, or another hospital. The site institutional review board approved this study.

The implementation process began with presentations to the administrative and medical staff leadership on the accuracy of the retrospectively generated mortality predictions and risk of other adverse events.[3] The chief medical and nursing officers became project champions, secured internal funding for the technical components, and arranged to have 2 project comanagers available. A multidisciplinary task force endorsed the implementation details at biweekly meetings throughout the planning year. The leadership of the emergency department and surgical preparation center committed their colleagues to generating the predictions. The support of the emergency leadership was contingent on the completion of the entire prediction‐generating process in a very short time (within the time a physician could hold his/her breath). The chief medical officer, with the support of the leadership of the hospitalists and emergency physicians, made the administrative decision that a prediction must be generated prior to the assignment of a hospital room.

During the consensus‐building phase, a Web‐based application was developed to generate the predictions. Emergency physicians and surgical preparation staff were trained on the definitions of the risk factors (see Supporting Information, Appendix, in the online version of this article) and how to use the Web application. Three supporting databases were created. Each midnight, a past medical history database was updated, identifying those who had been discharged from the study hospital in the previous 365 days, and whether or not their diagnoses included atrial fibrillation, leukemia/lymphoma, metastatic cancer, cancer other than leukemia or lymphoma, cognitive disorder, or other neurological conditions (eg, Parkinson's disease, multiple sclerosis, epilepsy, coma, and stupor). Similarly, a clinical laboratory results database was created and updated in real time through an HL7 (Health Level Seven, a standard data exchange format[12]) interface with the laboratory information system for the following tests performed in the preceding 30 days at a hospital‐affiliated facility: hemoglobin, platelet count, white blood count, serum troponin, blood urea nitrogen, serum albumin, serum lactate, arterial pH, and arterial partial pressure of oxygen. The third database, admission‐discharge‐transfer, was created and updated every 15 minutes to identify patients currently in the emergency room or scheduled for surgery. When a patient registration event was added to this database, the Web application created a record, retrieved all relevant data, and displayed the patient name for scoring. When the decision for hospitalization was made, the clinician selected the patient's name and reviewed the pre‐populated medical diagnoses of interest, which could be overwritten based on his/her own assessment (Figure 1A,B).
The clinician then indicated (yes, no, or unknown) if the patient currently had or was being treated for each of the following: injury, heart failure, sepsis, respiratory failure, and whether or not the admitting service would be medicine (ie, nonsurgical, nonobstetrical). We considered unknown status to indicate the patient did not have the condition. When laboratory values were not available, a normal value was imputed using a previously developed algorithm.[3] Two additional questions, not used in the current prediction process, were answered to provide data for a future analysis: 1 concerning the change in the patient's condition while in the emergency department and the other concerning the presence of abnormal vital signs. The probability of 30‐day mortality was calculated via the Web application using the risk information supplied and the scoring weights (ie, parameter estimates) provided in the Appendices of our original publication.[3] Predictions were updated every minute as new laboratory values became available, and flagged with an alert if a more severe score resulted.
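The scoring step described above amounts to a logistic model with fixed parameter estimates plus normal‐value imputation for missing laboratory results. A minimal sketch follows; the weights, intercept, variable names, and imputation defaults are hypothetical placeholders for illustration only, not the published parameter estimates (those appear in the Appendices of the original publication[3]).

```python
import math

# Hypothetical imputation defaults: a "normal" value is substituted when a
# laboratory result is unavailable, mirroring the step described above.
NORMAL_DEFAULTS = {"serum_lactate": 0.5, "serum_albumin": 3.5}

def predict_30_day_mortality(risk_factors, weights, intercept):
    """Logistic prediction: p = 1 / (1 + exp(-(intercept + sum of w*x))).

    risk_factors maps variable names to observed values (missing labs are
    simply absent); weights and intercept are placeholder estimates.
    """
    xb = intercept
    for name, weight in weights.items():
        value = risk_factors.get(name)
        if value is None:  # missing lab -> impute an assumed normal value
            value = NORMAL_DEFAULTS.get(name, 0.0)
        xb += weight * value
    return 1.0 / (1.0 + math.exp(-xb))
```

Under this scheme a patient with no lactate drawn is scored as if the value were normal, so an elevated measured lactate raises the predicted risk relative to no measurement at all.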

Figure 1
Screen shots of the Web application used to generate predictions (A) Patient list. The clinician in the emergency department or surgical preparation center selects the patient to be scored. (B) Diagnosis‐based risk factors to be entered. The clinician provides an answer to each question and/or reviews information that has been prepopulated from the past medical history database. Clinical laboratory values and demographic information are electronically provided. After the diagnosis information has been supplied, the clinician presses the “Generate Score” button to obtain the predicted 30‐day mortality.

For the analyses of this study, the last prospective prediction viewed by emergency department personnel, a hospital bed manager, or surgical suite staff prior to arrival on the nursing unit is the one referenced as prospective. Once the patient had been discharged from the hospital, we generated a second mortality prediction based on previously published parameter estimates applied to risk factor data ascertained retrospectively as was done in the original article[3]; we subsequently refer to this prediction as retrospective. We will report on the group of patients who had both prospective and retrospective scores (1 patient had a prospective but not retrospective score available).

The prediction scores were made available to the clinical teams gradually during the study period. All scores were viewable by the midpoint of the study for emergency department admissions and near the end of the study for elective‐surgery patients. Only 2 changes in care processes based on level of risk were introduced during the study period. The first required initial placement of patients having a probability of dying of 0.3 or greater into an intensive or intermediate care unit unless the patient or family requested a less aggressive approach. The second occurred in the final 2 months of the study when a large multispecialty practice began routinely arranging for high‐risk patients to be seen within 3 or 7 days of hospital discharge.

Statistical Analyses

SAS version 9.3 (SAS Institute Inc., Cary, NC) was used to build the datasets and perform the analyses. Feasibility was evaluated by the number of patients who were candidates for prospective scoring with a score available at the time of admission. The validity was assessed with the primary outcome of death within 30 days from the date of hospital admission, as determined from hospital administrative data and the Social Security Death Index. The primary statistical metric is the area under the receiver operating characteristic curve (AROC) and the corresponding 95% Wald confidence limits. We needed some context for understanding the performance of the prospective predictions, assuming the accuracy could deteriorate due to the instability of the prediction rule over time and/or due to imperfect clinical information at the time the risk factors were determined. Accordingly, we also calculated an AROC based on retrospectively derived covariates (but using the same set of parameter estimates) as done in our original publication so we could gauge the stability of the original prediction rule. However, the motivation was not to determine whether retrospective versus prospective predictions were more accurate, given that only prospective predictions are useful in the context of developing real‐time care processes. Rather, we wanted to know if the prospective predictions would be sufficiently accurate for use in clinical practice. A priori, we assumed the prospective predictions should have an AROC of approximately 0.80. 
Therefore, a target sample size of 8660 hospitalizations was determined to be adequate to assess validity, assuming a 30‐day mortality rate of 5%, a desired lower 95% confidence boundary for the area under the prospective curve at or above 0.80, with a total confidence interval width of 0.07.[13] Calibration was assessed by comparing the actual proportion of patients dying (with 95% binomial confidence intervals) with the mean predicted mortality level within 5 percentile increments of predicted risk.
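The AROC used here has a direct Mann‐Whitney interpretation: it is the probability that a randomly chosen patient who died within 30 days received a higher predicted risk than a randomly chosen survivor. The study's analyses were performed in SAS 9.3; the sketch below merely illustrates that statistic and is not the authors' code.

```python
def area_under_roc(y_true, y_score):
    """AROC as the Mann-Whitney statistic: the fraction of (death, survivor)
    pairs in which the death received the higher predicted risk, with ties
    counted as 0.5. y_true holds 0/1 outcomes, y_score predicted risks."""
    deaths = [s for y, s in zip(y_true, y_score) if y == 1]
    survivors = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if d > s else 0.5 if d == s else 0.0
               for d in deaths for s in survivors)
    return wins / (len(deaths) * len(survivors))
```

Calibration can be checked the same way the authors describe: sort patients into 5 percentile bins of predicted risk and compare the mean predicted risk with the observed death rate in each bin.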

Risk Strata

We categorize the probability of 30‐day mortality into strata, with the understanding that the thresholds for defining these are a work in progress. Our hospital currently has 5 strata ranging from level 1 (highest mortality risk) to level 5 (lowest risk). The corresponding thresholds (at probabilities of death of 0.005, 0.02, 0.07, 0.20) were determined by visual inspection of the event rates and slope of curves displayed in Figure 1 of the original publication.[3]
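Given these four thresholds, stratum assignment is a simple cutpoint lookup. The text does not state whether a probability exactly equal to a threshold falls in the higher‐ or lower‐risk stratum; the sketch below assumes the higher‐risk side.

```python
def risk_stratum(p_death_30_day):
    """Map a predicted 30-day mortality probability to the hospital's
    5 strata (level 1 = highest risk), using the thresholds 0.005, 0.02,
    0.07, and 0.20 reported above. Boundary inclusivity is assumed."""
    if p_death_30_day >= 0.20:
        return 1
    if p_death_30_day >= 0.07:
        return 2
    if p_death_30_day >= 0.02:
        return 3
    if p_death_30_day >= 0.005:
        return 4
    return 5
```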

Relationship to Secondary Clinical Outcomes of Interest

The choice of clinical care processes triggered per level of risk may be informed by understanding the frequency of events that increase with the mortality risk. We therefore examined the AROC from logistic regression models for the following outcomes using the prospectively generated probability as an explanatory variable: unplanned transfer to an ICU within the first 24 hours for patients not admitted to an ICU initially, ICU use at some point during the hospitalization, the development of a condition not present on admission (complication), receipt of palliative care by the end of the hospitalization, death during the hospitalization, 30‐day readmission, and death within 180 days. The definition of these outcomes and statistical approach used has been previously reported.[3]

RESULTS

Mortality predictions were generated on demand for 7291 out of 7777 (93.8%) eligible patients admitted from the emergency department, and for 2021 out of 2250 (89.8%) eligible elective surgical cases, for a total of 9312 predictions generated out of a possible 10,027 hospitalizations (92.9%). Table 1 displays the characteristics of the study population. The mean age was 65.2 years and 53.8% were women. The most common risk factors were atrial fibrillation (16.4%) and cancer (14.6%). Orders for a comfort care approach (rather than curative) were entered within 4 hours of admission for 32/9312 patients (0.34%), and 9/9312 (0.1%) were hospice patients on admission.

Table 1. Risk Factors Used in the Prediction Rule and Outcomes of Interest

NOTE: Data are presented as mean (standard deviation) or number (%). Abbreviations: ICU, intensive care unit.

Risk Factors | No. | Without Imputation | No. | With Imputation
Clinical laboratory values within preceding 30 days | | | |
Maximum serum blood urea nitrogen, mg/dL | 8,484 | 22.7 (17.7) | 9,312 | 22.3 (16.9)
Minimum hemoglobin, g/dL | 8,750 | 12.5 (2.4) | 9,312 | 12.4 (2.4)
Minimum platelet count, 1,000/μL | 8,737 | 224.1 (87.4) | 9,312 | 225.2 (84.7)
Maximum white blood count, 1,000/μL | 8,750 | 10.3 (5.8) | 9,312 | 10.3 (5.6)
Maximum serum lactate, mEq/L | 1,749 | 2.2 (1.8) | 9,312 | 0.7 (1.1)
Minimum serum albumin, g/dL | 4,057 | 3.4 (0.7) | 9,312 | 3.2 (0.5)
Minimum arterial pH | 509 | 7.36 (0.10) | 9,312 | 7.36 (0.02)
Minimum arterial pO2, mm Hg | 509 | 73.6 (25.2) | 9,312 | 98.6 (8.4)
Maximum serum troponin, ng/mL | 3,217 | 0.5 (9.3) | 9,312 | 0.2 (5.4)

Demographics and diagnoses | No. | Value
Age, y | 9,312 | 65.2 (17.0)
Female sex | 9,312 | 5,006 (53.8%)
Previous hospitalization within past 365 days | 9,312 | 3,995 (42.9%)
Emergent admission | 9,312 | 7,288 (78.3%)
Admitted to a medicine service | 9,312 | 5,840 (62.7%)
Current or past atrial fibrillation | 9,312 | 1,526 (16.4%)
Current or past cancer without metastases, excluding leukemia or lymphoma | 9,312 | 1,356 (14.6%)
Current or past history of leukemia or lymphoma | 9,312 | 145 (1.6%)
Current or past metastatic cancer | 9,312 | 363 (3.9%)
Current or past cognitive deficiency | 9,312 | 844 (9.1%)
Current or past history of other neurological conditions (eg, Parkinson's disease, multiple sclerosis, epilepsy, coma, stupor, brain damage) | 9,312 | 952 (10.2%)
Injury such as fractures or trauma at the time of admission | 9,312 | 656 (7.0%)
Sepsis at the time of admission | 9,312 | 406 (4.4%)
Heart failure at the time of admission | 9,312 | 776 (8.3%)
Respiratory failure on admission | 9,312 | 557 (6.0%)

Outcomes of interest | No. | Value
Unplanned transfer to an ICU (for those not admitted to an ICU) within 24 hours of admission | 8,377 | 86 (1.0%)
Ever in an ICU during the hospitalization | 9,312 | 1,267 (13.6%)
Development of a condition not present on admission (complication) | 9,312 | 834 (9.0%)
In‐hospital mortality | 9,312 | 188 (2.0%)
Mortality within 30 days of admission | 9,312 | 466 (5.0%)
Mortality within 180 days of admission | 9,312 | 1,070 (11.5%)
Receipt of palliative care by the end of the hospitalization | 9,312 | 314 (3.4%)
Readmitted to the hospital within 30 days of discharge (patients alive at discharge) | 9,124 | 1,302 (14.3%)
Readmitted to the hospital within 30 days of discharge (patients alive on admission) | 9,312 | 1,302 (14.0%)

Evaluation of Prediction Accuracy

The AROC for 30‐day mortality was 0.850 (95% confidence interval [CI]: 0.833‐0.866) for prospectively collected covariates, and 0.870 (95% CI: 0.855‐0.885) for retrospectively determined risk factors. These AROCs are not substantively different from each other, demonstrating comparable prediction performance. Calibration was excellent, as indicated in Figure 2, in which the predicted level of risk lay within the 95% confidence limits of the actual 30‐day mortality for 19 out of 20 intervals of 5 percentile increments.

Figure 2
Calibration of the prediction rule. The horizontal axis displays intervals of 5 percentile increments of the predicted risk of dying within 30 days of admission (prospectively collected covariates). The vertical axis indicates the proportion of patients who actually died. The red dash marks represent the mean predicted mortality risk (and corresponding 95% confidence limits) for patients within the interval. The blue solid dot represents the actual proportion of patients within the interval who died, with the blue vertical hash marks indicating the 95% confidence limits for the proportion dying.

Relationship to Secondary Clinical Outcomes of Interest

The relationship between the prospectively generated probability of dying within 30 days and the other events of interest is quantified by the AROCs displayed in Table 2. The 30‐day mortality risk was strongly associated with the receipt of palliative care by hospital discharge, in‐hospital mortality, and 180‐day mortality; fairly associated with 30‐day readmission and unplanned transfer to intensive care; and only weakly associated with ICU use at any point during the hospitalization and with the development of a condition not present on admission (complication). The frequency of these events per mortality risk stratum is shown in Table 3. Events cluster in the level 1 stratum and are relatively infrequent in the level 5 stratum, reflecting the Pareto principle by which a relatively small proportion of patients contributes a disproportionate share of the events of interest: the 2 highest‐risk strata contained 19.5% of patients (1,817/9,312) but 68.9% of the 30‐day deaths (321/466).

Table 2. Area Under the Receiver Operating Characteristic Curve for Secondary Outcomes of Interest Associated With the 30‐Day Mortality Risk

NOTE: Data are presented as the Mann‐Whitney statistic (95% Wald confidence limits) using the calculated probability of dying within 30 days and its logarithm as the explanatory variable. Abbreviations: ICU, intensive care unit.

Outcome | AROC (95% CI)
In‐hospital mortality | 0.841 (0.814‐0.869)
180‐day mortality | 0.836 (0.825‐0.848)
Receipt of palliative care by discharge | 0.875 (0.858‐0.891)
30‐day readmission (patients alive at discharge) | 0.649 (0.634‐0.664)
Unplanned transfer to an ICU (for those not admitted to an ICU) within 24 hours | 0.643 (0.590‐0.696)
Ever in an ICU during the hospitalization | 0.605 (0.588‐0.621)
Development of a condition not present on admission (complication) | 0.555 (0.535‐0.575)
Table 3. Events Occurring Within Strata Defined by Risk of 30‐Day Mortality

NOTE: Data are presented as count/cases (%). Abbreviations: ICU, intensive care unit.

Risk Stratum | 30‐Day Mortality | Unplanned Transfer to ICU Within 24 Hours | Diagnosis Not Present on Admission (Complication) | Palliative Status at Discharge | Death in Hospital
1 | 155/501 (30.9%) | 6/358 (1.7%) | 58/501 (11.6%) | 110/501 (22.0%) | 72/501 (14.4%)
2 | 166/1,316 (12.6%) | 22/1,166 (1.9%) | 148/1,316 (11.3%) | 121/1,316 (9.2%) | 58/1,316 (4.4%)
3 | 117/2,977 (3.9%) | 35/2,701 (1.3%) | 271/2,977 (9.1%) | 75/2,977 (2.5%) | 43/2,977 (1.4%)
4 | 24/3,350 (0.7%) | 20/3,042 (0.7%) | 293/3,350 (8.8%) | 6/3,350 (0.2%) | 13/3,350 (0.4%)
5 | 4/1,168 (0.3%) | 3/1,110 (0.3%) | 64/1,168 (5.5%) | 2/1,168 (0.2%) | 2/1,168 (0.2%)
Total | 466/9,312 (5.0%) | 86/8,377 (1.0%) | 834/9,312 (9.0%) | 314/9,312 (3.4%) | 188/9,312 (2.0%)

Risk Stratum | Ever in ICU | 30‐Day Readmission | Death or Readmission Within 30 Days | 180‐Day Mortality
1 | 165/501 (32.9%) | 106/429 (24.7%) | 243/501 (48.5%) | 240/501 (47.9%)
2 | 213/1,316 (16.2%) | 275/1,258 (21.9%) | 418/1,316 (31.8%) | 403/1,316 (30.6%)
3 | 412/2,977 (13.8%) | 521/2,934 (17.8%) | 612/2,977 (20.6%) | 344/2,977 (11.6%)
4 | 406/3,350 (12.1%) | 348/3,337 (10.4%) | 368/3,350 (11.0%) | 77/3,350 (2.3%)
5 | 71/1,168 (6.1%) | 52/1,166 (4.5%) | 56/1,168 (4.8%) | 6/1,168 (0.5%)
Total | 1,267/9,312 (13.6%) | 1,302/9,124 (14.3%) | 1,697/9,312 (18.2%) | 1,070/9,312 (11.5%)

DISCUSSION

Emergency physicians and surgical preparation center nurses generated predictions by the time of hospital admission for over 90% of the target population during usual workflow, without additional staff or resources. The discrimination of the prospectively generated predictions was very good to excellent, with an AROC of 0.850 (95% CI: 0.833-0.866), similar to that obtained from the retrospective version. Calibration was excellent. The prospectively calculated mortality risk was associated with a number of other events. As shown in Table 3, the differing frequency of events within the risk strata supports the development of multidisciplinary strategies whose intensity varies with the level of risk.[5] Our study provides useful experience for others who anticipate generating real‐time predictions. We consider the key reasons for success to be the considerable time spent achieving consensus, the technical development of the Web application, the brief clinician time required for the scoring process, the leadership of the chief medical and nursing officers, and the requirement that a prediction be generated before assignment of a hospital room.

Our study has a number of limitations, some of which were noted in our original publication and, although still relevant, are not repeated here for space considerations. First, this is a single‐site study that used a prediction rule developed at the same site, albeit on a patient population from 4 to 5 years earlier. It is not known how well the specific rule might perform in other hospital populations; any such use should therefore be accompanied by independent validation studies prior to implementation. Our successful experience should motivate future validation studies. Second, because the prognoses of patients scored from the emergency department are likely to be worse than those of elective surgery patients, our rule should be recalibrated for each subgroup separately. We plan to do this in the near future, as well as consider additional risk factors. Third, the other events of interest might be predicted more accurately if rules specifically developed for each were deployed. The mortality risk by itself is unlikely to be a sufficiently accurate predictor, particularly for complications and intensive care use, for reasons outlined in our original publication.[3] However, the varying frequency of events within the higher versus lower strata should be noted by the clinical team as they design their team‐based processes. For example, a follow‐up visit with a physician within a few days of discharge could address the concurrent risks of dying and readmission. Finally, it is too early to determine whether the availability of mortality predictions from this rule will benefit patients.[2, 8, 10] During the study period, we implemented only 2 new care processes based on the level of risk. This lack of interventions allowed us to evaluate the prediction accuracy with minimal additional confounding, but at the expense of not yet knowing the clinical impact of this work.
After the study period, we implemented a number of other interventions and plan on evaluating their effectiveness in the future. We are also considering an evaluation of the potential information gained by updating the predictions throughout the course of the hospitalization.[14]

In conclusion, it is feasible to have a reasonably accurate prediction of mortality risk for most adult patients at the beginning of their hospitalizations. The availability of this prognostic information provides an opportunity to develop proactive care plans for high‐ and low‐risk subsets of patients.

Acknowledgements

The authors acknowledge the technical assistance of Nehal Sanghvi and Ben Sutton in the development of the Web application and related databases, and the support of the Chief Nursing Officer, Joyce Young, RN, PhD, the emergency department medical staff, Mohammad Salameh, MD, David Vandenberg, MD, and the surgical preparation center staff.

Disclosure: Nothing to report.

References
  1. Goldman L, Caldera DL, Nussbaum SR, et al. Multifactorial index of cardiac risk in noncardiac surgical procedures. N Engl J Med. 1977;297:845–850.
  2. Stiell IG, Wells GA. Methodological standards for the development of clinical decision rules in emergency medicine. Ann Emerg Med. 1999;33:437–447.
  3. Cowen ME, Strawderman RL, Czerwinski JL, Smith MJ, Halasyamani LK. Mortality predictions on admission as a context for organizing care activities. J Hosp Med. 2013;8:229–235.
  4. Kellett J, Deane B. The simple clinical score predicts mortality for 30 days after admission to an acute medical unit. QJM. 2006;99:771–781.
  5. Amarasingham R, Patel PC, Toto K, et al. Allocating scarce resources in real‐time to reduce heart failure readmissions: a prospective, controlled study. BMJ Qual Saf. 2013;22:998–1005.
  6. Burke RE, Coleman EA. Interventions to decrease hospital readmissions: keys for cost‐effectiveness. JAMA Intern Med. 2013;173:695–698.
  7. Ravikumar TS, Sharma C, Marini C, et al. A validated value‐based model to improve hospital‐wide perioperative outcomes. Ann Surg. 2010;252:486–498.
  8. Grady D, Berkowitz SA. Why is a good clinical prediction rule so hard to find? Arch Intern Med. 2011;171:1701–1702.
  9. Escobar GJ, LaGuardia JC, Turk BJ, Ragins A, Kipnis P, Draper D. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med. 2012;7(5):388–395.
  10. Siontis GCM, Tzoulaki I, Ioannidis JPA. Predicting death: an empirical evaluation of predictive tools for mortality. Arch Intern Med. 2011;171:1721–1726.
  11. Lindenauer PK, Lagu T, Shieh M‐S, Pekow PS, Rothberg MB. Association of diagnostic coding with trends in hospitalizations and mortality of patients with pneumonia, 2003–2009. JAMA. 2012;307:1405–1413.
  12. Health Level Seven International website. Available at: http://www.hl7.org/. Accessed June 21, 2014.
  13. Blume JD. Bounding sample size projections for the area under a ROC curve. J Stat Plan Inference. 2009;139:711–721.
  14. Wong J, Taljaard M, Forster AJ, Escobar GJ, van Walraven C. Derivation and validation of a model to predict daily risk of death in hospital. Med Care. 2011;49:734–743.
Issue
Journal of Hospital Medicine - 9(11)
Page Number
720-726
Display Headline
Implementation of a mortality prediction rule for real‐time decision making: Feasibility and validity
Article Source

© 2014 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Mark E. Cowen, MD, Quality Institute, St. Joseph Mercy Hospital, 5333 McAuley Drive, Suite 3112, Ypsilanti, MI 48197; Telephone: 734‐712‐8776; Fax: 734‐712‐8651; E‐mail: cowenm@trinity-health.org

Prediction Mortality and Adverse Events

Display Headline
Mortality predictions on admission as a context for organizing care activities

Favorable health outcomes are more likely to occur when the healthcare team quickly identifies and responds to patients at risk.[1, 2, 3] However, the treatment process can break down during handoffs if the clinical condition and active issues are not well communicated.[4] Patients whose decline cannot be reversed also challenge the health team. Many are referred to hospice late,[5] and some do not receive the type of end‐of‐life care matching their preferences.[6]

Progress toward the elusive goal of more effective and efficient care might be made via an industrial engineering approach, mass customization, in which bundles of services are delivered based on the anticipated needs of subsets of patients.[7, 8] An underlying rationale is the frequent finding that a small proportion of individuals experiences the majority of the events of interest, commonly referenced as the Pareto principle.[7] Clinical prediction rules can help identify these high‐risk subsets.[9] However, as more condition‐specific rules become available, the clinical team faces logistical challenges when attempting to incorporate these into practice. For example, which team member will be responsible for generating the prediction and communicating the level of risk? What actions should follow for a given level of risk? What should be done for patients with conditions not addressed by an existing rule?

In this study, we present our rationale for health systems to implement a process for generating mortality predictions at the time of admission on most, if not all, adult patients as a context for the activities of the various clinical team members. Recent studies demonstrate that in‐hospital or 30‐day mortality can be predicted with substantial accuracy using information available at the time of admission.[10, 11, 12, 13, 14, 15, 16, 17, 18, 19] Relationships are beginning to be explored among the risk factors for mortality and other outcomes such as length of stay, unplanned transfers to intensive care units, 30‐day readmissions, and extended care facility placement.[10, 20, 21, 22] We extend this work by examining how a number of adverse events can be understood through their relationship with the risk of dying. We begin by deriving and validating a new mortality prediction rule using information feasible for our institution to use in its implementation.

METHODS

The prediction rule was derived from data on all inpatients (n = 56,003) 18 to 99 years old from St. Joseph Mercy Hospital, Ann Arbor from 2008 to 2009. This is a community‐based, tertiary‐care center. We reference derivation cases as D1, validation cases from the same hospital in the following year (2010) as V1, and data from a second hospital in 2010 as V2. The V2 hospital belonged to the same parent health corporation and shared some physician specialists with D1 and V1 but had separate medical and nursing staff.

The primary outcome predicted is 30‐day mortality from the time of admission. We chose 30‐day rather than in‐hospital mortality to address concerns of potential confounding between duration of hospital stay and likelihood of dying in the hospital.[23] Risk factors were considered for inclusion in the prediction rule based on their prevalence, conceptual relevance, and univariable association with death (details provided in the Supporting Information, Appendix I and II, in the online version of this article). The types of risk factors considered were patient diagnoses as of the time of admission, obtained from hospital administrative data and grouped by the 2011 Clinical Classification Software (http://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp#download, accessed June 6, 2012); administrative data from previous hospitalizations within the health system in the preceding 12 months; and the worst value of clinical laboratory blood tests obtained within 30 days prior to the time of admission. When a given patient had missing values for the laboratory tests of interest, we imputed a normal value, assuming the clinician had not ordered these tests because he/she expected the patient would have normal results. The imputed normal values were derived from available results from patients discharged alive with short hospital stays (≤3 days) in 2007 to 2008. The datasets were built and analyzed using SAS versions 9.1 and 9.2 (SAS Institute, Inc., Cary, NC) and R (R Foundation for Statistical Computing, Vienna, Austria; http://www.R‐project.org).
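The imputation step described above can be sketched in a few lines (in Python here, rather than the SAS/R used in the study). The "normal" reference values below are illustrative placeholders, not the values actually derived from the 2007 to 2008 short-stay patients:

```python
# Assumed-normal defaults for unordered tests (illustrative values only).
NORMAL_DEFAULTS = {
    "bun_mg_dl": 14.0,          # blood urea nitrogen
    "wbc_1000_ul": 7.5,         # white blood cell count
    "platelets_1000_ul": 250.0, # platelet count
    "hemoglobin_g_dl": 14.0,    # hemoglobin
}

def impute_missing_labs(observed):
    """Return a complete lab panel, filling unmeasured tests with the
    assumed-normal defaults (rationale from the text: an unordered test
    suggests the clinician expected a normal result)."""
    labs = dict(NORMAL_DEFAULTS)
    labs.update({k: v for k, v in observed.items() if v is not None})
    return labs
```

A patient with only a BUN on file would keep the observed BUN and receive the default values for the remaining tests.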

Prediction Rule Derivation Using D1 Dataset

Random forest procedures with a variety of variable importance measures were used with D1 data to reduce the number of potential predictor variables.[24] Model‐based recursive partitioning, a technique that combines features of multivariable logistic regression and classification and regression trees, was then used to develop the multivariable prediction model.[25, 26] Model building was done in R, employing functions provided as part of the randomForest and party packages. The final prediction rule consisted of 4 multivariable logistic regression models, each specific to 1 of 4 possible population subgroups: females with and without previous hospitalizations, and males with and without previous hospitalizations. Each logistic regression model contains exactly the same predictor variables; however, the regression coefficients are subgroup specific. Therefore, the predicted probability of 30‐day mortality for a patient with a given set of predictor variables depends on the subgroup of which the patient is a member.
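The structure of the final rule, four logistic models that share predictors but carry subgroup-specific coefficients, can be sketched as follows. The coefficients and the two predictors shown are invented for illustration; the actual scoring weights appear in the online Appendix I:

```python
import math

# One set of coefficients per (sex, previous-hospitalization) subgroup.
# All four models use the same predictors; only the weights differ.
COEFFS = {
    ("female", True):  {"intercept": -6.0, "age": 0.040, "max_bun": 0.020},
    ("female", False): {"intercept": -6.5, "age": 0.045, "max_bun": 0.018},
    ("male", True):    {"intercept": -5.8, "age": 0.042, "max_bun": 0.021},
    ("male", False):   {"intercept": -6.3, "age": 0.047, "max_bun": 0.019},
}

def predict_30day_mortality(sex, previously_hospitalized, features):
    """Predicted probability of death within 30 days of admission."""
    beta = COEFFS[(sex, previously_hospitalized)]
    logit = beta["intercept"] + sum(
        beta[name] * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-logit))
```

The same feature vector scored under a different subgroup key yields a different probability, which is the point of the subgroup-specific design.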

Validation, Discrimination, Calibration

The prediction rule was validated by generating a predicted probability of 30‐day mortality for each patient in V1 and V2, using their observed risk factor information combined with the scoring weights (ie, regression coefficients) derived from D1, then comparing predicted vs actual outcomes. Discriminatory accuracy is reported as the area under the receiver operating characteristic (ROC) curve that can range from 0.5 indicating pure chance, to 1.0 or perfect prediction.[27] Values above 0.8 are often interpreted as indicating strong predictive relationships, values between 0.7 and 0.79 as modest, and values between 0.6 and 0.69 as weak.[28] Model calibration was tested in all datasets across 20 intervals representing the spectrum of mortality risk, by assessing whether or not the 95% confidence limits for the actual proportion of patients dying encompassed the mean predicted mortality for the interval. These 20 intervals were defined using 5 percentile increments of the probability of dying for D1. The use of intervals based on percentiles ensures similarity in the level of predicted risk within an interval for V1 and V2, while allowing the proportion of patients contained within that interval to vary across hospitals.
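The two accuracy checks described here, the Mann-Whitney estimate of the AROC and percentile-based calibration intervals, can be implemented directly. This is a generic sketch, not the study's SAS/R code:

```python
import bisect

def area_under_roc(scores, labels):
    """Mann-Whitney estimate of the AROC: the probability that a randomly
    chosen case (label 1) receives a higher score than a randomly chosen
    non-case (label 0), counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_intervals(scores, labels, cutpoints):
    """Bin cases by predicted risk (cutpoints, e.g. 5-percentile increments
    from the derivation set) and return (mean predicted, observed) per bin."""
    bins = [[] for _ in range(len(cutpoints) + 1)]
    for s, y in zip(scores, labels):
        bins[bisect.bisect_right(cutpoints, s)].append((s, y))
    return [(sum(s for s, _ in b) / len(b), sum(y for _, y in b) / len(b))
            for b in bins if b]
```

Because the bin boundaries come from the derivation set's percentiles, the level of predicted risk within each bin stays comparable across V1 and V2 even though the proportion of patients per bin varies, as described above.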

Relationships With Other Adverse Events

We then used each patient's calculated probability of 30‐day mortality to predict the occurrence of other adverse events. We first derived scoring weights (ie, regression parameter estimates) from logistic regression models designed to relate each secondary outcome to the predicted 30‐day mortality using D1 data. These scoring weights were then applied to the V1 and V2 patients' predicted 30‐day mortality to generate their predicted probabilities for: in‐hospital death; a stay in an intensive care unit (ICU) at some point during the hospitalization; the occurrence of a condition not present on admission (a complication; see the Supporting Information, Appendix I, in the online version of this article); palliative care status at the time of discharge (International Classification of Diseases, 9th Revision code V66.7); 30‐day readmission; and death within 180 days (determined for the first hospitalization of the patient in the calendar year, using hospital administrative data and the Social Security Death Index). Additionally, for V1 patients but not V2, due to unavailability of data, we predicted the occurrence of an unplanned transfer to an ICU within the first 24 hours for those not initially admitted to the ICU, and resuscitative efforts for cardiopulmonary arrests (code blue, as determined from hospital paging records and resuscitation documentation, with the realization that some resuscitations within the intensive care units might be undercaptured by this approach). Predicted vs actual outcomes were assessed using SAS version 9.2 by examining the areas under the receiver operating characteristic curves generated by the ROC statement of PROC LOGISTIC.
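A minimal sketch of this two-step approach: each secondary event is related to the predicted 30-day mortality (and, as noted in the Table 2 footnote, its logarithm) through a logistic model. The weights below are invented stand-ins for those fit on the D1 data:

```python
import math

def secondary_event_probability(p30, b0=-3.0, b_p=5.0, b_logp=0.3):
    """Predicted probability of a secondary event (e.g. an ICU stay),
    given the calculated probability of 30-day death p30. The default
    weights are illustrative only; the study fit separate weights on D1
    for each outcome."""
    logit = b0 + b_p * p30 + b_logp * math.log(p30)
    return 1.0 / (1.0 + math.exp(-logit))
```

With weights of this sign, the predicted event probability rises monotonically with the mortality risk, matching the rising curves in Figure 2.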

Implications for Care Redesign

To illustrate how the mortality prediction provides a context for organizing the work of multiple health professionals, we created 5 risk strata[10] based on quintiles of D1 mortality risk. To display the time frame in which the peak risk of death occurs, we plotted the unadjusted hazard function per strata using SAS PROC LIFETEST.
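The strata assignment is a simple cutpoint lookup. The cutpoints below are the 20th/40th/60th/80th-percentile interval boundaries reported in the Figure 1 legend, with stratum 1 the highest-risk fifth; this is an illustrative sketch of the bookkeeping, not the study's code:

```python
import bisect

# Quintile boundaries of the D1 predicted 30-day mortality
# (20th, 40th, 60th, and 80th percentiles from the Figure 1 legend).
QUINTILE_CUTS = [0.0033, 0.0108, 0.0247, 0.0669]

def risk_stratum(p30):
    """Map a predicted 30-day mortality to stratum 1 (highest risk)
    through 5 (lowest risk)."""
    return 5 - bisect.bisect_right(QUINTILE_CUTS, p30)
```

A patient with a predicted risk above 0.0669 lands in stratum 1; one at or below 0.0033 lands in stratum 5.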

RESULTS

Table 1 displays the risk factors used in the 30‐day mortality prediction rule, their distribution in the populations of interest, and the frequency of the outcomes of interest. The derivation (D1) and validation (V1) populations were clinically similar; the patients of hospital V2 differed in the proportion of risk factors and outcomes. The scoring weights or parameter estimates for the risk factors are given in the Appendix (see Supporting Information, Appendix I, in the online version of this article).

Demographics, Risk Factors, and Outcomes

Characteristic | D1 Derivation (Hospital A), N = 56,003 | V1 Validation (Hospital A), N = 28,441 | V2 Validation (Hospital V2), N = 14,867

NOTE: Abbreviations: ICU, intensive care unit; NA, not applicable.

The 24 risk factors used in the prediction rule
Age in years, mean (standard deviation) | 59.8 (19.8) | 60.2 (19.8) | 66.4 (20.2)
Female | 33,185 (59.3%) | 16,992 (59.7%) | 8,935 (60.1%)
Respiratory failure on admission | 2,235 (4.0%) | 1,198 (4.2%) | 948 (6.4%)
Previous hospitalization | 19,560 (34.9%) | 10,155 (35.7%) | 5,925 (39.9%)
Hospitalization billed as an emergency admission[38] | 30,116 (53.8%) | 15,445 (54.3%) | 11,272 (75.8%)
Admitted to medicine service | 29,472 (52.6%) | 16,260 (57.2%) | 11,870 (79.8%)
Heart failure at the time of admission | 7,558 (13.5%) | 4,046 (14.2%) | 2,492 (16.8%)
Injury such as fractures or trauma at the time of admission | 7,007 (12.5%) | 3,612 (12.7%) | 2,205 (14.8%)
Sepsis at the time of admission | 2,278 (4.1%) | 1,025 (3.6%) | 850 (5.7%)
Current or past atrial fibrillation | 8,329 (14.9%) | 4,657 (16.4%) | 2,533 (17.0%)
Current or past metastatic cancer | 2,216 (4.0%) | 1,109 (3.9%) | 428 (2.9%)
Current or past cancer without metastases | 5,260 (9.4%) | 2,668 (9.4%) | 1,248 (8.4%)
Current or past history of leukemia or lymphoma | 1,025 (1.8%) | 526 (1.9%) | 278 (1.9%)
Current or past cognitive deficiency | 3,708 (6.6%) | 1,973 (6.9%) | 2,728 (18.4%)
Current or past history of other neurological conditions (such as Parkinson's disease, multiple sclerosis, epilepsy, coma, stupor, brain damage) | 4,671 (8.3%) | 2,537 (8.9%) | 1,606 (10.8%)
Maximum serum blood urea nitrogen (mg/dL), continuous | 21.9 (15.1) | 21.8 (15.1) | 25.9 (18.2)
Maximum white blood count (1,000/µL), continuous | 2.99 (4.00) | 3.10 (4.12) | 3.15 (3.81)
Minimum platelet count (1,000/µL), continuous | 240.5 (85.5) | 228.0 (79.6) | 220.0 (78.6)
Minimum hemoglobin (g/dL), continuous | 12.3 (1.83) | 12.3 (1.9) | 12.1 (1.9)
Minimum serum albumin (g/dL) <3.14, binary indicator | 11,032 (19.7%) | 3,848 (13.5%) | 2,235 (15.0%)
Minimum arterial pH <7.3, binary indicator | 1,095 (2.0%) | 473 (1.7%) | 308 (2.1%)
Minimum arterial pO2 (mm Hg) <85, binary indicator | 1,827 (3.3%) | 747 (2.6%) | 471 (3.2%)
Maximum serum troponin (ng/mL) >0.4, binary indicator | 6,268 (11.2%) | 1,154 (4.1%) | 2,312 (15.6%)
Maximum serum lactate (mEq/L) >4.0, binary indicator | 533 (1.0%) | 372 (1.3%) | 106 (0.7%)

Outcomes of interest
30‐day mortality, primary outcome of interest | 2,775 (5.0%) | 1,412 (5.0%) | 1,193 (8.0%)
In‐hospital mortality | 1,392 (2.5%) | 636 (2.2%) | 467 (3.1%)
180‐day mortality (deaths/first hospitalization for patient that year) | 2,928/38,995 (7.5%) | 1,657/21,377 (7.8%) | 1,180/10,447 (11.3%)
Unplanned transfer to ICU within first 24 hours/number of patients with data not admitted to ICU | 434/46,647 (0.9%) | 276/25,920 (1.1%) | NA
Ever in ICU during hospitalization/those with ICU information available | 5,906/55,998 (10.6%) | 3,191/28,429 (11.2%) | 642/14,848 (4.3%)
Any complication | 6,768 (12.1%) | 2,447 (8.6%) | 868 (5.8%)
Cardiopulmonary arrest | 228 (0.4%) | 151 (0.5%) | NA
Patients discharged with palliative care V code | 1,151 (2.1%) | 962 (3.4%) | 340 (2.3%)
30‐day rehospitalization/patients discharged alive | 6,616/54,606 (12.1%) | 3,602/27,793 (13.0%) | 2,002/14,381 (13.9%)

Predicting 30‐Day Mortality

The areas under the ROC curve (95% confidence interval [CI]) for the D1, V1, and V2 populations were 0.876 (95% CI, 0.870‐0.882), 0.885 (95% CI, 0.877‐0.893), and 0.883 (95% CI, 0.875‐0.892), respectively. The calibration curves for all 3 populations are shown in Figure 1. The overlap of symbols indicates that the level of predicted risk matched actual mortality for most intervals, with slight underprediction for those in the highest risk percentiles.

Figure 1
Calibration. The horizontal axis displays 20 intervals of risk, containing 5‐percentile increments of the predicted mortality based on the D1 population. The vertical axis displays the actual proportion of patients within the interval who died within 30 days. The cluster of 3 symbols represent the mean predicted chance of dying for the derivation and 2 validation populations, respectively. The crosshatches represent the actual proportion of patients within each interval who died, with the 95% binomial confidence limits represented by the length of the vertical bar. The 20 intervals (named for the highest percentile within the interval) with corresponding probabilities of death: 5th percentile (probability 0‐0.0008); 10th percentile (probability 0.0008‐0.0011); 15th percentile (probability 0.0011‐0.0021); 20 (0.0021‐0.0033); 25 (0.0033‐0.0049); 30 (0.0049‐0.0067); 35 (0.0067‐0.0087); 40 (0.0087‐0.0108); 45 (0.0108‐0.0134); 50 (0.0134‐0.0165); 55 (0.0165‐0.0201); 60 (0.0201‐0.0247); 65 (0.0247‐0.0308); 70 (0.0308‐0.0392); 75 (0.0392‐0.0503); 80 (0.0503‐0.0669); 85 (0.0669‐0.0916); 90 (0.0916‐0.1308); 95 (0.1308‐0.2186); 100 (0.2186‐1.0).

Example of Risk Strata

Figure 2 displays the relationship between the predicted probability of dying within 30 days and the outcomes of interest for V1, and illustrates the Pareto principle for defining high‐ and low‐risk subgroups. Most of the 30‐day deaths (74.7% of D1, 74.2% of V1, and 85.3% of V2) occurred in the small subset of patients with a predicted probability of death exceeding 0.067 (the top quintile of risk of D1, the top 18% of V1, and the top 29.8% of V2). In contrast, the mortality rate for those with a predicted risk of 0.0033 or less was 0.02% for the lowest quintile of risk in D1, 0.07% for the 19.3% having the lowest risk in V1, and 0% for the 9.7% of patients with the lowest risk in V2. Figure 3 indicates that the risk of dying peaks within the first few days of the hospitalization. Moreover, those in the high‐risk group remained at elevated risk relative to the lower‐risk strata for at least 100 days.
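The Pareto-style summary quoted here, the share of deaths captured versus the share of patients flagged at a risk threshold, can be computed as in this sketch (toy data, not the study's):

```python
def deaths_captured(predictions, died, threshold):
    """Return (share of all deaths among patients above the threshold,
    share of all patients flagged by the threshold)."""
    flagged = [(p, d) for p, d in zip(predictions, died) if p > threshold]
    total_deaths = sum(died)
    death_share = (sum(d for _, d in flagged) / total_deaths
                   if total_deaths else 0.0)
    return death_share, len(flagged) / len(predictions)
```

Applied to the V1 data with the 0.067 threshold, this kind of calculation yields the figures reported above: roughly three quarters of deaths concentrated in under a fifth of patients.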

Figure 2
Risk of outcomes within intervals of mortality risk (validation hospital V1). The curves for the other 2 populations (D1, V2) were similar (see the Supporting information, Appendix II, in the online version of this article). Examples of possible risk strata are indicated.
Figure 3
Instantaneous risk of death (hazard function) following hospital admission—validation hospital V1. For sake of clarity, 5 ordinal categories of predicted risk are shown. The curves for the other 2 populations (D1, V2) were similar and are shown in Appendix II (see the Supporting Information in the online version of this article).

Relationships With Other Outcomes of Interest

The curves in Figure 2 display the occurrence of adverse events. The rising slopes indicate that the risk of other events increases with the risk of dying within 30 days (for details and data for D1 and V2, see the Supporting Information, Appendix II, in the online version of this article). The strength of these relationships is quantified by the areas under the ROC curve (Table 2). The probability of 30‐day mortality strongly predicted the occurrence of in‐hospital death, palliative care status, and death within 180 days; modestly predicted having an unplanned transfer to an ICU within the first 24 hours of the hospitalization and undergoing resuscitative efforts for cardiopulmonary arrest; and weakly predicted intensive care unit use at some point in the hospitalization, occurrence of a condition not present on admission (complication), and rehospitalization within 30 days.

Area Under the Receiver Operating Characteristic Curve for Models Predicting Secondary Outcomes of Interest

Outcome | D1 Derivation (Hospital A) | V1 Validation (Hospital A) | V2 Validation (Hospital V2)
Unplanned transfer to an ICU within the first 24 hours (for those not admitted to an ICU) | 0.712 (0.690‐0.734) | 0.735 (0.709‐0.761) | NA
Resuscitation efforts for cardiopulmonary arrest | 0.709 (0.678‐0.739) | 0.737 (0.700‐0.775) | NA
ICU stay at some point during the hospitalization | 0.659 (0.652‐0.666) | 0.663 (0.654‐0.672) | 0.702 (0.682‐0.722)
Intrahospital complication (condition not present on admission) | 0.682 (0.676‐0.689) | 0.624 (0.613‐0.635) | 0.646 (0.628‐0.664)
Palliative care status | 0.883 (0.875‐0.891) | 0.887 (0.878‐0.896) | 0.900 (0.888‐0.912)
Death within hospitalization | 0.861 (0.852‐0.870) | 0.875 (0.862‐0.887) | 0.880 (0.866‐0.893)
30‐day readmission | 0.685 (0.679‐0.692) | 0.685 (0.676‐0.694) | 0.677 (0.665‐0.689)
Death within 180 days | 0.890 (0.885‐0.896) | 0.889 (0.882‐0.896) | 0.873 (0.864‐0.883)

NOTE: Mann–Whitney estimates (95% Wald confidence limits). Each outcome of interest was predicted by the patients' calculated probability of dying within 30 days and its logarithm. Details are provided in Appendix II. Abbreviations: ICU, intensive care unit; NA, not applicable.

DISCUSSION

The primary contribution of our work concerns the number and strength of associations between the probability of dying within 30 days and other events, and the implications for organizing the healthcare delivery model. We also add to the growing evidence that death within 30 days can be accurately predicted at the time of admission from demographic information, modest levels of diagnostic information, and clinical laboratory values. We developed a new prediction rule with excellent accuracy that compares well to a rule recently developed by the Kaiser Permanente system.[13, 14] Feasibility considerations are likely to be the ultimate determinant of which prediction rule a health system chooses.[13, 14, 29] An independent evaluation of the candidate rules applied to the same data is required to compare their accuracy.

These results suggest a context for the coordination of clinical care processes, although mortality risk is not the only domain health systems must address. For illustrative purposes, we will refer to the risk strata shown in Figure 2. After the decisions to admit the patient to the hospital and whether or not surgical intervention is needed, the next decision concerns the level and type of nursing care needed.[10] Recent studies continue to show challenges both with unplanned transfers to intensive care units[21] and with delivering care consistently concordant with patient wishes.[6, 30] The level of risk for multiple adverse outcomes suggests stratum 1 patients would be the priority group for perfecting the placement and preference assessment process. Our institution is currently piloting an internal placement guideline recommending that nonpalliative patients in the top 2.5 percentile of mortality risk be placed initially in either an intensive or intermediate care unit to receive the potential benefit of higher nursing staffing levels.[31] However, mortality risk cannot be the only criterion used for placement, as demonstrated by its relatively weak association with overall ICU utilization. Our findings may reflect the role of unmeasured factors such as the need for mechanical ventilation, patient preference for comfort care, bed availability, change in patient condition after admission, and inconsistent application of admission criteria.[17, 21, 32, 33, 34]

After the placement decision, the team could decide if the usual level of monitoring, physician rounding, and care coordination would be adequate for the level of risk or whether an additional anticipatory approach is needed. The weak relationship between the risk of death and incidence of complications, although not a new finding,[35, 36] suggests routine surveillance activities need to be conducted on all patients regardless of risk to detect a complication, but that a rescue plan be developed in advance for high mortality risk patients, for example strata 1 and 2, in the event they should develop a complication.[36] Inclusion of the patient's risk strata as part of the routine hand‐off communication among hospitalists, nurses, and other team members could provide a succinct common alert for the likelihood of adverse events.

The 30‐day mortality risk also informs the transition care plan following hospitalization, given the strong association with death within 180 days and the persistence of this risk (Figure 3). Again, communication of the risk status (stratum 1) to the team caring for the patient after the hospitalization provides a common reference for prognosis and the level of attention needed. However, the prediction accuracy is not sufficient for referring high‐risk patients directly to hospice; rather, it identifies the high‐risk subset having the most urgent need to have their preferences for future end‐of‐life care understood and addressed. The weak relationship of mortality risk with 30‐day readmissions indicates that our rule would have a limited role in identifying readmission risk per se. Others have noted the difficulty in accurately predicting readmissions, most likely because the underlying causes are multifactorial.[37] Our results suggest that 1 dynamic for readmission is the risk of dying, and so the underlying causes of this risk should be addressed in the transition plan.

There are a number of limitations with our study. First, this rule was developed and validated on data from only 2 institutions, assembled retrospectively, with diagnostic information determined from administrative data. One cannot assume the accuracy will carry over to other institutions[29] or when there is diagnostic uncertainty at the time of admission. Second, the 30‐day mortality risk should not be used as the sole criterion for determining the service intensity for individual patients because of issues with calibration, interpretation of risk, and confounding. The calibration curves (Figure 1) show the slight underprediction of the risk of dying for high‐risk groups. Other studies have also noted problems with precise calibration in validation datasets.[13, 14] Caution is also needed in the interpretation of what it means to be at high risk. Most patients in stratum 1 were alive at 30 days; therefore, being at high risk is not a death sentence. Furthermore, the relative weights of the risk factors reflect (ie, are confounded by) the level of treatment rendered. Some deaths within the higher‐risk percentiles undoubtedly occurred in patients choosing a palliative rather than a curative approach, perhaps partially explaining the slight underprediction of deaths. Conversely, the low mortality experienced by patients within the lower‐risk strata may indicate the treatment provided was effective. Low mortality risk does not imply less care is needed.

A third limitation is that we have not defined the thresholds of risk that should trigger placement and care intensity, although we provide examples on how this could be done. Each institution will need to calibrate the thresholds and associated decision‐making processes according to its own environment.[14] Interested readers can explore the sensitivity and specificity of various thresholds\ by using the tables in the Appendix (see the Supporting information, Appendix II, in the online version of this article). Finally, we do not know if identifying the mortality risk on admission will lead to better outcomes[19, 29]

CONCLUSIONS

Death within 30 days can be predicted with information known at the time of admission, and is associated with the risk of having other adverse events. We believe the probability of death can be used to define strata of risk that provide a succinct common reference point for the multidisciplinary team to anticipate the clinical course of subsets of patients and intervene with proportional intensity.

Acknowledgments

This work benefited from multiple conversations with Patricia Posa, RN, MSA, Elizabeth Van Hoek, MHSA, and the Redesigning Care Task Force of St. Joseph Mercy Hospital, Ann Arbor, Michigan.

Disclosure: Nothing to report.

References
  1. Brodie BR, Stuckey TD, Wall TC, et al. Importance of time to reperfusion for 30-day and late survival and recovery of left ventricular function after primary angioplasty for acute myocardial infarction. J Am Coll Cardiol. 1998;32:1312-1319.
  2. Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345:1368-1377.
  3. ATLANTIS, ECASS, NINDS rt-PA Study Group Investigators. Association of outcome with early stroke treatment: pooled analysis of ATLANTIS, ECASS, and NINDS rt-PA stroke trials. Lancet. 2004;363:768-774.
  4. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34:563-570.
  5. National Hospice and Palliative Care Organization. NHPCO facts and figures: hospice care in America, 2010 edition. Available at: http://www.nhpco.org. Accessed October 3, 2011.
  6. Mack JW, Weeks JC, Wright AA, Block SD, Prigerson HG. End-of-life discussions, goal attainment, and distress at the end of life: predictors and outcomes of receipt of care consistent with preferences. J Clin Oncol. 2010;28:1203-1208.
  7. Committee on Quality of Health Care in America, Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
  8. Levy MM, Dellinger RP, Townsend SR, et al. The surviving sepsis campaign: results of an international guideline-based performance improvement program targeting severe sepsis. Intensive Care Med. 2010;36:222-231.
  9. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336:243-250.
  10. Kellett J, Deane B. The simple clinical score predicts mortality for 30 days after admission to an acute medical unit. Q J Med. 2006;99:771-781.
  11. Pine M, Jordan HS, Elixhauser A, et al. Enhancement of claims data to improve risk adjustment of hospital mortality. JAMA. 2007;297:71-76.
  12. Tabak YP, Johannes RS, Silber JH. Using automated clinical data for risk adjustment. Med Care. 2007;45:789-805.
  13. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232-239.
  14. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798-803.
  15. Silke B, Kellett J, Rooney T, Bennett K, O'Riordan D. An improved medical admissions risk system using multivariable fractional polynomial logistic regression modeling. Q J Med. 2010;103:23-32.
  16. Brabrand M, Folkestad L, Clausen NG, Knudsen T, Hallas J. Risk scoring systems for adults admitted to the emergency department: a systematic review. Scand J Trauma Resusc Emerg Med. 2010;18:8.
  17. Wong J, Taljaard M, Forster AJ, Escobar GJ, van Walraven C. Derivation and validation of a model to predict daily risk of death in hospital. Med Care. 2011;49:734-743.
  18. Asadollahi K, Hasting IM, Gill GV, Beeching NJ. Prediction of hospital mortality from admission laboratory data and patient age: a simple model. Emerg Med Australas. 2011;23:354-363.
  19. Siontis GCM, Tzoulaki I, Ioannidis JPA. Predicting death: an empirical evaluation of predictive tools for mortality. Arch Intern Med. 2011;171:1721-1726.
  20. Liu V, Kipnis P, Gould MK, Escobar GJ. Length of stay predictions: improvements through the use of automated laboratory and comorbidity variables. Med Care. 2010;48:739-744.
  21. Escobar GJ, Greene JD, Gardner MN, Marelich GP, Quick B, Kipnis P. Intra-hospital transfers to a higher level of care: contribution to total hospital and intensive care unit (ICU) mortality and length of stay (LOS). J Hosp Med. 2011;6:74-80.
  22. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010;48:981-988.
  23. Baker DW, Einstadter D, Thomas CL, Husak SS, Gordon NH, Cebul RD. Mortality trends during a program that publicly reported hospital performance. Med Care. 2002;40:879-890.
  24. Liaw A, Wiener M. Classification and regression by randomForest. R News. 2002;2:18-22.
  25. Zeileis A, Hothorn T, Hornik K. Model-based recursive partitioning. J Comput Graph Stat. 2008;17:492-514.
  26. Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and Regression Trees. Belmont, CA: Wadsworth Inc.; 1984.
  27. Harrell FE, Califf RM, Pryor DB, Lee KL, Rosati RA. Evaluating the yield of medical tests. JAMA. 1982;247:2543-2546.
  28. Ohman EM, Granger CB, Harrington RA, Lee KL. Risk stratification and therapeutic decision making in acute coronary syndromes. JAMA. 2000;284:876-878.
  29. Grady D, Berkowitz SA. Why is a good clinical prediction rule so hard to find? Arch Intern Med. 2011;171:1701-1702.
  30. Silveira MJ, Kim SYH, Langa KM. Advance directives and outcomes of surrogate decision making before death. N Engl J Med. 2010;362:1211-1218.
  31. Needleman J, Buerhaus P, Pankratz S, Leibson CL, Stevens SR, Harris M. Nurse staffing and inpatient hospital mortality. N Engl J Med. 2011;364:1037-1045.
  32. Simchen E, Sprung CL, Galai N, et al. Survival of critically ill patients hospitalized in and out of intensive care. Crit Care Med. 2007;35:449-457.
  33. Walter KL, Siegler M, Hall JB. How decisions are made to admit patients to medical intensive care units (MICUs): a survey of MICU directors at academic medical centers across the United States. Crit Care Med. 2008;36:414-420.
  34. Litvak E, Pronovost P. Rethinking rapid response teams. JAMA. 2010;304:1375-1376.
  35. Silber JH, Williams SV, Krakauer H, Schwartz JS. Hospital and patient characteristics associated with death after surgery: a study of adverse occurrence and failure to rescue. Med Care. 1992;30:615-629.
  36. Ghaferi AA, Birkmeyer JD, Dimick JB. Variation in hospital mortality associated with inpatient surgery. N Engl J Med. 2009;361:1368-1375.
  37. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306:1688-1698.
  38. Department of Health and Human Services, Centers for Medicare and Medicaid Services. CMS Manual System, Pub 100-04, Medicare Claims Processing, November 3, 2006. Available at: http://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/Downloads/R1104CP.pdf. Accessed September 5, 2012.
Journal of Hospital Medicine. 8(5):229-235.

Favorable health outcomes are more likely to occur when the healthcare team quickly identifies and responds to patients at risk.[1, 2, 3] However, the treatment process can break down during handoffs if the clinical condition and active issues are not well communicated.[4] Patients whose decline cannot be reversed also challenge the health team. Many are referred to hospice late,[5] and some do not receive the type of end‐of‐life care matching their preferences.[6]

Progress toward the elusive goal of more effective and efficient care might be made via an industrial engineering approach, mass customization, in which bundles of services are delivered based on the anticipated needs of subsets of patients.[7, 8] An underlying rationale is the frequent finding that a small proportion of individuals experiences the majority of the events of interest, commonly referenced as the Pareto principle.[7] Clinical prediction rules can help identify these high‐risk subsets.[9] However, as more condition‐specific rules become available, the clinical team faces logistical challenges when attempting to incorporate these into practice. For example, which team member will be responsible for generating the prediction and communicating the level of risk? What actions should follow for a given level of risk? What should be done for patients with conditions not addressed by an existing rule?

In this study, we present our rationale for health systems to implement a process for generating mortality predictions at the time of admission on most, if not all, adult patients as a context for the activities of the various clinical team members. Recent studies demonstrate that in-hospital or 30-day mortality can be predicted with substantial accuracy using information available at the time of admission.[10, 11, 12, 13, 14, 15, 16, 17, 18, 19] Relationships are beginning to be explored among the risk factors for mortality and other outcomes such as length of stay, unplanned transfers to intensive care units, 30-day readmissions, and extended care facility placement.[10, 20, 21, 22] We extend this work by examining how a number of adverse events can be understood through their relationship with the risk of dying. We begin by deriving and validating a new mortality prediction rule that uses information our institution could feasibly incorporate into practice.

METHODS

The prediction rule was derived from data on all inpatients (n = 56,003) 18 to 99 years old admitted to St. Joseph Mercy Hospital, Ann Arbor, a community-based, tertiary-care center, from 2008 to 2009. We reference derivation cases as D1, validation cases from the same hospital in the following year (2010) as V1, and data from a second hospital in 2010 as V2. The V2 hospital belonged to the same parent health corporation and shared some physician specialists with D1 and V1 but had separate medical and nursing staff.

The primary outcome predicted is 30-day mortality from the time of admission. We chose 30-day rather than in-hospital mortality to address concerns of potential confounding between the duration of the hospital stay and the likelihood of dying in the hospital.[23] Risk factors were considered for inclusion in the prediction rule based on their prevalence, conceptual importance, and univariable association with death (details provided in the Supporting Information, Appendices I and II, in the online version of this article). The types of risk factors considered were: patient diagnoses as of the time of admission, obtained from hospital administrative data and grouped by the 2011 Clinical Classification Software (http://www.hcupus.ahrq.gov/toolssoftware/ccs/ccs.jsp#download, accessed June 6, 2012); administrative data from previous hospitalizations within the health system in the preceding 12 months; and the worst value of clinical laboratory blood tests obtained within 30 days prior to the time of admission. When a given patient had missing values for the laboratory tests of interest, we imputed a normal value, assuming the clinician had not ordered these tests because he/she expected the patient would have normal results. The imputed normal values were derived from available results from patients discharged alive with short hospital stays (≤3 days) in 2007 to 2008. The datasets were built and analyzed using SAS versions 9.1 and 9.2 (SAS Institute, Inc., Cary, NC) and R (R Foundation for Statistical Computing, Vienna, Austria; http://www.R-project.org).
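The imputation step above can be sketched as follows. This is a minimal Python illustration, not the study's SAS/R code; the variable names and the "normal" values in the lookup table are hypothetical stand-ins for the values the authors derived from short-stay survivors.

```python
# Hypothetical "normal" results, standing in for the values the study derived
# from patients discharged alive after short stays. Units follow Table 1.
NORMAL_VALUES = {
    "max_bun": 14.0,         # maximum serum blood urea nitrogen, mg/dL
    "min_albumin": 4.0,      # minimum serum albumin, g/dL
    "min_hemoglobin": 13.5,  # minimum hemoglobin, g/dL
}

def impute_labs(patient: dict) -> dict:
    """Fill in a 'normal' result for any lab that was never ordered,
    on the assumption that an unordered test was expected to be normal."""
    completed = dict(patient)
    for lab, normal in NORMAL_VALUES.items():
        if completed.get(lab) is None:
            completed[lab] = normal
    return completed

# Observed BUN is kept; the two unordered tests receive normal values.
print(impute_labs({"max_bun": 32.0, "min_albumin": None}))
```

The key design choice, as the Methods note, is that missingness is treated as informative of expected normality rather than handled by deletion or model-based imputation.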

Prediction Rule Derivation Using D1 Dataset

Random forest procedures with a variety of variable importance measures were used with D1 data to reduce the number of potential predictor variables.[24] Model‐based recursive partitioning, a technique that combines features of multivariable logistic regression and classification and regression trees, was then used to develop the multivariable prediction model.[25, 26] Model building was done in R, employing functions provided as part of the randomForest and party packages. The final prediction rule consisted of 4 multivariable logistic regression models, each being specific to 1 of 4 possible population subgroups: females with/females without previous hospitalizations, and males with/males without previous hospitalizations. Each logistic regression model contains exactly the same predictor variables; however, the regression coefficients are subgroup specific. Therefore, the predicted probability of 30‐day mortality for a patient having a given set of predictor variables depends on the subgroup to which the patient is a member.
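The structure of the final rule, four subgroup-specific logistic models sharing one predictor set, can be sketched as below. All coefficients here are invented placeholders; the actual scoring weights are in the paper's Appendix I, and only two of the 24 predictors are shown.

```python
import math

# (female, previous_hospitalization) -> (intercept, age coef, BUN coef).
# These numbers are illustrative only, not the published scoring weights.
COEFS = {
    (True, True):   (-7.0, 0.045, 0.020),
    (True, False):  (-7.6, 0.050, 0.018),
    (False, True):  (-6.8, 0.042, 0.022),
    (False, False): (-7.4, 0.048, 0.019),
}

def predict_30day_mortality(female: bool, prev_hosp: bool,
                            age: float, max_bun: float) -> float:
    """Pick the subgroup-specific coefficient set, then apply an
    ordinary logistic-regression prediction (inverse logit)."""
    b0, b_age, b_bun = COEFS[(female, prev_hosp)]
    linear = b0 + b_age * age + b_bun * max_bun
    return 1.0 / (1.0 + math.exp(-linear))

print(round(predict_30day_mortality(True, True, age=80, max_bun=40), 4))
```

Because the coefficients (not just an offset) differ by subgroup, the same risk-factor profile can map to different predicted probabilities depending on sex and hospitalization history, which is the point of the model-based partitioning step.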

Validation, Discrimination, Calibration

The prediction rule was validated by generating a predicted probability of 30-day mortality for each patient in V1 and V2, using their observed risk factor information combined with the scoring weights (ie, regression coefficients) derived from D1, then comparing predicted vs actual outcomes. Discriminatory accuracy is reported as the area under the receiver operating characteristic (ROC) curve, which can range from 0.5 (pure chance) to 1.0 (perfect prediction).[27] Values above 0.8 are often interpreted as indicating strong predictive relationships, values between 0.7 and 0.79 as modest, and values between 0.6 and 0.69 as weak.[28] Model calibration was tested in all datasets across 20 intervals representing the spectrum of mortality risk, by assessing whether or not the 95% confidence limits for the actual proportion of patients dying encompassed the mean predicted mortality for the interval. These 20 intervals were defined using 5-percentile increments of the probability of dying for D1. The use of intervals based on percentiles ensures similarity in the level of predicted risk within an interval for V1 and V2, while allowing the proportion of patients contained within that interval to vary across hospitals.
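The two validation checks, discrimination and calibration, can be sketched generically as follows (a Python re-implementation for illustration, not the authors' SAS/R code).

```python
def auc_mann_whitney(scores, labels):
    """Area under the ROC curve as the Mann-Whitney statistic: the
    probability that a randomly chosen death (label 1) is scored higher
    than a randomly chosen survivor (label 0), counting ties as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def calibration_bins(pred, died, cutpoints):
    """Group patients into risk intervals bounded by derivation-set
    percentile cutpoints; return (mean predicted risk, observed death
    rate, n) per nonempty interval so the two can be compared."""
    bins = [[] for _ in range(len(cutpoints) + 1)]
    for p, d in zip(pred, died):
        i = sum(p > c for c in cutpoints)  # index of the interval containing p
        bins[i].append((p, d))
    return [(sum(p for p, _ in b) / len(b),
             sum(d for _, d in b) / len(b), len(b))
            for b in bins if b]

print(auc_mann_whitney([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))
```

In the study the cutpoints are the 5-percentile increments of D1 risk, so the same intervals are applied unchanged to V1 and V2.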

Relationships With Other Adverse Events

We then used each patient's calculated probability of 30-day mortality to predict the occurrence of other adverse events. We first derived scoring weights (ie, regression parameter estimates) from logistic regression models designed to relate each secondary outcome to the predicted 30-day mortality using D1 data. These scoring weights were then respectively applied to the V1 and V2 patients' predicted 30-day mortality rate to generate their predicted probabilities for: in-hospital death; a stay in an intensive care unit (ICU) at some point during the hospitalization; the occurrence of a condition not present on admission (a complication; see the Supporting Information, Appendix I, in the online version of this article); palliative care status at the time of discharge (International Classification of Diseases, 9th Revision code V66.7); 30-day readmission; and death within 180 days (determined for the first hospitalization of the patient in the calendar year, using hospital administrative data and the Social Security Death Index). Additionally, for V1 patients but not V2, due to unavailability of data, we predicted the occurrence of an unplanned transfer to an intensive care unit within the first 24 hours for those not admitted to the ICU, and resuscitative efforts for cardiopulmonary arrests (code blue, as determined from hospital paging records and resuscitation documentation, with the realization that some resuscitations within the intensive care units might be undercaptured by this approach). Predicted vs actual outcomes were assessed using SAS version 9.2 by examining the areas under the ROC curves generated by the ROC statement of PROC LOGISTIC.
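Per the note to Table 2, each secondary outcome is modeled on the predicted 30-day mortality and its logarithm. The scoring step can be sketched as below; the weights shown are invented placeholders, not the D1-derived estimates.

```python
import math

# Hypothetical D1-derived weights for one secondary outcome:
# logit(risk) = W0 + W_P * p30 + W_LOGP * ln(p30).
W0, W_P, W_LOGP = -1.5, 6.0, 0.45

def secondary_outcome_risk(p30: float) -> float:
    """Map a patient's predicted 30-day mortality (p30, in (0, 1)) to the
    predicted probability of a secondary event via the two-term logistic."""
    linear = W0 + W_P * p30 + W_LOGP * math.log(p30)
    return 1.0 / (1.0 + math.exp(-linear))

print(round(secondary_outcome_risk(0.05), 4))
```

Including both p30 and its logarithm lets the fitted curve bend across the heavily skewed risk distribution instead of forcing a single linear effect on the logit scale.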

Implications for Care Redesign

To illustrate how the mortality prediction provides a context for organizing the work of multiple health professionals, we created 5 risk strata[10] based on quintiles of D1 mortality risk. To display the time frame in which the peak risk of death occurs, we plotted the unadjusted hazard function per strata using SAS PROC LIFETEST.
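Forming the five strata from derivation-set quintiles can be sketched as follows, with stratum 1 denoting the highest-risk quintile as in the text (an illustrative Python sketch; the study's analyses used SAS and R).

```python
def quintile_cutpoints(derivation_risks):
    """Cutpoints at the 20th/40th/60th/80th percentiles of the
    derivation-set predicted mortality (D1 in the study)."""
    s = sorted(derivation_risks)
    return [s[int(len(s) * q) - 1] for q in (0.2, 0.4, 0.6, 0.8)]

def assign_stratum(risk, cuts):
    """Stratum 5 = lowest-risk quintile ... stratum 1 = highest-risk quintile.
    The fixed D1 cutpoints are reused for new patients and new hospitals."""
    for i, cut in enumerate(cuts):
        if risk <= cut:
            return 5 - i
    return 1

cuts = quintile_cutpoints([i / 100 for i in range(1, 101)])
print(assign_stratum(0.95, cuts))
```

Because the cutpoints are frozen from the derivation data, the proportion of patients per stratum can differ at a validation hospital, exactly as seen for V1 and V2 in the Results.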

RESULTS

Table 1 displays the risk factors used in the 30‐day mortality prediction rule, their distribution in the populations of interest, and the frequency of the outcomes of interest. The derivation (D1) and validation (V1) populations were clinically similar; the patients of hospital V2 differed in the proportion of risk factors and outcomes. The scoring weights or parameter estimates for the risk factors are given in the Appendix (see Supporting Information, Appendix I, in the online version of this article).

Table 1. Demographics, Risk Factors, and Outcomes
Columns: Hospital A, D1 derivation (N = 56,003) | Hospital A, V1 validation (N = 28,441) | Hospital V2, V2 validation (N = 14,867)
NOTE: Abbreviations: ICU, intensive care unit; NA, not applicable.
The 24 risk factors used in the prediction rule:
Age in years, mean (standard deviation): 59.8 (19.8) | 60.2 (19.8) | 66.4 (20.2)
Female: 33,185 (59.3%) | 16,992 (59.7%) | 8,935 (60.1%)
Respiratory failure on admission: 2,235 (4.0%) | 1,198 (4.2%) | 948 (6.4%)
Previous hospitalization: 19,560 (34.9%) | 10,155 (35.7%) | 5,925 (39.9%)
Hospitalization billed as an emergency admission[38]: 30,116 (53.8%) | 15,445 (54.3%) | 11,272 (75.8%)
Admitted to medicine service: 29,472 (52.6%) | 16,260 (57.2%) | 11,870 (79.8%)
Heart failure at the time of admission: 7,558 (13.5%) | 4,046 (14.2%) | 2,492 (16.8%)
Injury such as fractures or trauma at the time of admission: 7,007 (12.5%) | 3,612 (12.7%) | 2,205 (14.8%)
Sepsis at the time of admission: 2,278 (4.1%) | 1,025 (3.6%) | 850 (5.7%)
Current or past atrial fibrillation: 8,329 (14.9%) | 4,657 (16.4%) | 2,533 (17.0%)
Current or past metastatic cancer: 2,216 (4.0%) | 1,109 (3.9%) | 428 (2.9%)
Current or past cancer without metastases: 5,260 (9.34%) | 2,668 (9.4%) | 1,248 (8.4%)
Current or past history of leukemia or lymphoma: 1,025 (1.8%) | 526 (1.9%) | 278 (1.9%)
Current or past cognitive deficiency: 3,708 (6.6%) | 1,973 (6.9%) | 2,728 (18.4%)
Current or past history of other neurological conditions (such as Parkinson's disease, multiple sclerosis, epilepsy, coma, stupor, brain damage): 4,671 (8.3%) | 2,537 (8.9%) | 1,606 (10.8%)
Maximum serum blood urea nitrogen (mg/dL), continuous: 21.9 (15.1) | 21.8 (15.1) | 25.9 (18.2)
Maximum white blood count (1,000/UL), continuous: 2.99 (4.00) | 3.10 (4.12) | 3.15 (3.81)
Minimum platelet count (1,000/UL), continuous: 240.5 (85.5) | 228.0 (79.6) | 220.0 (78.6)
Minimum hemoglobin (g/dL), continuous: 12.3 (1.83) | 12.3 (1.9) | 12.1 (1.9)
Minimum serum albumin (g/dL) <3.14, binary indicator: 11,032 (19.7%) | 3,848 (13.53%) | 2,235 (15.0%)
Minimum arterial pH <7.3, binary indicator: 1,095 (2.0%) | 473 (1.7%) | 308 (2.1%)
Minimum arterial pO2 (mm Hg) <85, binary indicator: 1,827 (3.3%) | 747 (2.6%) | 471 (3.2%)
Maximum serum troponin (ng/mL) >0.4, binary indicator: 6,268 (11.2%) | 1,154 (4.1%) | 2,312 (15.6%)
Maximum serum lactate (mEq/L) >4.0, binary indicator: 533 (1.0%) | 372 (1.3%) | 106 (0.7%)
Outcomes of interest:
30-day mortality (primary outcome of interest): 2,775 (5.0%) | 1,412 (5.0%) | 1,193 (8.0%)
In-hospital mortality: 1,392 (2.5%) | 636 (2.2%) | 467 (3.1%)
180-day mortality (deaths/first hospitalization for patient that year): 2,928/38,995 (7.5%) | 1,657/21,377 (7.8%) | 1,180/10,447 (11.3%)
Unplanned transfer to ICU within first 24 hours/number of patients with data not admitted to ICU: 434/46,647 (0.9%) | 276/25,920 (1.1%) | NA
Ever in ICU during hospitalization/those with ICU information available: 5,906/55,998 (10.6%) | 3,191/28,429 (11.2%) | 642/14,848 (4.32%)
Any complication: 6,768 (12.1%) | 2,447 (8.6%) | 868 (5.8%)
Cardiopulmonary arrest: 228 (0.4%) | 151 (0.5%) | NA
Patients discharged with palliative care V code: 1,151 (2.1%) | 962 (3.4%) | 340 (2.3%)
30-day rehospitalization/patients discharged alive: 6,616/54,606 (12.1%) | 3,602/27,793 (13.0%) | 2,002/14,381 (13.9%)

Predicting 30‐Day Mortality

The areas under the ROC curve for the D1, V1, and V2 populations were 0.876 (95% confidence interval [CI], 0.870-0.882), 0.885 (95% CI, 0.877-0.893), and 0.883 (95% CI, 0.875-0.892), respectively. The calibration curves for all 3 populations are shown in Figure 1. The overlap of symbols indicates that the level of predicted risk matched actual mortality for most intervals, with slight underprediction for those in the highest risk percentiles.

Figure 1
Calibration. The horizontal axis displays 20 intervals of risk, containing 5‐percentile increments of the predicted mortality based on the D1 population. The vertical axis displays the actual proportion of patients within the interval who died within 30 days. The cluster of 3 symbols represent the mean predicted chance of dying for the derivation and 2 validation populations, respectively. The crosshatches represent the actual proportion of patients within each interval who died, with the 95% binomial confidence limits represented by the length of the vertical bar. The 20 intervals (named for the highest percentile within the interval) with corresponding probabilities of death: 5th percentile (probability 0‐0.0008); 10th percentile (probability 0.0008‐0.0011); 15th percentile (probability 0.0011‐0.0021); 20 (0.0021‐0.0033); 25 (0.0033‐0.0049); 30 (0.0049‐0.0067); 35 (0.0067‐0.0087); 40 (0.0087‐0.0108); 45 (0.0108‐0.0134); 50 (0.0134‐0.0165); 55 (0.0165‐0.0201); 60 (0.0201‐0.0247); 65 (0.0247‐0.0308); 70 (0.0308‐0.0392); 75 (0.0392‐0.0503); 80 (0.0503‐0.0669); 85 (0.0669‐0.0916); 90 (0.0916‐0.1308); 95 (0.1308‐0.2186); 100 (0.2186‐1.0).

Example of Risk Strata

Figure 2 displays the relationship between the predicted probability of dying within 30 days and the outcomes of interest for V1, and illustrates the Pareto principle for defining high- and low-risk subgroups. Most of the 30-day deaths (74.7% of D1, 74.2% of V1, and 85.3% of V2) occurred in the small subset of patients with a predicted probability of death exceeding 0.067 (the top quintile of risk of D1, the top 18% of V1, and the top 29.8% of V2). In contrast, the mortality rate for those with a predicted risk of 0.0033 or less was 0.02% for the lowest quintile of risk in D1, 0.07% for the 19.3% having the lowest risk in V1, and 0% for the 9.7% of patients with the lowest risk in V2. Figure 3 indicates that the risk of dying peaks within the first few days of the hospitalization. Moreover, those in the high-risk group remained at elevated risk relative to the lower-risk strata for at least 100 days.
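The Pareto-style summary above reduces to a simple computation: what share of all deaths falls in the subset whose predicted risk exceeds a high-risk cutoff. A minimal Python sketch (illustrative data, with the 0.067 cutoff quoted in the text):

```python
def deaths_captured(risks, died, cutoff=0.067):
    """Return (fraction of all deaths occurring in the flagged subset,
    fraction of all patients flagged), where the flagged subset is
    everyone whose predicted risk exceeds the cutoff."""
    flagged = [d for r, d in zip(risks, died) if r > cutoff]
    return sum(flagged) / sum(died), len(flagged) / len(risks)

# Toy example: 2 of 3 deaths occur among the half of patients flagged.
print(deaths_captured([0.5, 0.3, 0.01, 0.02], [1, 1, 0, 1]))
```

Applied to the study populations, this pair of fractions is what yields statements such as "74.2% of V1 deaths occurred in the top 18% of patients."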

Figure 2
Risk of outcomes within intervals of mortality risk (validation hospital V1). The curves for the other 2 populations (D1, V2) were similar (see the Supporting information, Appendix II, in the online version of this article). Examples of possible risk strata are indicated.
Figure 3
Instantaneous risk of death (hazard function) following hospital admission—validation hospital V1. For the sake of clarity, 5 ordinal categories of predicted risk are shown. The curves for the other 2 populations (D1, V2) were similar and are shown in Appendix II (see the Supporting Information in the online version of this article).

Relationships With Other Outcomes of Interest

The graphical curves of Figure 2 represent the occurrence of adverse events. The rising slopes indicate that the risk for other events increases with the risk of dying within 30 days (for details and data for D1 and V2, see the Supporting Information, Appendix II, in the online version of this article). The strength of these relationships is quantified by the areas under the ROC curve (Table 2). The probability of 30-day mortality strongly predicted the occurrence of in-hospital death, palliative care status, and death within 180 days; modestly predicted having an unplanned transfer to an ICU within the first 24 hours of the hospitalization and undergoing resuscitative efforts for cardiopulmonary arrest; and weakly predicted intensive care unit use at some point in the hospitalization, occurrence of a condition not present on admission (complication), and being rehospitalized within 30 days.

Table 2. Area Under the Receiver Operating Characteristic Curve, Models Predicting Secondary Outcomes of Interest
Columns: Hospital A, D1 derivation | Hospital A, V1 validation | Hospital V2, V2 validation
NOTE: Mann-Whitney (95% Wald confidence limits). Each outcome of interest was predicted by the patients' calculated probability of dying within 30 days and its logarithm. Details are provided in Appendix II. Abbreviations: ICU, intensive care unit; NA, not applicable.
Unplanned transfer to an ICU within the first 24 hours (for those not admitted to an ICU): 0.712 (0.690-0.734) | 0.735 (0.709-0.761) | NA
Resuscitation efforts for cardiopulmonary arrest: 0.709 (0.678-0.739) | 0.737 (0.700-0.775) | NA
ICU stay at some point during the hospitalization: 0.659 (0.652-0.666) | 0.663 (0.654-0.672) | 0.702 (0.682-0.722)
Intrahospital complication (condition not present on admission): 0.682 (0.676-0.689) | 0.624 (0.613-0.635) | 0.646 (0.628-0.664)
Palliative care status: 0.883 (0.875-0.891) | 0.887 (0.878-0.896) | 0.900 (0.888-0.912)
Death within hospitalization: 0.861 (0.852-0.870) | 0.875 (0.862-0.887) | 0.880 (0.866-0.893)
30-day readmission: 0.685 (0.679-0.692) | 0.685 (0.676-0.694) | 0.677 (0.665-0.689)
Death within 180 days: 0.890 (0.885-0.896) | 0.889 (0.882-0.896) | 0.873 (0.864-0.883)

DISCUSSION

The primary contribution of our work concerns the number and strength of associations between the probability of dying within 30 days and other events, and the implications for organizing the healthcare delivery model. We also add to the growing evidence that death within 30 days can be accurately predicted at the time of admission from demographic information, modest levels of diagnostic information, and clinical laboratory values. We developed a new prediction rule with excellent accuracy that compares well to a rule recently developed by the Kaiser Permanente system.[13, 14] Feasibility considerations are likely to be the ultimate determinant of which prediction rule a health system chooses.[13, 14, 29] An independent evaluation of the candidate rules applied to the same data is required to compare their accuracy.

These results suggest a context for the coordination of clinical care processes, although mortality risk is not the only domain health systems must address. For illustrative purposes, we will refer to the risk strata shown in Figure 2. After the decisions to admit the patient to the hospital and whether or not surgical intervention is needed, the next decision concerns the level and type of nursing care needed.[10] Recent studies continue to show challenges both with unplanned transfers to intensive care units[21] and care delivered that is consistently concordant with patient wishes.[6, 30] The level of risk for multiple adverse outcomes suggests stratum 1 patients would be the priority group for perfecting the placement and preference assessment process. Our institution is currently piloting an internal placement guideline recommending that nonpalliative patients in the top 2.5 percentile of mortality risk be placed initially in either an intensive or intermediate care unit to receive the potential benefit of higher nursing staffing levels.[31] However, mortality risk cannot be the only criterion used for placement, as demonstrated by its relatively weak association with overall ICU utilization. Our findings may reflect the role of unmeasured factors such as the need for mechanical ventilation, patient preference for comfort care, bed availability, change in patient condition after admission, and inconsistent application of admission criteria.[17, 21, 32, 33, 34]

After the placement decision, the team could decide if the usual level of monitoring, physician rounding, and care coordination would be adequate for the level of risk or whether an additional anticipatory approach is needed. The weak relationship between the risk of death and incidence of complications, although not a new finding,[35, 36] suggests routine surveillance activities need to be conducted on all patients regardless of risk to detect a complication, but that a rescue plan be developed in advance for high mortality risk patients, for example strata 1 and 2, in the event they should develop a complication.[36] Inclusion of the patient's risk strata as part of the routine hand‐off communication among hospitalists, nurses, and other team members could provide a succinct common alert for the likelihood of adverse events.

The 30‐day mortality risk also informs the transition care plan following hospitalization, given the strong association with death in 180 days and the persistent level of this risk (Figure 3). Again, communication of the risk status (stratum 1) to the team caring for the patient after the hospitalization provides a common reference for prognosis and level of attention needed. However, the prediction accuracy is not sufficient to refer high‐risk patients into hospice, but rather, to identify the high‐risk subset having the most urgent need to have their preferences for future end‐of‐life care understood and addressed. The weak relationship of mortality risk with 30‐day readmissions indicates that our rule would have a limited role in identifying readmission risk per se. Others have noted the difficulty in accurately predicting readmissions, most likely because the underlying causes are multifactorial.[37] Our results suggest that 1 dynamic for readmission is the risk of dying, and so the underlying causes of this risk should be addressed in the transition plan.

There are a number of limitations with our study. First, this rule was developed and validated on data from only 2 institutions, assembled retrospectively, with diagnostic information determined from administrative data. One cannot assume the accuracy will carry over to other institutions[29] or when there is diagnostic uncertainty at the time of admission. Second, the 30-day mortality risk should not be used as the sole criterion for determining the service intensity for individual patients because of issues with calibration, interpretation of risk, and confounding. The calibration curves (Figure 1) show the slight underprediction of the risk of dying for high-risk groups. Other studies have also noted problems with precise calibration in validation datasets.[13, 14] Caution is also needed in the interpretation of what it means to be at high risk. Most patients in stratum 1 were alive at 30 days; therefore, being at high risk is not a death sentence. Furthermore, the relative weights of the risk factors reflect (ie, are confounded by) the level of treatment rendered. Some deaths within the higher-risk percentiles undoubtedly occurred in patients choosing a palliative rather than a curative approach, perhaps partially explaining the slight underprediction of deaths. Conversely, the low mortality experienced by patients within the lower-risk strata may indicate the treatment provided was effective. Low mortality risk does not imply less care is needed.

A third limitation is that we have not defined the thresholds of risk that should trigger placement and care intensity, although we provide examples of how this could be done. Each institution will need to calibrate the thresholds and associated decision-making processes according to its own environment.[14] Interested readers can explore the sensitivity and specificity of various thresholds by using the tables in the Appendix (see the Supporting Information, Appendix II, in the online version of this article). Finally, we do not know if identifying the mortality risk on admission will lead to better outcomes.[19, 29]
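The threshold exploration an institution would perform can be sketched as follows (a generic Python illustration of the sensitivity/specificity trade-off at a candidate cutoff, not the Appendix II tables themselves).

```python
def sens_spec(risks, died, threshold):
    """Sensitivity and specificity when flagging patients whose
    predicted 30-day mortality is at or above the threshold."""
    tp = sum(1 for r, d in zip(risks, died) if d and r >= threshold)
    fn = sum(1 for r, d in zip(risks, died) if d and r < threshold)
    tn = sum(1 for r, d in zip(risks, died) if not d and r < threshold)
    fp = sum(1 for r, d in zip(risks, died) if not d and r >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: a high cutoff misses one death (lower sensitivity) but
# flags no survivors (perfect specificity).
print(sens_spec([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0], threshold=0.8))
```

Sweeping the threshold over a validation set traces out exactly the trade-off an institution must weigh when it sets its own placement and care-intensity triggers.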

CONCLUSIONS

Death within 30 days can be predicted with information known at the time of admission, and is associated with the risk of having other adverse events. We believe the probability of death can be used to define strata of risk that provide a succinct common reference point for the multidisciplinary team to anticipate the clinical course of subsets of patients and intervene with proportional intensity.

Acknowledgments

This work benefited from multiple conversations with Patricia Posa, RN, MSA, Elizabeth Van Hoek, MHSA, and the Redesigning Care Task Force of St. Joseph Mercy Hospital, Ann Arbor, Michigan.

Disclosure: Nothing to report.

Favorable health outcomes are more likely to occur when the healthcare team quickly identifies and responds to patients at risk.[1, 2, 3] However, the treatment process can break down during handoffs if the clinical condition and active issues are not well communicated.[4] Patients whose decline cannot be reversed also challenge the health team. Many are referred to hospice late,[5] and some do not receive the type of end‐of‐life care matching their preferences.[6]

Progress toward the elusive goal of more effective and efficient care might be made via an industrial engineering approach, mass customization, in which bundles of services are delivered based on the anticipated needs of subsets of patients.[7, 8] An underlying rationale is the frequent finding that a small proportion of individuals experiences the majority of the events of interest, commonly referenced as the Pareto principle.[7] Clinical prediction rules can help identify these high‐risk subsets.[9] However, as more condition‐specific rules become available, the clinical team faces logistical challenges when attempting to incorporate these into practice. For example, which team member will be responsible for generating the prediction and communicating the level of risk? What actions should follow for a given level of risk? What should be done for patients with conditions not addressed by an existing rule?

In this study, we present our rationale for health systems to implement a process for generating mortality predictions at the time of admission on most, if not all, adult patients as a context for the activities of the various clinical team members. Recent studies demonstrate that in‐hospital or 30‐day mortality can be predicted with substantial accuracy using information available at the time of admission.[10, 11, 12, 13, 14, 15, 16, 17, 18, 19] Relationships are beginning to be explored among the risk factors for mortality and other outcomes such as length of stay, unplanned transfers to intensive care units, 30‐day readmissions, and extended care facility placement.[10, 20, 21, 22] We extend this work by examining how a number of adverse events can be understood through their relationship with the risk of dying. We begin by deriving and validating a new mortality prediction rule using information feasible for our institution to use in its implementation.

METHODS

The prediction rule was derived from data on all inpatients (n = 56,003) 18 to 99 years old from St. Joseph Mercy Hospital, Ann Arbor from 2008 to 2009. This is a community‐based, tertiary‐care center. We reference derivation cases as D1, validation cases from the same hospital in the following year (2010) as V1, and data from a second hospital in 2010 as V2. The V2 hospital belonged to the same parent health corporation and shared some physician specialists with D1 and V1 but had separate medical and nursing staff.

The primary outcome predicted is 30-day mortality from the time of admission. We chose 30-day rather than in-hospital mortality to address concerns of potential confounding between the duration of hospital stay and the likelihood of dying in the hospital.[23] Risk factors were considered for inclusion in the prediction rule based on their prevalence, conceptual relevance, and univariable association with death (details provided in the Supporting Information, Appendix I and II, in the online version of this article). The types of risk factors considered were patient diagnoses as of the time of admission obtained from hospital administrative data and grouped by the 2011 Clinical Classification Software (http://www.hcupus.ahrq.gov/toolssoftware/ccs/ccs.jsp#download, accessed June 6, 2012), administrative data from previous hospitalizations within the health system in the preceding 12 months, and the worst value of clinical laboratory blood tests obtained within 30 days prior to the time of admission. When a given patient had missing values for the laboratory tests of interest, we imputed a normal value, assuming the clinician had not ordered these tests because he/she expected the patient would have normal results. The imputed normal values were derived from available results from patients discharged alive with short hospital stays (3 days) in 2007 to 2008. The datasets were built and analyzed using SAS versions 9.1 and 9.2 (SAS Institute, Inc., Cary, NC) and R (R Foundation for Statistical Computing, Vienna, Austria; http://www.R-project.org).
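As a minimal sketch of the normal-value imputation step described above (illustrative Python rather than the SAS/R pipeline actually used; the lab names and "normal" reference values below are invented):

```python
# Minimal sketch of the normal-value imputation described above. The study
# derived reference values from patients discharged alive after short
# stays; the lab names and reference values here are invented.
NORMAL_VALUES = {"albumin_min": 4.0, "bun_max": 14.0}  # assumed normals

def impute_normals(patient_labs):
    """Return labs with missing results replaced by assumed-normal values."""
    out = {}
    for test, normal in NORMAL_VALUES.items():
        value = patient_labs.get(test)
        out[test] = normal if value is None else value
    return out

# A patient with a measured albumin but no BUN result:
row = impute_normals({"albumin_min": 2.9, "bun_max": None})
```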

Prediction Rule Derivation Using D1 Dataset

Random forest procedures with a variety of variable importance measures were used with D1 data to reduce the number of potential predictor variables.[24] Model‐based recursive partitioning, a technique that combines features of multivariable logistic regression and classification and regression trees, was then used to develop the multivariable prediction model.[25, 26] Model building was done in R, employing functions provided as part of the randomForest and party packages. The final prediction rule consisted of 4 multivariable logistic regression models, each being specific to 1 of 4 possible population subgroups: females with/females without previous hospitalizations, and males with/males without previous hospitalizations. Each logistic regression model contains exactly the same predictor variables; however, the regression coefficients are subgroup specific. Therefore, the predicted probability of 30‐day mortality for a patient having a given set of predictor variables depends on the subgroup to which the patient is a member.
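The subgroup-specific structure of the final rule can be sketched as follows; this is an illustrative Python sketch (the actual models were fit in R), with only two predictors and entirely invented coefficients:

```python
import math

# Four logistic models sharing the same predictors but with
# subgroup-specific coefficients, keyed by (sex, previous hospitalization),
# mirroring the structure described above. All weights are invented.
COEFS = {
    ("F", True):  {"intercept": -4.0, "age": 0.030, "resp_failure": 1.1},
    ("F", False): {"intercept": -4.5, "age": 0.035, "resp_failure": 1.3},
    ("M", True):  {"intercept": -3.8, "age": 0.032, "resp_failure": 1.2},
    ("M", False): {"intercept": -4.3, "age": 0.036, "resp_failure": 1.4},
}

def predict_30day_mortality(sex, prior_hosp, age, resp_failure):
    """Probability of 30-day death from the subgroup's logistic model."""
    b = COEFS[(sex, prior_hosp)]
    z = b["intercept"] + b["age"] * age + b["resp_failure"] * resp_failure
    return 1.0 / (1.0 + math.exp(-z))

# The same risk-factor profile yields different predictions by subgroup.
p_f = predict_30day_mortality("F", True, age=80, resp_failure=1)
p_m = predict_30day_mortality("M", False, age=80, resp_failure=1)
```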

Validation, Discrimination, Calibration

The prediction rule was validated by generating a predicted probability of 30-day mortality for each patient in V1 and V2, using their observed risk factor information combined with the scoring weights (ie, regression coefficients) derived from D1, then comparing predicted vs actual outcomes. Discriminatory accuracy is reported as the area under the receiver operating characteristic (ROC) curve, which ranges from 0.5, indicating pure chance, to 1.0, indicating perfect prediction.[27] Values above 0.8 are often interpreted as indicating strong predictive relationships, values between 0.7 and 0.79 as modest, and values between 0.6 and 0.69 as weak.[28] Model calibration was tested in all datasets across 20 intervals representing the spectrum of mortality risk, by assessing whether or not the 95% confidence limits for the actual proportion of patients dying encompassed the mean predicted mortality for the interval. These 20 intervals were defined using 5-percentile increments of the probability of dying for D1. The use of intervals based on percentiles ensures similarity in the level of predicted risk within an interval for V1 and V2, while allowing the proportion of patients contained within that interval to vary across hospitals.
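The discrimination statistic used here has a simple Mann-Whitney interpretation: the probability that a randomly chosen patient who died was assigned a higher predicted risk than a randomly chosen survivor. A small illustrative Python sketch, on toy data:

```python
from itertools import product

def auc_mann_whitney(probs, outcomes):
    """Area under the ROC curve via the Mann-Whitney statistic: the chance
    that a randomly chosen death outranks a randomly chosen survivor,
    counting ties as half."""
    died = [p for p, y in zip(probs, outcomes) if y == 1]
    lived = [p for p, y in zip(probs, outcomes) if y == 0]
    wins = sum(1.0 if d > s else 0.5 if d == s else 0.0
               for d, s in product(died, lived))
    return wins / (len(died) * len(lived))

# Perfectly separated toy data -> AUC of 1.0; pure chance -> 0.5.
auc = auc_mann_whitney([0.9, 0.8, 0.3, 0.2, 0.1], [1, 1, 0, 0, 0])
```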

Relationships With Other Adverse Events

We then used each patient's calculated probability of 30-day mortality to predict the occurrence of other adverse events. We first derived scoring weights (ie, regression parameter estimates) from logistic regression models designed to relate each secondary outcome to the predicted 30-day mortality using D1 data. These scoring weights were then respectively applied to the V1 and V2 patients' predicted 30-day mortality to generate their predicted probabilities for: in-hospital death, a stay in an intensive care unit at some point during the hospitalization, the occurrence of a condition not present on admission (a complication, see the Supporting Information, Appendix I, in the online version of this article), palliative care status at the time of discharge (International Classification of Diseases, 9th Revision code V66.7), 30-day readmission, and death within 180 days (determined for the first hospitalization of the patient in the calendar year, using hospital administrative data and the Social Security Death Index). Additionally, for V1 patients but not V2, due to unavailability of data, we predicted the occurrence of an unplanned transfer to an intensive care unit (ICU) within the first 24 hours for those not initially admitted to the ICU, and resuscitative efforts for cardiopulmonary arrest (code blue, as determined from hospital paging records and resuscitation documentation, with the realization that some resuscitations within intensive care units might be undercaptured by this approach). Predicted vs actual outcomes were assessed using SAS version 9.2 by examining the areas under the receiver operating characteristic curves generated by the ROC statement of PROC LOGISTIC.
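As the note to Table 2 records, each secondary outcome was modeled from the patient's predicted 30-day mortality and its logarithm. A hedged Python sketch of that scoring step, with entirely invented coefficients:

```python
import math

# Each secondary outcome is predicted from the 30-day mortality probability
# and its logarithm, per the Table 2 note. The coefficients below are
# invented for illustration only.
def predict_secondary(p30, intercept, b_p, b_logp):
    """Probability of a secondary event given the 30-day mortality risk."""
    z = intercept + b_p * p30 + b_logp * math.log(p30)
    return 1.0 / (1.0 + math.exp(-z))

# An invented "in-hospital death" model applied at two risk levels; a
# higher 30-day risk should yield a higher secondary-event prediction.
low = predict_secondary(0.005, intercept=-1.0, b_p=5.0, b_logp=0.8)
high = predict_secondary(0.20, intercept=-1.0, b_p=5.0, b_logp=0.8)
```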

Implications for Care Redesign

To illustrate how the mortality prediction provides a context for organizing the work of multiple health professionals, we created 5 risk strata[10] based on quintiles of D1 mortality risk. To display the time frame in which the peak risk of death occurs, we plotted the unadjusted hazard function per stratum using SAS PROC LIFETEST.
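Using the D1 quintile boundaries reported in the Figure 1 caption (0.0033, 0.0108, 0.0247, and 0.0669), the stratum assignment can be sketched as follows; the numbering follows the discussion, with stratum 1 as the highest-risk quintile:

```python
# Assign one of 5 risk strata from the predicted 30-day mortality, using
# the D1 quintile cut points given in the Figure 1 caption. Stratum 1 is
# the highest-risk quintile, stratum 5 the lowest.
D1_QUINTILE_CUTS = (0.0669, 0.0247, 0.0108, 0.0033)  # descending

def assign_stratum(p30):
    """Return stratum 1 (highest risk) through 5 (lowest risk)."""
    for stratum, cut in enumerate(D1_QUINTILE_CUTS, start=1):
        if p30 > cut:
            return stratum
    return 5

stratum = assign_stratum(0.20)  # well above 0.0669, so stratum 1
```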

RESULTS

Table 1 displays the risk factors used in the 30‐day mortality prediction rule, their distribution in the populations of interest, and the frequency of the outcomes of interest. The derivation (D1) and validation (V1) populations were clinically similar; the patients of hospital V2 differed in the proportion of risk factors and outcomes. The scoring weights or parameter estimates for the risk factors are given in the Appendix (see Supporting Information, Appendix I, in the online version of this article).

Demographics, Risk Factors, and Outcomes

| | Hospital A: D1 Derivation, N = 56,003 | Hospital A: V1 Validation, N = 28,441 | Hospital V2: V2 Validation, N = 14,867 |
| --- | --- | --- | --- |
| The 24 risk factors used in the prediction rule | | | |
| Age in years, mean (standard deviation) | 59.8 (19.8) | 60.2 (19.8) | 66.4 (20.2) |
| Female | 33,185 (59.3%) | 16,992 (59.7%) | 8,935 (60.1%) |
| Respiratory failure on admission | 2,235 (4.0%) | 1,198 (4.2%) | 948 (6.4%) |
| Previous hospitalization | 19,560 (34.9%) | 10,155 (35.7%) | 5,925 (39.9%) |
| Hospitalization billed as an emergency admission[38] | 30,116 (53.8%) | 15,445 (54.3%) | 11,272 (75.8%) |
| Admitted to medicine service | 29,472 (52.6%) | 16,260 (57.2%) | 11,870 (79.8%) |
| Heart failure at the time of admission | 7,558 (13.5%) | 4,046 (14.2%) | 2,492 (16.8%) |
| Injury such as fractures or trauma at the time of admission | 7,007 (12.5%) | 3,612 (12.7%) | 2,205 (14.8%) |
| Sepsis at the time of admission | 2,278 (4.1%) | 1,025 (3.6%) | 850 (5.7%) |
| Current or past atrial fibrillation | 8,329 (14.9%) | 4,657 (16.4%) | 2,533 (17.0%) |
| Current or past metastatic cancer | 2,216 (4.0%) | 1,109 (3.9%) | 428 (2.9%) |
| Current or past cancer without metastases | 5,260 (9.34%) | 2,668 (9.4%) | 1,248 (8.4%) |
| Current or past history of leukemia or lymphoma | 1,025 (1.8%) | 526 (1.9%) | 278 (1.9%) |
| Current or past cognitive deficiency | 3,708 (6.6%) | 1,973 (6.9%) | 2,728 (18.4%) |
| Current or past history of other neurological conditions (such as Parkinson's disease, multiple sclerosis, epilepsy, coma, stupor, brain damage) | 4,671 (8.3%) | 2,537 (8.9%) | 1,606 (10.8%) |
| Maximum serum blood urea nitrogen (mg/dL), continuous | 21.9 (15.1) | 21.8 (15.1) | 25.9 (18.2) |
| Maximum white blood count (1,000/UL), continuous | 2.99 (4.00) | 3.10 (4.12) | 3.15 (3.81) |
| Minimum platelet count (1,000/UL), continuous | 240.5 (85.5) | 228.0 (79.6) | 220.0 (78.6) |
| Minimum hemoglobin (g/dL), continuous | 12.3 (1.83) | 12.3 (1.9) | 12.1 (1.9) |
| Minimum serum albumin (g/dL) <3.14, binary indicator | 11,032 (19.7%) | 3,848 (13.53%) | 2,235 (15.0%) |
| Minimum arterial pH <7.3, binary indicator | 1,095 (2.0%) | 473 (1.7%) | 308 (2.1%) |
| Minimum arterial pO2 (mm Hg) <85, binary indicator | 1,827 (3.3%) | 747 (2.6%) | 471 (3.2%) |
| Maximum serum troponin (ng/mL) >0.4, binary indicator | 6,268 (11.2%) | 1,154 (4.1%) | 2,312 (15.6%) |
| Maximum serum lactate (mEq/L) >4.0, binary indicator | 533 (1.0%) | 372 (1.3%) | 106 (0.7%) |
| Outcomes of interest | | | |
| 30-day mortality (primary outcome of interest) | 2,775 (5.0%) | 1,412 (5.0%) | 1,193 (8.0%) |
| In-hospital mortality | 1,392 (2.5%) | 636 (2.2%) | 467 (3.1%) |
| 180-day mortality (deaths/first hospitalization for patient that year) | 2,928/38,995 (7.5%) | 1,657/21,377 (7.8%) | 1,180/10,447 (11.3%) |
| Unplanned transfer to ICU within first 24 hours/number of patients with data not admitted to ICU | 434/46,647 (0.9%) | 276/25,920 (1.1%) | NA |
| Ever in ICU during hospitalization/those with ICU information available | 5,906/55,998 (10.6%) | 3,191/28,429 (11.2%) | 642/14,848 (4.32%) |
| Any complication | 6,768 (12.1%) | 2,447 (8.6%) | 868 (5.8%) |
| Cardiopulmonary arrest | 228 (0.4%) | 151 (0.5%) | NA |
| Patients discharged with palliative care V code | 1,151 (2.1%) | 962 (3.4%) | 340 (2.3%) |
| 30-day rehospitalization/patients discharged alive | 6,616/54,606 (12.1%) | 3,602/27,793 (13.0%) | 2,002/14,381 (13.9%) |

NOTE: Abbreviations: ICU, intensive care unit; NA, not applicable.

Predicting 30‐Day Mortality

The areas under the ROC (95% confidence interval [CI]) for the D1, V1, and V2 populations were 0.876 (95% CI, 0.870‐0.882), 0.885 (95% CI, 0.877‐0.893), and 0.883 (95% CI, 0.875‐0.892), respectively. The calibration curves for all 3 populations are shown in Figure 1. The overlap of symbols indicates that the level of predicted risk matched actual mortality for most intervals, with slight underprediction for those in the highest risk percentiles.

Figure 1
Calibration. The horizontal axis displays 20 intervals of risk, containing 5‐percentile increments of the predicted mortality based on the D1 population. The vertical axis displays the actual proportion of patients within the interval who died within 30 days. The cluster of 3 symbols represent the mean predicted chance of dying for the derivation and 2 validation populations, respectively. The crosshatches represent the actual proportion of patients within each interval who died, with the 95% binomial confidence limits represented by the length of the vertical bar. The 20 intervals (named for the highest percentile within the interval) with corresponding probabilities of death: 5th percentile (probability 0‐0.0008); 10th percentile (probability 0.0008‐0.0011); 15th percentile (probability 0.0011‐0.0021); 20 (0.0021‐0.0033); 25 (0.0033‐0.0049); 30 (0.0049‐0.0067); 35 (0.0067‐0.0087); 40 (0.0087‐0.0108); 45 (0.0108‐0.0134); 50 (0.0134‐0.0165); 55 (0.0165‐0.0201); 60 (0.0201‐0.0247); 65 (0.0247‐0.0308); 70 (0.0308‐0.0392); 75 (0.0392‐0.0503); 80 (0.0503‐0.0669); 85 (0.0669‐0.0916); 90 (0.0916‐0.1308); 95 (0.1308‐0.2186); 100 (0.2186‐1.0).

Example of Risk Strata

Figure 2 displays the relationship between the predicted probability of dying within 30 days and the outcomes of interest for V1, and illustrates the Pareto principle for defining high- and low-risk subgroups. Most of the 30-day deaths (74.7% of D1, 74.2% of V1, and 85.3% of V2) occurred in the small subset of patients with a predicted probability of death exceeding 0.067 (the top quintile of risk of D1, the top 18% of V1, and the top 29.8% of V2). In contrast, among patients with a predicted risk of 0.0033 or less, the mortality rate was 0.02% for the lowest quintile of risk in D1, 0.07% for the 19.3% of V1 patients at lowest risk, and 0% for the 9.7% of V2 patients at lowest risk. Figure 3 indicates that the risk of dying peaks within the first few days of the hospitalization. Moreover, those in the high-risk group remained at elevated risk relative to the lower-risk strata for at least 100 days.
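The Pareto-style summary above (the share of deaths captured by a high-risk threshold) can be computed as in this illustrative sketch, run here on toy data rather than the study populations:

```python
# Given predicted risks and observed deaths, report what fraction of
# patients exceed a risk threshold and what fraction of all deaths that
# subset accounts for (toy data; thresholds as in the text, e.g., 0.067).
def deaths_captured(probs, outcomes, threshold):
    flagged = [(p, y) for p, y in zip(probs, outcomes) if p > threshold]
    share_patients = len(flagged) / len(probs)
    total_deaths = sum(outcomes)
    share_deaths = (sum(y for _, y in flagged) / total_deaths
                    if total_deaths else 0.0)
    return share_patients, share_deaths

# Two of four toy patients exceed the threshold; one of the two deaths
# occurs in that flagged subset.
share_p, share_d = deaths_captured(
    [0.30, 0.20, 0.02, 0.01], [1, 0, 1, 0], threshold=0.067)
```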

Figure 2
Risk of outcomes within intervals of mortality risk (validation hospital V1). The curves for the other 2 populations (D1, V2) were similar (see the Supporting information, Appendix II, in the online version of this article). Examples of possible risk strata are indicated.
Figure 3
Instantaneous risk of death (hazard function) following hospital admission—validation hospital V1. For the sake of clarity, 5 ordinal categories of predicted risk are shown. The curves for the other 2 populations (D1, V2) were similar and are shown in Appendix II (see the Supporting Information in the online version of this article).

Relationships With Other Outcomes of Interest

The curves in Figure 2 represent the occurrence of adverse events. The rising slopes indicate that the risk for other events increases with the risk of dying within 30 days (for details and data for D1 and V2, see the Supporting Information, Appendix II, in the online version of this article). The strength of these relationships is quantified by the areas under the ROC curve (Table 2). The probability of 30-day mortality strongly predicted the occurrence of in-hospital death, palliative care status, and death within 180 days; modestly predicted having an unplanned transfer to an ICU within the first 24 hours of the hospitalization and undergoing resuscitative efforts for cardiopulmonary arrest; and weakly predicted intensive care unit use at some point in the hospitalization, occurrence of a condition not present on admission (complication), and being rehospitalized within 30 days.

Area Under the Receiver Operating Characteristic Curve for Models Predicting Secondary Outcomes of Interest

| Outcome | Hospital A: D1 Derivation | Hospital A: V1 Validation | Hospital V2: V2 Validation |
| --- | --- | --- | --- |
| Unplanned transfer to an ICU within the first 24 hours (for those not admitted to an ICU) | 0.712 (0.690-0.734) | 0.735 (0.709-0.761) | NA |
| Resuscitation efforts for cardiopulmonary arrest | 0.709 (0.678-0.739) | 0.737 (0.700-0.775) | NA |
| ICU stay at some point during the hospitalization | 0.659 (0.652-0.666) | 0.663 (0.654-0.672) | 0.702 (0.682-0.722) |
| Intrahospital complication (condition not present on admission) | 0.682 (0.676-0.689) | 0.624 (0.613-0.635) | 0.646 (0.628-0.664) |
| Palliative care status | 0.883 (0.875-0.891) | 0.887 (0.878-0.896) | 0.900 (0.888-0.912) |
| Death within hospitalization | 0.861 (0.852-0.870) | 0.875 (0.862-0.887) | 0.880 (0.866-0.893) |
| 30-day readmission | 0.685 (0.679-0.692) | 0.685 (0.676-0.694) | 0.677 (0.665-0.689) |
| Death within 180 days | 0.890 (0.885-0.896) | 0.889 (0.882-0.896) | 0.873 (0.864-0.883) |

NOTE: Values are Mann-Whitney areas (95% Wald confidence limits). Each outcome of interest was predicted by the patients' calculated probability of dying within 30 days and its logarithm. Details are provided in Appendix II. Abbreviations: ICU, intensive care unit; NA, not applicable.

DISCUSSION

The primary contribution of our work concerns the number and strength of associations between the probability of dying within 30 days and other events, and the implications for organizing the healthcare delivery model. We also add to the growing evidence that death within 30 days can be accurately predicted at the time of admission from demographic information, modest levels of diagnostic information, and clinical laboratory values. We developed a new prediction rule with excellent accuracy that compares well to a rule recently developed by the Kaiser Permanente system.[13, 14] Feasibility considerations are likely to be the ultimate determinant of which prediction rule a health system chooses.[13, 14, 29] An independent evaluation of the candidate rules applied to the same data is required to compare their accuracy.

These results suggest a context for the coordination of clinical care processes, although mortality risk is not the only domain health systems must address. For illustrative purposes, we will refer to the risk strata shown in Figure 2. After the decisions to admit the patient to the hospital and whether or not surgical intervention is needed, the next decision concerns the level and type of nursing care needed.[10] Recent studies continue to show challenges both with unplanned transfers to intensive care units[21] and care delivered that is consistently concordant with patient wishes.[6, 30] The level of risk for multiple adverse outcomes suggests stratum 1 patients would be the priority group for perfecting the placement and preference assessment process. Our institution is currently piloting an internal placement guideline recommending that nonpalliative patients in the top 2.5 percentile of mortality risk be placed initially in either an intensive or intermediate care unit to receive the potential benefit of higher nursing staffing levels.[31] However, mortality risk cannot be the only criterion used for placement, as demonstrated by its relatively weak association with overall ICU utilization. Our findings may reflect the role of unmeasured factors such as the need for mechanical ventilation, patient preference for comfort care, bed availability, change in patient condition after admission, and inconsistent application of admission criteria.[17, 21, 32, 33, 34]

After the placement decision, the team could decide if the usual level of monitoring, physician rounding, and care coordination would be adequate for the level of risk or whether an additional anticipatory approach is needed. The weak relationship between the risk of death and incidence of complications, although not a new finding,[35, 36] suggests routine surveillance activities need to be conducted on all patients regardless of risk to detect a complication, but that a rescue plan be developed in advance for high mortality risk patients, for example strata 1 and 2, in the event they should develop a complication.[36] Inclusion of the patient's risk strata as part of the routine hand‐off communication among hospitalists, nurses, and other team members could provide a succinct common alert for the likelihood of adverse events.

The 30-day mortality risk also informs the transition care plan following hospitalization, given the strong association with death in 180 days and the persistent level of this risk (Figure 3). Again, communication of the risk status (stratum 1) to the team caring for the patient after the hospitalization provides a common reference for prognosis and level of attention needed. However, the prediction accuracy is not sufficient to justify referring high-risk patients to hospice; rather, it identifies the high-risk subset having the most urgent need to have their preferences for future end-of-life care understood and addressed. The weak relationship of mortality risk with 30-day readmissions indicates that our rule would have a limited role in identifying readmission risk per se. Others have noted the difficulty in accurately predicting readmissions, most likely because the underlying causes are multifactorial.[37] Our results suggest that 1 dynamic for readmission is the risk of dying, and so the underlying causes of this risk should be addressed in the transition plan.

There are a number of limitations with our study. First, this rule was developed and validated on data from only 2 institutions, assembled retrospectively, with diagnostic information determined from administrative data. One cannot assume the accuracy will carry over to other institutions[29] or when there is diagnostic uncertainty at the time of admission. Second, the 30-day mortality risk should not be used as the sole criterion for determining the service intensity for individual patients because of issues with calibration, interpretation of risk, and confounding. The calibration curves (Figure 1) show the slight underprediction of the risk of dying for high-risk groups. Other studies have also noted problems with precise calibration in validation datasets.[13, 14] Caution is also needed in the interpretation of what it means to be at high risk. Most patients in stratum 1 were alive at 30 days; therefore, being at high risk is not a death sentence. Furthermore, the relative weights of the risk factors reflect (ie, are confounded by) the level of treatment rendered. Some deaths within the higher-risk percentiles undoubtedly occurred in patients choosing a palliative rather than a curative approach, perhaps partially explaining the slight underprediction of deaths. Conversely, the low mortality experienced by patients within the lower-risk strata may indicate the treatment provided was effective. Low mortality risk does not imply less care is needed.

A third limitation is that we have not defined the thresholds of risk that should trigger placement and care intensity, although we provide examples of how this could be done. Each institution will need to calibrate the thresholds and associated decision-making processes according to its own environment.[14] Interested readers can explore the sensitivity and specificity of various thresholds by using the tables in the Appendix (see the Supporting Information, Appendix II, in the online version of this article). Finally, we do not know if identifying the mortality risk on admission will lead to better outcomes.[19, 29]

CONCLUSIONS

Death within 30 days can be predicted with information known at the time of admission, and is associated with the risk of having other adverse events. We believe the probability of death can be used to define strata of risk that provide a succinct common reference point for the multidisciplinary team to anticipate the clinical course of subsets of patients and intervene with proportional intensity.

Acknowledgments

This work benefited from multiple conversations with Patricia Posa, RN, MSA, Elizabeth Van Hoek, MHSA, and the Redesigning Care Task Force of St. Joseph Mercy Hospital, Ann Arbor, Michigan.

Disclosure: Nothing to report.

References
  1. Brodie BR, Stuckey TD, Wall TC, et al. Importance of time to reperfusion for 30‐day and late survival and recovery of left ventricular function after primary angioplasty for acute myocardial infarction. J Am Coll Cardiol. 1998;32:13121319.
  2. Rivers E, Nguyen B, Havstad S, et al. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345:13681377.
  3. ATLANTIS, ECASS, NINDS rt‐PA Study Group Investigators. Association of outcome with early stroke treatment: pooled analysis of ATLANTIS, ECASS, and NINDS rt‐PA stroke trials. Lancet. 2004;363:768774.
  4. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34:563570.
  5. National Hospice and Palliative Care Organization. NHPCO facts and figures: hospice care in America 2010 Edition. Available at: http://www.nhpco.org. Accessed October 3,2011.
  6. Mack JW, Weeks JC, Wright AA, Block SD, Prigerson HG. End‐of‐life discussions, goal attainment, and distress at the end of life: predictors and outcomes of receipt of care consistent with preferences. J Clin Oncol. 2010;28:12031208.
  7. Committee on Quality of Health Care in America, Institute of Medicine (IOM).Crossing the Quality Chasm: A New Health System for the 21st Century.Washington, DC:National Academies Press;2001.
  8. Levy MM, Dellinger RP, Townsend SR, et al. The surviving sepsis campaign: results of an international guideline‐based performance improvement program targeting severe sepsis. Intensive Care Med. 2010;36:222231.
  9. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low‐risk patients with community‐acquired pneumonia. N Engl J Med. 1997;336:243250.
  10. Kellett J, Deane B. The simple clinical score predicts mortality for 30 days after admission to an acute medical unit. Q J Med. 2006;99:771781.
  11. Pine M, Jordan HS, Elixhauser A, et al. Enhancement of claims data to improve risk adjustment of hospital mortality. JAMA. 2007;297:7176.
  12. Tabak YP, Johannes RS, Silber JH. Using automated clinical data for risk adjustment. Med Care. 2007;45:789805.
  13. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk‐adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232239.
  14. Walraven C, Escobar GJ, Green JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798803.
  15. Silke B, Kellett J, Rooney T, Bennett K, O'Riordan D. An improved medical admissions risk system using multivariable fractional polynomial logistic regression modeling. Q J Med. 2010;103:2332.
  16. Brabrand M, Folkestad L, Clausen NG, Knudsen T, Hallas J. Risk scoring systems for adults admitted to the emergency department: a systematic review. Scand J Trauma Resusc Emerg Med. 2010;18:8.
  17. Wong J, Taljaard M, Forster AJ, Escobar GJ, von Walraven C. Derivation and validation of a model to predict daily risk of death in hospital. Med Care. 2011;49:734743.
  18. Asadollahi K, Hasting IM, Gill GV, Beeching NJ. Prediction of hospital mortality from admission laboratory data and patient age: a simple model. Emerg Med Australas. 2011;23:354363.
  19. Siontis GCM, Tzoulaki I, Ioannidis JPA. Predicting death: an empirical evaluation of predictive tools for mortality. Arch Intern Med. 2011;171:17211726.
  20. Liu V, Kipnis P, Gould MK, Escobar GJ. Length of stay predictions: improvements through the use of automated laboratory and comorbidity variables. Med Care. 2010;48:739744.
  21. Escobar GJ, Greene JD, Gardner MN, Marelich GP, Quick B, Kipnis P. Intra‐hospital transfers to a higher level of care: contribution to total hospital and intensive care unit (ICU) mortality and length of stay (LOS). J Hosp Med. 2011;6:7480.
  22. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30‐day readmission or death using electronic medical record data. Med Care. 2010;48:981988.
  23. Baker DW, Einstadter D, Thomas CL, Husak SS, Gordon NH, Cebul RD. Mortality trends during a program that publicly reported hospital performance. Med Care. 2002;40:879890.
  24. Liaw A, Wiener M. Classification and regression by randomForest. R News. 2002;2:1822.
  25. Zeileis A, Hothorn T, Hornik K. Model‐based recursive partitioning. J Comput Graph Stat. 2008;17:492514.
  26. Breiman L, Friedman JH, Olshen RA, Stone CJ.Classification and Regression Trees.Belmont, CA:Wadsworth Inc.,1984.
  27. Harrell FE, Califf RM, Pryor DB, Lee KL, Rosati RA. Evaluating the yield of medical tests. JAMA. 1982;247:25432546.
  28. Ohman EM, Granger CB, Harrington RA, Lee KL. Risk stratification and therapeutic decision making in acute coronary syndromes. JAMA. 2000;284:876878.
  29. Grady D, Berkowitz SA. Why is a good clinical prediction rule so hard to find?Arch Intern Med. 2011;171:17011702.
  30. Silveira MJ, Kim SYH, Langa KM. Advance directives and outcomes of surrogate decision making before death. N Engl J Med. 2010;362:12111218.
  31. Needleman J, Buerhaus P, Pankratz S, Leibson CL, Stevens SR, Harris M. Nurse staffing and inpatient hospital mortality. N Engl J Med. 2011;364:10371045.
  32. Simchen E, Sprung CL, Galai N, et al. Survival of critically ill patients hospitalized in and out of intensive care. Crit Care Med. 2007;35:449457.
  33. Walter KL, Siegler M, Hall JB. How decisions are made to admit patients to medical intensive care units (MICUs): a survey of MICU directors at academic medical centers across the United States. Crit Care Med. 2008;36:414420.
  34. Litvak E, Pronovost P. Rethinking rapid response teams. JAMA. 2010;204:13751376.
  35. Silber JH, Williams SV, Krakauer H, Schwartz JS. Hospital and patient characteristics associated with death after surgery: a study of adverse occurrence and failure to rescue. Med Care. 1992;30:615629.
  36. Ghaferi AA, Birkmeyer JD, Dimick JB. Variation in hospital mortality associated with inpatient surgery. N Engl J Med. 2009;361:13681375.
  37. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306:16881698.
  38. Department of Health and Human Services, Centers for Medicare and Medicaid Services, CMS Manual System, Pub 100–04 Medicare Claims Processing, November 3, 2006. Available at: http://www. cms.gov/Regulations‐and‐Guidance/Guidance/Transmittals/Downloads/R1104CP.pdf. Accessed September 5,2012.
References
  1. Brodie BR, Stuckey TD, Wall TC, et al. Importance of time to reperfusion for 30-day and late survival and recovery of left ventricular function after primary angioplasty for acute myocardial infarction. J Am Coll Cardiol. 1998;32:1312-1319.
  2. Rivers E, Nguyen B, Havstad S, et al. Early goal-directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345:1368-1377.
  3. ATLANTIS, ECASS, NINDS rt-PA Study Group Investigators. Association of outcome with early stroke treatment: pooled analysis of ATLANTIS, ECASS, and NINDS rt-PA stroke trials. Lancet. 2004;363:768-774.
  4. Kitch BT, Cooper JB, Zapol WM, et al. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34:563-570.
  5. National Hospice and Palliative Care Organization. NHPCO facts and figures: hospice care in America, 2010 edition. Available at: http://www.nhpco.org. Accessed October 3, 2011.
  6. Mack JW, Weeks JC, Wright AA, Block SD, Prigerson HG. End-of-life discussions, goal attainment, and distress at the end of life: predictors and outcomes of receipt of care consistent with preferences. J Clin Oncol. 2010;28:1203-1208.
  7. Committee on Quality of Health Care in America, Institute of Medicine (IOM). Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academies Press; 2001.
  8. Levy MM, Dellinger RP, Townsend SR, et al. The surviving sepsis campaign: results of an international guideline-based performance improvement program targeting severe sepsis. Intensive Care Med. 2010;36:222-231.
  9. Fine MJ, Auble TE, Yealy DM, et al. A prediction rule to identify low-risk patients with community-acquired pneumonia. N Engl J Med. 1997;336:243-250.
  10. Kellett J, Deane B. The simple clinical score predicts mortality for 30 days after admission to an acute medical unit. Q J Med. 2006;99:771-781.
  11. Pine M, Jordan HS, Elixhauser A, et al. Enhancement of claims data to improve risk adjustment of hospital mortality. JAMA. 2007;297:71-76.
  12. Tabak YP, Johannes RS, Silber JH. Using automated clinical data for risk adjustment. Med Care. 2007;45:789-805.
  13. Escobar GJ, Greene JD, Scheirer P, Gardner MN, Draper D, Kipnis P. Risk-adjusting hospital inpatient mortality using automated inpatient, outpatient, and laboratory databases. Med Care. 2008;46:232-239.
  14. van Walraven C, Escobar GJ, Greene JD, Forster AJ. The Kaiser Permanente inpatient risk adjustment methodology was valid in an external patient population. J Clin Epidemiol. 2010;63:798-803.
  15. Silke B, Kellett J, Rooney T, Bennett K, O'Riordan D. An improved medical admissions risk system using multivariable fractional polynomial logistic regression modeling. Q J Med. 2010;103:23-32.
  16. Brabrand M, Folkestad L, Clausen NG, Knudsen T, Hallas J. Risk scoring systems for adults admitted to the emergency department: a systematic review. Scand J Trauma Resusc Emerg Med. 2010;18:8.
  17. Wong J, Taljaard M, Forster AJ, Escobar GJ, van Walraven C. Derivation and validation of a model to predict daily risk of death in hospital. Med Care. 2011;49:734-743.
  18. Asadollahi K, Hastings IM, Gill GV, Beeching NJ. Prediction of hospital mortality from admission laboratory data and patient age: a simple model. Emerg Med Australas. 2011;23:354-363.
  19. Siontis GCM, Tzoulaki I, Ioannidis JPA. Predicting death: an empirical evaluation of predictive tools for mortality. Arch Intern Med. 2011;171:1721-1726.
  20. Liu V, Kipnis P, Gould MK, Escobar GJ. Length of stay predictions: improvements through the use of automated laboratory and comorbidity variables. Med Care. 2010;48:739-744.
  21. Escobar GJ, Greene JD, Gardner MN, Marelich GP, Quick B, Kipnis P. Intra-hospital transfers to a higher level of care: contribution to total hospital and intensive care unit (ICU) mortality and length of stay (LOS). J Hosp Med. 2011;6:74-80.
  22. Amarasingham R, Moore BJ, Tabak YP, et al. An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data. Med Care. 2010;48:981-988.
  23. Baker DW, Einstadter D, Thomas CL, Husak SS, Gordon NH, Cebul RD. Mortality trends during a program that publicly reported hospital performance. Med Care. 2002;40:879-890.
  24. Liaw A, Wiener M. Classification and regression by randomForest. R News. 2002;2:18-22.
  25. Zeileis A, Hothorn T, Hornik K. Model-based recursive partitioning. J Comput Graph Stat. 2008;17:492-514.
  26. Breiman L, Friedman JH, Olshen RA, Stone CJ. Classification and Regression Trees. Belmont, CA: Wadsworth Inc.; 1984.
  27. Harrell FE, Califf RM, Pryor DB, Lee KL, Rosati RA. Evaluating the yield of medical tests. JAMA. 1982;247:2543-2546.
  28. Ohman EM, Granger CB, Harrington RA, Lee KL. Risk stratification and therapeutic decision making in acute coronary syndromes. JAMA. 2000;284:876-878.
  29. Grady D, Berkowitz SA. Why is a good clinical prediction rule so hard to find? Arch Intern Med. 2011;171:1701-1702.
  30. Silveira MJ, Kim SYH, Langa KM. Advance directives and outcomes of surrogate decision making before death. N Engl J Med. 2010;362:1211-1218.
  31. Needleman J, Buerhaus P, Pankratz S, Leibson CL, Stevens SR, Harris M. Nurse staffing and inpatient hospital mortality. N Engl J Med. 2011;364:1037-1045.
  32. Simchen E, Sprung CL, Galai N, et al. Survival of critically ill patients hospitalized in and out of intensive care. Crit Care Med. 2007;35:449-457.
  33. Walter KL, Siegler M, Hall JB. How decisions are made to admit patients to medical intensive care units (MICUs): a survey of MICU directors at academic medical centers across the United States. Crit Care Med. 2008;36:414-420.
  34. Litvak E, Pronovost P. Rethinking rapid response teams. JAMA. 2010;304:1375-1376.
  35. Silber JH, Williams SV, Krakauer H, Schwartz JS. Hospital and patient characteristics associated with death after surgery: a study of adverse occurrence and failure to rescue. Med Care. 1992;30:615-629.
  36. Ghaferi AA, Birkmeyer JD, Dimick JB. Variation in hospital mortality associated with inpatient surgery. N Engl J Med. 2009;361:1368-1375.
  37. Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306:1688-1698.
  38. Department of Health and Human Services, Centers for Medicare and Medicaid Services. CMS Manual System, Pub 100-04 Medicare Claims Processing, November 3, 2006. Available at: http://www.cms.gov/Regulations-and-Guidance/Guidance/Transmittals/Downloads/R1104CP.pdf. Accessed September 5, 2012.
Issue
Journal of Hospital Medicine - 8(5)
Page Number
229-235
Display Headline
Mortality predictions on admission as a context for organizing care activities
Copyright © 2012 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Mark E. Cowen, MD, SM, Quality Institute, St. Joseph Mercy Hospital, 5333 McAuley Dr., Suite 3112, Ypsilanti, MI 48197. E-mail: cowenm@trinity-health.org. Telephone: 734-712-8776; Fax: 734-712-8651