Zhenqiu Lin, PhD
Yale New Haven Health Services Corporation/The Center for Outcomes Research and Evaluation, New Haven, Connecticut
Section of Cardiovascular Medicine, Yale University School of Medicine, New Haven, Connecticut

Analysis of Hospital Resource Availability and COVID-19 Mortality Across the United States


The COVID-19 pandemic is a crisis of mismatch between resources and infection burden. There is extraordinary heterogeneity across time and geography in the pandemic impact, with hospitals in New York City initially inundated while hospitals in major urban areas of California were comparatively quiet. Efforts to “flatten the curve” are intended to improve outcomes by reducing health system overload.1 In the case of hospital-based care, health systems’ primary resources include emergency and critical care bed and staff capacity.

Prior work has documented wide variability in intensive care capacity across the United States and hypothesized that even moderate disease outbreaks could overwhelm hospital referral regions (HRRs).2,3 Various simulations of outbreaks suggested that thousands of deaths are potentially preventable depending on the health system’s response,4 although the degree to which resource limitations have contributed to mortality during this COVID-19 pandemic has yet to be explored. The objective of this analysis was to examine the association between hospital resources and COVID-19 deaths amongst HRRs in the United States in the period from March 1 to July 26, 2020.

METHODS

Data

This was an analysis of the American Hospital Association Annual Survey Database from 2017 and 2018, including hospital resource variables such as total hospital beds, hospitalists, intensive care beds, intensivists, emergency physicians, and nurses.5 The analysis was limited to general medical and surgical hospitals capable of providing acute care services, defined as those reporting at least 500 emergency department visits in 2018. Where data were missing on analysis variables (26.0% missing overall), values were drawn from the 2017 survey results from the same site where available (reducing missingness to 23.8%), and the remainder were imputed with multivariate imputation by chained equations. An identical analysis without imputation was performed as a sensitivity analysis and showed a similar pattern of results. Total resources were tabulated within each HRR, and hospital resources per COVID-19 case were calculated. HRRs are a geographic category devised to represent regional health care markets, and each includes hospital sites performing major procedures.3 They were the focus of the analysis because they may represent a meaningful geographic division of hospital-based resources. COVID-19 case and death counts (as of July 26, 2020) were drawn from publicly available county-level data curated by the New York Times from state and local governments and health departments nationwide,6 separated by month (ie, March, April, May, June, and July). Data on New York City were available only in aggregate (rather than by borough); cases and deaths were therefore apportioned to the three HRRs covering New York City in proportion to population. To adjust for the lag between COVID-19 cases and deaths,7,8 we offset deaths 2 weeks into the future, so that the April COVID-19 death count for a given HRR included deaths occurring over the month beginning 2 weeks after the start of April, and so on.
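The 2-week case-to-death offset can be sketched as follows. This is an illustration of the windowing logic only, not the authors' code (the analysis was done in Stata and R), and it assumes month windows anchored to the first of the month:

```python
from datetime import date, timedelta

# Sketch of the 2-week case-to-death lag adjustment: the "April" death
# window for an HRR starts 2 weeks after April 1 and runs for one month.
# Assumes month_start is the first of a month, so day-of-month arithmetic
# below is always valid.
def death_window(month_start, lag_days=14):
    start = month_start + timedelta(days=lag_days)
    # "one month" = same day-of-month in the following month
    if start.month == 12:
        end = start.replace(year=start.year + 1, month=1)
    else:
        end = start.replace(month=start.month + 1)
    return start, end
```

For example, April 2020 cases are paired with deaths from April 15 through May 15.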

Analysis

We estimated Poisson regressions for the outcome of COVID-19 death count in each HRR and month, with one model for each of our six hospital-based resource variables. The offset (exposure) variable was COVID-19 case count. To allow for varying effects of hospital resources on deaths by month (ie, in anticipation that health systems might evolve in response to the pandemic over time), each model included the hospital-based resource, indicator variables for month, and their interaction. Standard errors were adjusted for clustering within HRR. We report resultant incidence rate ratios (IRRs) with 95% CIs, and we report these as statistically significant at the 5% level only after adjustment for multiple comparisons across our six hospital-resource variables using the conservative Bonferroni correction. The pseudo-R2 for each of these six models is also reported to summarize the amount of variation in deaths explained. For our model with ICU beds per COVID-19 case, we performed postestimation prediction of the number of deaths by HRR under the counterfactual in which HRRs with fewer than the average number of ICU beds per COVID-19 case in April instead had the average observed number, which functioned as a measure of early excess deaths potentially related to resource limitations. The study was classified as exempt by the Institutional Review Board at the Yale School of Medicine, New Haven, Connecticut. Analyses were conducted in Stata 15 (StataCorp LLC) and R.
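A Poisson regression with a case-count offset models expected deaths as cases × exp(b0 + b1 × resource), so exp(b1) is the IRR per one-unit increase in the resource variable. The minimal sketch below (pure-Python Newton-Raphson on hypothetical data; the authors used Stata and R, and this omits month interactions and clustered standard errors) shows the offset mechanics:

```python
import math

# Minimal Poisson regression with a log(cases) offset, fit by
# Newton-Raphson. Illustrative sketch only; variable names and data
# are hypothetical. Model: E[deaths] = cases * exp(b0 + b1 * x),
# so IRR = exp(b1) per one-unit increase in the resource x.
def fit_poisson_offset(deaths, resource, cases, iters=50):
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0  # score vector and 2x2 information
        for y, x, n in zip(deaths, resource, cases):
            mu = n * math.exp(b0 + b1 * x)  # expected deaths for this HRR
            g0 += y - mu
            g1 += (y - mu) * x
            h00 += mu
            h01 += mu * x
            h11 += mu * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det  # Newton step: H^{-1} * gradient
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical HRR-level data: case counts, ICU beds per case, and deaths
# generated exactly from known coefficients so the fit is checkable.
cases = [100.0, 400.0, 250.0, 900.0, 60.0, 300.0]
beds = [0.25, 0.05, 0.10, 0.02, 0.40, 0.15]
true_b0, true_b1 = -3.0, -1.64
deaths = [n * math.exp(true_b0 + true_b1 * x) for n, x in zip(cases, beds)]

b0, b1 = fit_poisson_offset(deaths, beds, cases)
irr = math.exp(b1)  # rate ratio per additional ICU bed per COVID-19 case
```

Here exp(-1.64) ≈ 0.194, matching the scale of the ICU-bed IRR reported in the Results; in practice one would use a packaged estimator with robust clustered variance rather than this hand-rolled fit.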

RESULTS

A total of 4,453 hospitals across 306 HRRs were included and linked to 2,827 county-level COVID-19 case and death counts in each of 5 months (March through July 2020). The median HRR in our analysis included 14 hospitals, with a maximum of 76 hospitals (Los Angeles, California) and a minimum of 1 (Longview, Texas). Among HRRs, 206 (67.3%) had experienced caseloads exceeding 20 per 10,000 population, while 85 (27.8%) had experienced greater than 100 per 10,000 population in the peak month during the study period. The Table depicts results of each of six Poisson distribution regression models, with the finding that greater number of ICU beds (IRR, 0.194; 95% CI, 0.076-0.491), general medical/surgical beds (IRR, 0.800; 95% CI, 0.696-0.920), and nurses (IRR, 0.927; 95% CI, 0.888-0.967) per COVID-19 case in April were statistically significantly associated with reduced deaths.

Table. IRRs for Hospital-Based Resources on COVID-19 Deaths in March Through July 2020

The model including ICU beds per COVID-19 case had the largest pseudo-R2 at 0.6018, which suggests that ICU bed availability explains the most variation in death count among the hospital resource variables analyzed. The incidence rate ratio in this model implies that, for each additional ICU bed per COVID-19 case (a one-unit increase in that variable), the incidence rate of death in April falls to roughly one-fifth of its baseline (IRR, 0.194), an approximately 80% decrease. The mean value among HRRs in April was 0.25 ICU beds per case (one ICU bed for every four COVID-19 cases), but it was as low as 0.01 to 0.005 in hard-hit areas (one ICU bed for every 100 to 200 COVID-19 cases). The early excess deaths observed in April were not observed in later months. The magnitude of this effect can be summarized as follows: if the 152 HRRs with fewer than the mean number of ICU beds per COVID-19 case in April had instead had the mean number (one ICU bed for every four COVID-19 cases), our model estimates that there would have been 15,571 fewer deaths that month. The HRRs with the largest number of early excess deaths were Manhattan in New York City (1,466), the Bronx in New York City (1,315), Boston, Massachusetts (1,293), Philadelphia, Pennsylvania (955), Hartford, Connecticut (682), Detroit, Michigan (499), and Camden, New Jersey (484). The Figure depicts HRRs in the United States with early excess deaths by this measure in April.
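Under a fitted Poisson model, raising an HRR's ICU beds per case from x to the April mean rescales expected deaths by exp(b1 × (mean − x)), ie, IRR^(mean − x). The sketch below illustrates that counterfactual arithmetic with the published IRR of 0.194 and mean of 0.25; the HRR values are hypothetical, and this is an approximation of the authors' postestimation prediction, not their code:

```python
import math

# Illustrative counterfactual: predicted deaths if an HRR with
# below-average ICU beds per COVID-19 case instead had the April
# average. Uses the published point estimates; input values below
# are hypothetical, not the study data.
IRR_ICU = 0.194           # per additional ICU bed per case (from the Table)
MEAN_BEDS_PER_CASE = 0.25 # April mean: one ICU bed per four cases

def counterfactual_deaths(observed_deaths, beds_per_case):
    if beds_per_case >= MEAN_BEDS_PER_CASE:
        return observed_deaths  # only resource-constrained HRRs are adjusted
    shift = MEAN_BEDS_PER_CASE - beds_per_case
    # E[deaths] scales by exp(b1 * shift) = IRR ** shift
    return observed_deaths * IRR_ICU ** shift

# e.g. a hard-hit HRR with 0.01 beds per case and 2,000 observed deaths:
cf = counterfactual_deaths(2000, 0.01)
excess = 2000 - cf  # early excess deaths attributed to the bed shortfall
```

Summing such excess-death estimates across the 152 below-mean HRRs is what yields a month-level total of the kind reported above.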

Figure. April COVID-19 Excess Deaths Estimated in Model of ICU Bed Availability

DISCUSSION

We found significant associations between the availability of hospital-based resources and COVID-19 deaths in April 2020. This observation was consistent across measures of both hospital bed and staff capacity, although not statistically significant in all cases. It provides empirical evidence in support of a simulation study by Branas et al, available as a preprint, showing the potential for thousands of excess deaths related to a lack of available resources.4 Interestingly, the relationship between hospital-based resources per COVID-19 case and death count is not seen in May, June, or July. This may be because hospitals and health systems rapidly adapted to pandemic demands9 by shifting resources or reorganizing existing infrastructure to free up beds and personnel.

Our findings underscore the importance of analyses that address heterogeneity in health system response over time and across different geographic areas. That the relationship is not seen after the early pandemic period, when hospitals and health systems were most overwhelmed, suggests that health systems and communities were able to adapt. Importantly, this work does not address the likely complex relationships among hospital resources and outcomes (for example, the benefit of ICU bed availability is likely limited when there are insufficient intensivists and nurses). These complexities should be a focus of future work. Furthermore, hospital resource flexibility, community efforts to slow transmission, and improvements in testing availability and the management of COVID-19 among hospitalized patients may all play a role in attenuating the relationship between baseline resource limitations and outcomes for patients with COVID-19.

These results merit further granular studies to examine specific hospital resources and observed variation in outcomes. Prior literature has linked inpatient capacity—variously defined as high census, acuity, turnover, or delayed admission—to outcomes including mortality among patients with stroke, among those with acute coronary syndrome, and among those requiring intensive care.10 Literature from Italy’s experience shows there was large variation in the case fatality rate among regions of Northern Italy and argues this was partially due to hospital resource limitations.11 Future work can also address whether just-in-time resource mobilization, such as temporary ICU expansion, physician cross-staffing, telemedicine, and dedicated units for COVID-19 patients, attenuated the impact of potential hospital resource scarcity on outcomes.

The present analysis is limited by the quality of the data. There is likely variation in available COVID-19 testing across HRRs: areas with larger outbreaks early on may generally have tested a smaller, sicker proportion of population-level cases than did those with smaller outbreaks, although this effect may be reversed if larger HRRs in urban areas have health systems and public health departments more inclined toward, or more capable of, doing more testing. Furthermore, deaths related to COVID-19 are likely related to community-based factors, including nonhealthcare resources and underlying population characteristics, that likely correlate with the availability of hospital-based resources within HRRs. Some have called into question whether, a priori, we should expect hospital-based capacity to be an important driver of mortality at all,12 arguing that, when critical care capacity is exceeded, resources may be efficiently reallocated away from patients who are least likely to benefit. Because we used American Hospital Association data, our snapshot of hospital resources is not limited to critical care capacity, and there could be alternative explanations for situations in which mortality for both COVID-19 and non–COVID-19 patients is lower and hospital resources are better matched with demand. For example, patients may seek care earlier in their disease course (whether for COVID-19 or otherwise)13 if their local emergency department is not thought to be overwhelmed with case volume.

CONCLUSION

We find that COVID-19 deaths vary among HRRs. The availability of several hospital-based resources is associated with death rates and supports early efforts across the United States to “flatten the curve” to prevent hospital overload. Continued surveillance of this relationship is essential to guide policymakers and hospitals seeking to balance the still limited supply of resources with the demands of caring for both infectious and noninfectious patients in the coming months of this outbreak and in future pandemics.

Acknowledgment

The authors gratefully acknowledge the help of Carolyn Lusch, AICP, in generating depictions of results in Geographic Information Systems.

References

1. Phua J, Weng L, Ling L, et al; Asian Critical Care Clinical Trials Group. Intensive care management of coronavirus disease 2019 (COVID-19): challenges and recommendations. Lancet Respir Med. 2020;8(5):506-517. https://doi.org/10.1016/s2213-2600(20)30161-2
2. Carr BG, Addyson DK, Kahn JM. Variation in critical care beds per capita in the United States: implications for pandemic and disaster planning. JAMA. 2010;303(14):1371-1372. https://doi.org/10.1001/jama.2010.394
3. General FAQ. Dartmouth Atlas Project. 2020. Accessed July 8, 2020. https://www.dartmouthatlas.org/faq/
4. Branas CC, Rundle A, Pei S, et al. Flattening the curve before it flattens us: hospital critical care capacity limits and mortality from novel coronavirus (SARS-CoV2) cases in US counties. medRxiv. Preprint posted online April 6, 2020. https://doi.org/10.1101/2020.04.01.20049759
5. American Hospital Association Annual Survey Database. American Hospital Association. 2018. Accessed July 8, 2020. https://www.ahadata.com/aha-annual-survey-database
6. An Ongoing Repository of Data on Coronavirus Cases and Deaths in the U.S. New York Times. 2020. Accessed July 8, 2020. https://github.com/nytimes/covid-19-data
7. Baud D, Qi X, Nielsen-Saines K, Musso D, Pomar L, Favre G. Real estimates of mortality following COVID-19 infection. Lancet Infect Dis. 2020;20(7):773. https://doi.org/10.1016/s1473-3099(20)30195-x
8. Rosakis P, Marketou ME. Rethinking case fatality ratios for COVID-19 from a data-driven viewpoint. J Infect. 2020;81(2):e162-e164. https://doi.org/10.1016/j.jinf.2020.06.010
9. Auerbach A, O’Leary KJ, Greysen SR, et al; HOMERuN COVID-19 Collaborative Group. Hospital ward adaptation during the COVID-19 pandemic: a national survey of academic medical centers. J Hosp Med. 2020;15(8):483-488. https://doi.org/10.12788/jhm.3476
10. Eriksson CO, Stoner RC, Eden KB, Newgard CD, Guise JM. The association between hospital capacity strain and inpatient outcomes in highly developed countries: a systematic review. J Gen Intern Med. 2017;32(6):686-696. https://doi.org/10.1007/s11606-016-3936-3
11. Volpato S, Landi F, Incalzi RA. A frail health care system for an old population: lesson form [sic] the COVID-19 outbreak in Italy. J Gerontol Series A. 2020;75(9):e126-e127. https://doi.org/10.1093/gerona/glaa087
12. Wagner J, Gabler NB, Ratcliffe SJ, Brown SE, Strom BL, Halpern SD. Outcomes among patients discharged from busy intensive care units. Ann Intern Med. 2013;159(7):447-455. https://doi.org/10.7326/0003-4819-159-7-201310010-00004
13. Moroni F, Gramegna M, Agello S, et al. Collateral damage: medical care avoidance behavior among patients with myocardial infarction during the COVID-19 pandemic. JACC Case Rep. 2020;2(10):1620-1624. https://doi.org/10.1016/j.jaccas.2020.04.010

Author and Disclosure Information

1Department of Emergency Medicine, Yale School of Medicine, New Haven, Connecticut; 2Center for Outcomes Research and Evaluation, Yale University, New Haven, Connecticut; 3Department of Surgery, Yale School of Medicine, New Haven, Connecticut.

Disclosures

Dr Venkatesh reports support from Contract Number HHSM-500-2013-13018I-T0001, Modification 000002, from the Centers for Medicare & Medicaid Services, an agency of the U.S. Department of Health & Human Services. Dr Venkatesh also reports career development support from grant KL2TR001862 from the National Center for Advancing Translational Science and the Yale Center for Clinical Investigation, and from the American Board of Emergency Medicine–National Academy of Medicine Anniversary Fellowship. The other authors report having nothing to disclose.

Journal of Hospital Medicine. 2021;16(4):211-214. Published Online First January 20, 2021.

The COVID-19 pandemic is a crisis of mismatch between resources and infection burden. There is extraordinary heterogeneity across time and geography in the pandemic impact, with hospitals in New York City initially inundated while hospitals in major urban areas of California were comparatively quiet. Efforts to “flatten the curve” are intended to improve outcomes by reducing health system overload.1 In the case of hospital-based care, health systems’ primary resources include emergency and critical care bed and staff capacity.

Prior work has documented wide variability in intensive care capacity across the United States and hypothesized that even moderate disease outbreaks could overwhelm hospital referral regions (HRRs).2,3 Various simulations of outbreaks suggested that thousands of deaths are potentially preventable depending on the health system’s response,4 although the degree to which resource limitations have contributed to mortality during this COVID-19 pandemic has yet to be explored. The objective of this analysis was to examine the association between hospital resources and COVID-19 deaths amongst HRRs in the United States in the period from March 1 to July 26, 2020.

METHODS

Data

This was an analysis of the American Hospital Association Annual Survey Database from 2017 and 2018, including hospital resource variables such as total hospital beds, hospitalists, intensive care beds, intensivists, emergency physicians, and nurses.5 The analysis was limited to general medical and surgical hospitals capable of providing acute care services, defined as those reporting at least 500 emergency department visits in 2018. Where data were missing on analysis variables (26.0% missing overall), the data were drawn from the 2017 survey results (reduced to 23.8% missing) from the same site as available, and the remainder were imputed with multivariate imputation by chained equations. An identical analysis without imputation was performed as a sensitivity analysis that showed a similar pattern of results. Total resources were tabulated amongst HRRs, and the hospital resources per COVID-19 case calculated. HRRs are a geographic category devised to represent regional health care markets, and each includes hospital sites performing major procedures.3 These were the focus of the analysis because they may represent a meaningful geographic division of hospital-based resources. COVID-19 case and death counts (as of July 26, 2020) were drawn from publicly available county-level data curated by the New York Times from state and local governments as well as health departments nationwide,6 separated by month (ie, March, April, May, June, and July). Data on New York City were available in aggregate (rather than separated by borough). Cases and deaths were therefore apportioned to the three HRRs involving New York City in proportion to that area’s population. To adjust for the lag between COVID-19 cases and deaths,7,8 we offset deaths 2 weeks into the future so that the April COVID-19 death count for a given HRR included deaths that occurred for 1 month beginning 2 weeks after the start of April, and so on.

Analysis

We estimated Poisson distribution regressions for the outcome of COVID-19 death count in each HRR and month with one model for each of our six hospital-based resource variables. The offset (exposure) variable was COVID-19 case count. To adjust for the possibility of varying effects of hospital resources on deaths by month (ie, in anticipation that health systems might evolve in response to the pandemic over time), each model includes terms for the interaction between hospital-based resource and an indicator variable for month, as well as a fifth term for month. Standard errors were adjusted for clustering within HRR. We report resultant incident rate ratios (IRRs) with 95% CIs, and we report these as statistically significant at the 5% level only after adjustment for multiple comparisons across our six hospital-resource variables using the conservative Bonferroni adjustment. The pseudo-R2 for each of these six models is also reported to summarize the amount of variation in deaths explained. For our model with ICU beds per COVID-19 case, we perform postestimation prediction of number of deaths by HRR, assuming the counterfactual in which HRRs with fewer than average ICU beds per COVID-19 case instead had the average observed number of ICU beds per COVID-19 case by HRR in April, which functioned as a measure of early excess deaths potentially related to resource limitations. The study was classified as exempt by the Institutional Review Board at the Yale School of Medicine, New Haven, Connecticut. Analyses were conducted in Stata 15 (StataCorp LLC) and R.

RESULTS

A total of 4,453 hospitals across 306 HRRs were included and linked to 2,827 county-level COVID-19 case and death counts in each of 5 months (March through July 2020). The median HRR in our analysis included 14 hospitals, with a maximum of 76 hospitals (Los Angeles, California) and a minimum of 1 (Longview, Texas). Among HRRs, 206 (67.3%) had experienced caseloads exceeding 20 per 10,000 population, while 85 (27.8%) had experienced greater than 100 per 10,000 population in the peak month during the study period. The Table depicts results of each of six Poisson distribution regression models, with the finding that greater number of ICU beds (IRR, 0.194; 95% CI, 0.076-0.491), general medical/surgical beds (IRR, 0.800; 95% CI, 0.696-0.920), and nurses (IRR, 0.927; 95% CI, 0.888-0.967) per COVID-19 case in April were statistically significantly associated with reduced deaths.

 IRRs for Hospital-Based Resources on COVID-19 Deaths in March Through July 2020

The model including ICU beds per COVID-19 case had the largest pseudo-R2 at 0.6018, which suggests that ICU bed availability explains the most variation in death count among hospital resource variables analyzed. The incident rate ratio in this model implies that, for an entire additional ICU bed for each COVID-19 case (a one-unit increase in that variable), there is an associated one-fifth decrease in incidence rate (IRR, 0.194) of death in April. The mean value among HRRs in April was 0.25 ICU beds per case (one ICU bed for every four COVID-19 cases), but it was as low as 0.01 to 0.005 in hard-hit areas (one ICU bed for every 100 to 200 COVID-19 cases). The early excess deaths observed in April were not observed in later months. The magnitude of this effect can be summarized as follows: If the 152 HRRs in April with fewer than the mean number of ICU beds per COVID-19 case were to instead have the mean number (one ICU bed for every four COVID-19 cases), our model estimates that there would have been 15,571 fewer deaths that month. The HRRs with the largest number of early excess deaths were Manhattan in New York City (1,466), Bronx in New York City (1,315), Boston, Massachusetts (1,293), Philadelphia, Pennsylvania (955), Hartford, Connecticut (682), Detroit, Michigan (499), and Camden, New Jersey (484). The Figure depicts HRRs in the United States with early excess deaths by this measure in April.

April COVID-19 Excess Deaths Estimated in Model of ICU Bed Availability

DISCUSSION

We found significant associations between availability of hospital-based resources and COVID-19 deaths in the month of April 2020. This observation was consistent across measures of both hospital bed and staff capacity but not statistically significant in all cases. This provides empiric evidence in support of a preprint simulation publication by Branas et al showing the potential for thousands of excess deaths related to lack of available resources.4 Interestingly, the relationship between hospital-based resources per COVID-19 case and death count is not seen in May, June, or July. This may be because hospitals and health systems were rapidly adapting to pandemic demands9 by shifting resources or reorganizing existing infrastructure to free up beds and personnel.

Our findings underscore the importance of analyses that address heterogeneity in health system response over time and across different geographic areas. That the relationship is not seen after the early pandemic period, when hospitals and health systems were most overwhelmed, suggests that health systems and communities were able to adapt. Importantly, this work does not address the likely complex relationships among hospital resources and outcomes (for example, the benefit of ICU bed availability is likely limited when there are insufficient intensivists and nurses). These complexities should be a focus of future work. Furthermore, hospital resource flexibility, community efforts to slow transmission, and improvements in testing availability and the management of COVID-19 among hospitalized patients may all play a role in attenuating the relationship between baseline resource limitations and outcomes for patients with COVID-19.

These results merit further granular studies to examine specific hospital resources and observed variation in outcomes. Prior literature has linked inpatient capacity—variously defined as high census, acuity, turnover, or delayed admission—to outcomes including mortality among patients with stroke, among those with acute coronary syndrome, and among those requiring intensive care.10 Literature from Italy’s experience shows there was large variation in the case fatality rate among regions of Northern Italy and argues this was partially due to hospital resource limitations.11 Future work can also address whether just-in-time resource mobilization, such as temporary ICU expansion, physician cross-staffing, telemedicine, and dedicated units for COVID-19 patients, attenuated the impact of potential hospital resource scarcity on outcomes.

The present analysis is limited by the quality of the data. There is likely variation of available COVID-19 testing by HRR. It may be that areas with larger outbreaks early on generally tested a smaller, sicker proportion of population-level cases than did those with smaller outbreaks. This effect may be reversed if larger HRRs in urban areas have health systems and public health departments more inclined toward or capable of doing more testing. Furthermore, deaths related to COVID-19 are likely related to community-based factors, including nonhealthcare resources and underlying population characteristics, that likely correlate with the availability of hospital-based resources within HRRs. Some have called into question whether, a priori, we should expect hospital-based capacity to be an important driver of mortality at all,12 arguing that, when critical care capacity is exceeded, resources may be efficiently reallocated away from patients who are least likely to benefit. Because we used the American Hospital Association data, this snapshot of hospital resources is not limited to critical care capacity because there could be alternative explanations for situations in which mortality for both COVID-19 and non–COVID-19 patients may be lower and hospital resources are better matched with demand. For example, patients may seek care earlier in their disease course (whether COVID-19 or otherwise)13 if their local emergency department is not thought to be overwhelmed with case volume.

CONCLUSION

We find that COVID-19 deaths vary among HRRs. The availability of several hospital-based resources is associated with death rates and supports early efforts across the United States to “flatten the curve” to prevent hospital overload. Continued surveillance of this relationship is essential to guide policymakers and hospitals seeking to balance the still limited supply of resources with the demands of caring for both infectious and noninfectious patients in the coming months of this outbreak and in future pandemics.

Acknowledgment

The authors gratefully acknowledge the help of Carolyn Lusch, AICP, in generating depictions of results in Geographic Information Systems.

The COVID-19 pandemic is a crisis of mismatch between resources and infection burden. There is extraordinary heterogeneity across time and geography in the pandemic impact, with hospitals in New York City initially inundated while hospitals in major urban areas of California were comparatively quiet. Efforts to “flatten the curve” are intended to improve outcomes by reducing health system overload.1 In the case of hospital-based care, health systems’ primary resources include emergency and critical care bed and staff capacity.

Prior work has documented wide variability in intensive care capacity across the United States and hypothesized that even moderate disease outbreaks could overwhelm hospital referral regions (HRRs).2,3 Various simulations of outbreaks suggested that thousands of deaths are potentially preventable depending on the health system’s response,4 although the degree to which resource limitations have contributed to mortality during this COVID-19 pandemic has yet to be explored. The objective of this analysis was to examine the association between hospital resources and COVID-19 deaths amongst HRRs in the United States in the period from March 1 to July 26, 2020.

METHODS

Data

This was an analysis of the American Hospital Association Annual Survey Database from 2017 and 2018, including hospital resource variables such as total hospital beds, hospitalists, intensive care beds, intensivists, emergency physicians, and nurses.5 The analysis was limited to general medical and surgical hospitals capable of providing acute care services, defined as those reporting at least 500 emergency department visits in 2018. Where data were missing on analysis variables (26.0% missing overall), the data were drawn from the 2017 survey results (reduced to 23.8% missing) from the same site as available, and the remainder were imputed with multivariate imputation by chained equations. An identical analysis without imputation was performed as a sensitivity analysis that showed a similar pattern of results. Total resources were tabulated amongst HRRs, and the hospital resources per COVID-19 case calculated. HRRs are a geographic category devised to represent regional health care markets, and each includes hospital sites performing major procedures.3 These were the focus of the analysis because they may represent a meaningful geographic division of hospital-based resources. COVID-19 case and death counts (as of July 26, 2020) were drawn from publicly available county-level data curated by the New York Times from state and local governments as well as health departments nationwide,6 separated by month (ie, March, April, May, June, and July). Data on New York City were available in aggregate (rather than separated by borough). Cases and deaths were therefore apportioned to the three HRRs involving New York City in proportion to that area’s population. To adjust for the lag between COVID-19 cases and deaths,7,8 we offset deaths 2 weeks into the future so that the April COVID-19 death count for a given HRR included deaths that occurred for 1 month beginning 2 weeks after the start of April, and so on.

Analysis

We estimated Poisson regressions for the outcome of COVID-19 death count in each HRR and month, with one model for each of our six hospital-based resource variables. The offset (exposure) variable was COVID-19 case count. To allow for varying effects of hospital resources on deaths by month (ie, in anticipation that health systems might evolve in response to the pandemic over time), each model included terms for the interaction between the hospital-based resource and an indicator variable for month, as well as main-effect terms for month. Standard errors were adjusted for clustering within HRR. We report resultant incidence rate ratios (IRRs) with 95% CIs, and we report these as statistically significant at the 5% level only after adjustment for multiple comparisons across our six hospital-resource variables using the conservative Bonferroni correction. The pseudo-R2 for each of these six models is also reported to summarize the amount of variation in deaths explained. For the model with ICU beds per COVID-19 case, we performed postestimation prediction of the number of deaths by HRR under the counterfactual in which HRRs with fewer than the average number of ICU beds per COVID-19 case in April instead had the average observed number, which functioned as a measure of early excess deaths potentially related to resource limitations. The study was classified as exempt by the Institutional Review Board at the Yale School of Medicine, New Haven, Connecticut. Analyses were conducted in Stata 15 (StataCorp LLC) and R.
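
The core modeling step can be illustrated with a self-contained sketch. This is not the study's Stata/R code: it fits a single Poisson regression with log(cases) as the offset, via iteratively reweighted least squares on synthetic noiseless data, and omits the month interactions and HRR-clustered standard errors described above.

```python
import numpy as np

def poisson_irls(X, y, offset, n_iter=50):
    """Poisson regression with a log link and an offset (log exposure),
    fit by iteratively reweighted least squares. Minimal stand-in for the
    models described in the text, not the study's actual code."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta + offset
        mu = np.exp(eta)
        W = mu                                 # Poisson working weights
        z = (eta - offset) + (y - mu) / mu     # working response, offset removed
        WX = X * W[:, None]
        beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    return beta

# Synthetic HRR-month data (illustrative only). The offset log(cases) makes
# this a rate model: deaths per COVID-19 case.
rng = np.random.default_rng(42)
n = 500
cases = rng.uniform(100, 5000, n)
beds_per_case = rng.uniform(0.0, 0.5, n)           # e.g., ICU beds per case
X = np.column_stack([np.ones(n), beds_per_case])
y = cases * np.exp(-3.0 - 1.5 * beds_per_case)     # noiseless expected deaths

beta = poisson_irls(X, y, np.log(cases))
irr = np.exp(beta[1])  # IRR per 1-unit increase in beds per case
```

Because the synthetic outcome is generated without noise, the fit recovers the true coefficients, and exponentiating the slope gives the incidence rate ratio reported in analyses of this kind.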

RESULTS

A total of 4,453 hospitals across 306 HRRs were included and linked to 2,827 county-level COVID-19 case and death counts in each of 5 months (March through July 2020). The median HRR in our analysis included 14 hospitals, with a maximum of 76 hospitals (Los Angeles, California) and a minimum of 1 (Longview, Texas). Among HRRs, 206 (67.3%) experienced caseloads exceeding 20 per 10,000 population in their peak month during the study period, while 85 (27.8%) experienced greater than 100 per 10,000 population. The Table depicts results of each of the six Poisson regression models, showing that greater numbers of ICU beds (IRR, 0.194; 95% CI, 0.076-0.491), general medical/surgical beds (IRR, 0.800; 95% CI, 0.696-0.920), and nurses (IRR, 0.927; 95% CI, 0.888-0.967) per COVID-19 case in April were statistically significantly associated with reduced deaths.

 IRRs for Hospital-Based Resources on COVID-19 Deaths in March Through July 2020

The model including ICU beds per COVID-19 case had the largest pseudo-R2 at 0.6018, which suggests that ICU bed availability explained the most variation in death count among the hospital resource variables analyzed. The incidence rate ratio in this model implies that, for an entire additional ICU bed per COVID-19 case (a one-unit increase in that variable), the incidence rate of death in April fell to roughly one-fifth (IRR, 0.194). The mean value among HRRs in April was 0.25 ICU beds per case (one ICU bed for every four COVID-19 cases), but it was as low as 0.01 to 0.005 in hard-hit areas (one ICU bed for every 100 to 200 COVID-19 cases). The early excess deaths observed in April were not observed in later months. The magnitude of this effect can be summarized as follows: if the 152 HRRs with fewer than the mean number of ICU beds per COVID-19 case in April had instead had the mean number (one ICU bed for every four COVID-19 cases), our model estimates that there would have been 15,571 fewer deaths that month. The HRRs with the largest numbers of early excess deaths were Manhattan in New York City (1,466), the Bronx in New York City (1,315), Boston, Massachusetts (1,293), Philadelphia, Pennsylvania (955), Hartford, Connecticut (682), Detroit, Michigan (499), and Camden, New Jersey (484). The Figure depicts HRRs in the United States with early excess deaths by this measure in April.
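
The counterfactual calculation can be sketched with toy numbers. Everything below is hypothetical: the intercept and caseloads are invented, and the slope is chosen only to echo the reported April IRR of 0.194; this is not the study's fitted model.

```python
import numpy as np

# Predicted deaths under a rate model: deaths = cases * exp(b0 + b1 * beds),
# where beds is ICU beds per COVID-19 case. Toy inputs, not study estimates.
b0, b1 = -3.0, np.log(0.194)      # b1 set to match the reported April IRR
cases = np.array([4000.0, 1000.0, 500.0])   # hypothetical HRR caseloads
beds = np.array([0.01, 0.10, 0.40])         # ICU beds per COVID-19 case
mean_beds = 0.25                             # one ICU bed per four cases

pred = cases * np.exp(b0 + b1 * beds)
cf_beds = np.maximum(beds, mean_beds)        # lift only resource-poor HRRs
pred_cf = cases * np.exp(b0 + b1 * cf_beds)
excess = float((pred - pred_cf).sum())       # model-based early excess deaths
```

HRRs already at or above the mean (the third row here) are left unchanged, so the excess-death total accrues only from resource-poor regions, mirroring the April counterfactual described above.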

April COVID-19 Excess Deaths Estimated in Model of ICU Bed Availability

DISCUSSION

We found significant associations between the availability of hospital-based resources and COVID-19 deaths in the month of April 2020. This observation was consistent across measures of both hospital bed and staff capacity, though not statistically significant in all cases. It provides empirical evidence in support of a preprint simulation study by Branas et al showing the potential for thousands of excess deaths related to lack of available resources.4 Interestingly, the relationship between hospital-based resources per COVID-19 case and death count was not seen in May, June, or July. This may be because hospitals and health systems were rapidly adapting to pandemic demands9 by shifting resources or reorganizing existing infrastructure to free up beds and personnel.

Our findings underscore the importance of analyses that address heterogeneity in health system response over time and across different geographic areas. That the relationship is not seen after the early pandemic period, when hospitals and health systems were most overwhelmed, suggests that health systems and communities were able to adapt. Importantly, this work does not address the likely complex relationships among hospital resources and outcomes (for example, the benefit of ICU bed availability is likely limited when there are insufficient intensivists and nurses). These complexities should be a focus of future work. Furthermore, hospital resource flexibility, community efforts to slow transmission, and improvements in testing availability and the management of COVID-19 among hospitalized patients may all play a role in attenuating the relationship between baseline resource limitations and outcomes for patients with COVID-19.

These results merit further granular studies to examine specific hospital resources and observed variation in outcomes. Prior literature has linked inpatient capacity—variously defined as high census, acuity, turnover, or delayed admission—to outcomes including mortality among patients with stroke, among those with acute coronary syndrome, and among those requiring intensive care.10 Literature from Italy’s experience shows there was large variation in the case fatality rate among regions of Northern Italy and argues this was partially due to hospital resource limitations.11 Future work can also address whether just-in-time resource mobilization, such as temporary ICU expansion, physician cross-staffing, telemedicine, and dedicated units for COVID-19 patients, attenuated the impact of potential hospital resource scarcity on outcomes.

The present analysis is limited by the quality of the data. There is likely variation in available COVID-19 testing by HRR. It may be that areas with larger outbreaks early on generally tested a smaller, sicker proportion of population-level cases than did those with smaller outbreaks. This effect may be reversed if larger HRRs in urban areas have health systems and public health departments more inclined toward or capable of doing more testing. Furthermore, deaths related to COVID-19 are likely related to community-based factors, including nonhealthcare resources and underlying population characteristics, that likely correlate with the availability of hospital-based resources within HRRs. Some have called into question whether, a priori, we should expect hospital-based capacity to be an important driver of mortality at all,12 arguing that, when critical care capacity is exceeded, resources may be efficiently reallocated away from patients who are least likely to benefit. Because we used the American Hospital Association data, our snapshot of hospital resources is not limited to critical care capacity; there could also be alternative explanations for situations in which mortality for both COVID-19 and non–COVID-19 patients is lower and hospital resources are better matched with demand. For example, patients may seek care earlier in their disease course (whether COVID-19 or otherwise)13 if their local emergency department is not thought to be overwhelmed with case volume.

CONCLUSION

We find that COVID-19 deaths vary among HRRs. The availability of several hospital-based resources was associated with death rates, which supports early efforts across the United States to “flatten the curve” to prevent hospital overload. Continued surveillance of this relationship is essential to guide policymakers and hospitals seeking to balance the still-limited supply of resources with the demands of caring for both infectious and noninfectious patients in the coming months of this outbreak and in future pandemics.

Acknowledgment

The authors gratefully acknowledge the help of Carolyn Lusch, AICP, in generating depictions of results in Geographic Information Systems.

References

1. Phua J, Weng L, Ling L, et al; Asian Critical Care Clinical Trials Group. Intensive care management of coronavirus disease 2019 (COVID-19): challenges and recommendations. Lancet Respir Med. 2020;8(5):506-517. https://doi.org/10.1016/s2213-2600(20)30161-2
2. Carr BG, Addyson DK, Kahn JM. Variation in critical care beds per capita in the United States: implications for pandemic and disaster planning. JAMA. 2010;303(14):1371-1372. https://doi.org/10.1001/jama.2010.394
3. General FAQ. Dartmouth Atlas Project. 2020. Accessed July 8, 2020. https://www.dartmouthatlas.org/faq/
4. Branas CC, Rundle A, Pei S, et al. Flattening the curve before it flattens us: hospital critical care capacity limits and mortality from novel coronavirus (SARS-CoV2) cases in US counties. medRxiv. Preprint posted online April 6, 2020. https://doi.org/10.1101/2020.04.01.20049759
5. American Hospital Association Annual Survey Database. American Hospital Association. 2018. Accessed July 8, 2020. https://www.ahadata.com/aha-annual-survey-database
6. An Ongoing Repository of Data on Coronavirus Cases and Deaths in the U.S. New York Times. 2020. Accessed July 8, 2020. https://github.com/nytimes/covid-19-data
7. Baud D, Qi X, Nielsen-Saines K, Musso D, Pomar L, Favre G. Real estimates of mortality following COVID-19 infection. Lancet Infect Dis. 2020;20(7):773. https://doi.org/10.1016/s1473-3099(20)30195-x
8. Rosakis P, Marketou ME. Rethinking case fatality ratios for COVID-19 from a data-driven viewpoint. J Infect. 2020;81(2):e162-e164. https://doi.org/10.1016/j.jinf.2020.06.010
9. Auerbach A, O’Leary KJ, Greysen SR, et al; HOMERuN COVID-19 Collaborative Group. Hospital ward adaptation during the COVID-19 pandemic: a national survey of academic medical centers. J Hosp Med. 2020;15(8):483-488. https://doi.org/10.12788/jhm.3476
10. Eriksson CO, Stoner RC, Eden KB, Newgard CD, Guide JM. The association between hospital capacity strain and inpatient outcomes in highly developed countries: a systematic review. J Gen Intern Med. 2017;32(6):686-696. https://doi.org/10.1007/s11606-016-3936-3
11. Volpato S, Landi F, Incalzi RA. A frail health care system for an old population: lesson form [sic] the COVID-19 outbreak in Italy. J Gerontol Series A. 2020;75(9):e126-e127. https://doi.org/10.1093/gerona/glaa087
12. Wagner J, Gabler NB, Ratcliffe SJ, Brown SE, Strom BL, Halpern SD. Outcomes among patients discharged from busy intensive care units. Ann Intern Med. 2013;159(7):447-455. https://doi.org/10.7326/0003-4819-159-7-201310010-00004
13. Moroni F, Gramegna M, Agello S, et al. Collateral damage: medical care avoidance behavior among patients with myocardial infarction during the COVID-19 pandemic. JACC Case Rep. 2020;2(10):1620-1624. https://doi.org/10.1016/j.jaccas.2020.04.010

Issue
Journal of Hospital Medicine 16(4)

Page Number
211-214. Published Online First January 20, 2021

© 2021 Society of Hospital Medicine

Correspondence
Alexander T Janke, MD; Email: alexander.janke@yale.edu; Telephone: 203-737-2644.

Planned Readmission Algorithm

Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data

The Centers for Medicare & Medicaid Services (CMS) publicly reports all‐cause risk‐standardized readmission rates after acute‐care hospitalization for acute myocardial infarction, pneumonia, heart failure, total hip and knee arthroplasty, chronic obstructive pulmonary disease, stroke, and for patients hospital‐wide.[1, 2, 3, 4, 5] Ideally, these measures should capture unplanned readmissions that arise from acute clinical events requiring urgent rehospitalization. Planned readmissions, which are scheduled admissions usually involving nonurgent procedures, may not be a signal of quality of care. Including planned readmissions in readmission quality measures could create a disincentive to provide appropriate care to patients who are scheduled for elective or necessary procedures unrelated to the quality of the prior admission. Accordingly, under contract to the CMS, we were asked to develop an algorithm to identify planned readmissions. A version of this algorithm is now incorporated into all publicly reported readmission measures.

Given the widespread use of the planned readmission algorithm in public reporting and its implications for hospital quality measurement and evaluation, the objective of this study was to describe the development process, and to validate and refine the algorithm by reviewing charts of readmitted patients.

METHODS

Algorithm Development

To create a planned readmission algorithm, we first defined planned. We determined that readmissions for obstetrical delivery, maintenance chemotherapy, major organ transplant, and rehabilitation should always be considered planned in the sense that they are desired and/or inevitable, even if not specifically planned on a certain date. Apart from these specific types of readmissions, we defined planned readmissions as nonacute readmissions for scheduled procedures, because the vast majority of planned admissions are related to procedures. We also defined readmissions for acute illness or for complications of care as unplanned for the purposes of a quality measure. Even if such readmissions included a potentially planned procedure, because complications of care represent an important dimension of quality that should not be excluded from outcome measurement, these admissions should not be removed from the measure outcome. This definition of planned readmissions does not imply that all unplanned readmissions are unexpected or avoidable. However, it has proven very difficult to reliably define avoidable readmissions, even by expert review of charts, and we did not attempt to do so here.[6, 7]

In the second stage, we operationalized this definition into an algorithm. We used the Agency for Healthcare Research and Quality's Clinical Classification Software (CCS) codes to group thousands of individual procedure and diagnosis International Classification of Disease, Ninth Revision, Clinical Modification (ICD‐9‐CM) codes into clinically coherent, mutually exclusive procedure CCS categories and mutually exclusive diagnosis CCS categories, respectively. Clinicians on the investigative team reviewed the procedure categories to identify those that are commonly planned and that would require inpatient admission. We also reviewed the diagnosis categories to identify acute diagnoses unlikely to accompany elective procedures. We then created a flow diagram through which every readmission could be run to determine whether it was planned or unplanned based on our categorizations of procedures and diagnoses (Figure 1, and Supporting Information, Appendix A, in the online version of this article). This version of the algorithm (v1.0) was submitted to the National Quality Forum (NQF) as part of the hospital‐wide readmission measure. The measure (NQF #1789) received endorsement in April 2012.
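
The flow just described can be sketched as a small classifier. The category names and lists below are hypothetical placeholders for illustration only, not the measure's actual CCS tables.

```python
# Illustrative sketch of the planned-readmission decision flow described in
# the text. The category sets are invented placeholders, NOT the measure's
# actual CCS code lists.
ALWAYS_PLANNED_DX = {"maintenance chemotherapy", "rehabilitation"}
ALWAYS_PLANNED_PROCS = {"obstetrical delivery", "major organ transplant"}
POTENTIALLY_PLANNED_PROCS = {"heart valve procedures", "spinal fusion"}
ACUTE_DX = {"acute myocardial infarction", "sepsis"}

def classify_readmission(principal_dx, procedures):
    """Return 'planned' or 'unplanned' for one readmission."""
    # Certain readmissions are always planned, regardless of diagnosis.
    if principal_dx in ALWAYS_PLANNED_DX or procedures & ALWAYS_PLANNED_PROCS:
        return "planned"
    # Otherwise, planned only if a potentially planned procedure occurred
    # AND the principal diagnosis is not an acute illness or complication.
    if procedures & POTENTIALLY_PLANNED_PROCS and principal_dx not in ACUTE_DX:
        return "planned"
    return "unplanned"
```

For example, a heart valve procedure with a nonacute principal diagnosis is classified as planned, while the same procedure during an admission for acute myocardial infarction is classified as unplanned, matching the rationale that complications of care stay in the measured outcome.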

Figure 1
Flow diagram for planned readmissions (see Supporting Information, Appendix A, in the online version of this article for referenced tables).

In the third stage of development, we posted the algorithm for 2 public comment periods and recruited 27 outside experts to review and refine the algorithm following a standardized, structured process (see Supporting Information, Appendix B, in the online version of this article). Because the measures publicly report and hold hospitals accountable for unplanned readmission rates, we felt it most important that the algorithm include as few planned readmissions in the reported, unplanned outcome as possible (ie, have high negative predictive value). Therefore, in equivocal situations in which experts felt procedure categories were equally often planned or unplanned, we added those procedures to the potentially planned list. We also solicited feedback from hospitals on algorithm performance during a confidential test run of the hospital‐wide readmission measure in the fall of 2012. Based on all of this feedback, we made a number of changes to the algorithm, which was then identified as v2.1. Version 2.1 of the algorithm was submitted to the NQF as part of the endorsement process for the acute myocardial infarction and heart failure readmission measures and was endorsed by the NQF in January 2013. The algorithm (v2.1) is now applied, adapted if necessary, to all publicly reported readmission measures.[8]

Algorithm Validation: Study Cohort

We recruited 2 hospital systems to participate in a chart validation study of the accuracy of the planned readmission algorithm (v2.1). Within these 2 health systems, we selected 7 hospitals with varying bed size, teaching status, and safety‐net status. Each included 1 large academic teaching hospital that serves as a regional referral center. For each hospital's index admissions, we applied the inclusion and exclusion criteria from the hospital‐wide readmission measure. Index admissions were included for patients age 65 years or older; enrolled in Medicare fee‐for‐service (FFS); discharged from a nonfederal, short‐stay, acute‐care hospital or critical access hospital; without an in‐hospital death; not transferred to another acute‐care facility; and enrolled in Part A Medicare for 1 year prior to discharge. We excluded index admissions for patients without at least 30 days postdischarge enrollment in FFS Medicare, discharged against medical advice, admitted for medical treatment of cancer or primary psychiatric disease, admitted to a Prospective Payment System‐exempt cancer hospital, or who died during the index hospitalization. In addition, for this study, we included only index admissions that were followed by a readmission to a hospital within the participating health system between July 1, 2011 and June 30, 2012. Institutional review board approval was obtained from each of the participating health systems, which granted waivers of signed informed consent and Health Insurance Portability and Accountability Act waivers.

Algorithm Validation: Sample Size Calculation

We determined a priori that the minimum acceptable positive predictive value, or proportion of all readmissions the algorithm labels planned that are truly planned, would be 60%, and the minimum acceptable negative predictive value, or proportion of all readmissions the algorithm labels as unplanned that are truly unplanned, would be 80%. We calculated the sample size required to estimate these values within ±10% and determined we would need a total of 291 planned charts and 162 unplanned charts. We inflated these numbers by 20% to account for missing or unobtainable charts, for a total of 550 charts. To achieve this sample size, we included all eligible readmissions from all participating hospitals that were categorized as planned. At the 5 smaller hospitals, we randomly selected an equal number of unplanned readmissions occurring at any hospital in its healthcare system. At the 2 largest hospitals, we randomly selected 50 unplanned readmissions occurring at any hospital in its healthcare system.

Algorithm Validation: Data Abstraction

We developed an abstraction tool, tested and refined it using sample charts, and built the final tool into a secure, password‐protected Microsoft Access 2007 (Microsoft Corp., Redmond, WA) database (see Supporting Information, Appendix C, in the online version of this article). Experienced chart abstractors with RN or MD degrees from each hospital site participated in a 1‐hour training session to become familiar with reviewing medical charts, defining planned/unplanned readmissions, and the data abstraction process. For each readmission, we asked abstractors to review as needed: emergency department triage and physician notes, admission history and physical, operative report, discharge summary, and/or discharge summary from a prior admission. The abstractors verified the accuracy of the administrative billing data, including procedures and principal diagnosis. In addition, they abstracted the source of admission and dates of all major procedures. Then the abstractors provided their opinion and supporting rationale as to whether a readmission was planned or unplanned. They were not asked to determine whether the readmission was preventable. To determine the inter‐rater reliability of data abstraction, an independent abstractor at each health system recoded a random sample of 10% of the charts.

Statistical Analysis

To ensure that we had obtained a representative sample of charts, we identified the 10 most commonly planned procedures among cases identified as planned by the algorithm in the validation cohort and then compared this with planned cases nationally. To confirm the reliability of the abstraction process, we used the kappa statistic to determine the inter‐rater reliability of the determination of planned or unplanned status. Additionally, the full study team, including 5 practicing clinicians, reviewed the details of every chart abstraction in which the algorithm was found to have misclassified the readmission as planned or unplanned. In 11 cases we determined that the abstractor had misunderstood the definition of planned readmission (ie, not all direct admissions are necessarily planned) and we reclassified the chart review assignment accordingly.
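
The inter-rater reliability check uses Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch with toy labels (not the study's abstraction data):

```python
# Cohen's kappa for two raters labeling the same charts.
def cohen_kappa(a, b):
    """Kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    # Chance agreement: product of each rater's marginal label frequencies.
    p_chance = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_chance) / (1 - p_chance)

# Toy example: raters agree on 3 of 4 charts.
rater1 = ["planned", "planned", "unplanned", "unplanned"]
rater2 = ["planned", "unplanned", "unplanned", "unplanned"]
kappa = cohen_kappa(rater1, rater2)
```

Here observed agreement is 0.75 and chance agreement is 0.50, so kappa is 0.5; values near the study's 0.83 indicate substantial agreement beyond chance.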

We calculated sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm for the validation cohort as a whole, weighted to account for the prevalence of planned readmissions as defined by the algorithm in the national data (7.8%). Weighting is necessary because we did not obtain a pure random sample, but rather selected a stratified sample that oversampled algorithm‐identified planned readmissions.[9] We also calculated these rates separately for large hospitals (>600 beds) and for small hospitals (≤600 beds).
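
The reweighting can be reproduced from the counts reported later in the Results (181 true planned of 351 algorithm-planned charts; 15 true planned of 281 algorithm-unplanned charts) together with the national 7.8% prevalence of algorithm-flagged planned readmissions; a sketch:

```python
# Within-stratum rates come straight from chart review; the strata are then
# recombined using the national mix of algorithm-flagged readmissions.
p_alg_planned = 0.078                 # national share flagged planned

ppv = 181 / (181 + 170)               # true planned among algorithm-planned
npv = 266 / (266 + 15)                # true unplanned among algorithm-unplanned

# Joint probabilities under the national mix.
p_true_pos = ppv * p_alg_planned                # planned and flagged
p_false_neg = (1 - npv) * (1 - p_alg_planned)   # planned but not flagged
p_false_pos = (1 - ppv) * p_alg_planned
p_true_neg = npv * (1 - p_alg_planned)

prevalence = p_true_pos + p_false_neg           # share truly planned
sensitivity = p_true_pos / prevalence
specificity = p_true_neg / (p_true_neg + p_false_pos)
```

This recovers the reported weighted figures: roughly 8.9% true planned prevalence, 45% sensitivity, and 96% specificity, illustrating why the stratified oversample cannot be summarized without reweighting.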

Finally, we examined performance of the algorithm for individual procedures and diagnoses to determine whether any procedures or diagnoses should be added or removed from the algorithm. First, we reviewed the diagnoses, procedures, and brief narratives provided by the abstractors for all cases in which the algorithm misclassified the readmission as either planned or unplanned. Second, we calculated the positive predictive value for each procedure that had been flagged as planned by the algorithm, and reviewed all readmissions (correctly and incorrectly classified) in which procedures with low positive predictive value took place. We also calculated the frequency with which the procedure was the only qualifying procedure resulting in an accurate or inaccurate classification. Third, to identify changes that should be made to the lists of acute and nonacute diagnoses, we reviewed the principal diagnosis for all readmissions misclassified by the algorithm as either planned or unplanned, and examined the specific ICD‐9‐CM codes within each CCS group that were most commonly associated with misclassifications.
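
The per-procedure accuracy review amounts to grouping chart-review results by the qualifying CCS category and computing a positive predictive value for each; a sketch with hypothetical records (not the study's data):

```python
from collections import defaultdict

# Hypothetical chart-review records: (qualifying CCS category,
# truly planned per chart review).
records = [
    ("CCS 43", True), ("CCS 43", True), ("CCS 43", False),
    ("CCS 47", False), ("CCS 47", False), ("CCS 47", True), ("CCS 47", False),
]

counts = defaultdict(lambda: [0, 0])  # category -> [verified planned, total]
for ccs, truly_planned in records:
    counts[ccs][0] += truly_planned
    counts[ccs][1] += 1

# PPV per category: share of algorithm-planned charts verified as planned.
ppv_by_category = {c: verified / total for c, (verified, total) in counts.items()}
```

Categories with a low PPV by this tally (many flagged-planned charts that review deems unplanned) are the candidates for removal from the potentially planned list.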

After determining the changes that should be made to the algorithm based on these analyses, we recalculated the sensitivity, specificity, positive predictive value, and negative predictive value of the proposed revised algorithm (v3.0). All analyses used SAS version 9.3 (SAS Institute, Cary, NC).

RESULTS

Study Cohort

Characteristics of participating hospitals are shown in Table 1. Hospitals represented in this sample ranged in size, teaching status, and safety-net status, although all were nonprofit. We selected 663 readmissions for review, 363 planned and 300 unplanned. Overall, we were able to select 80% of hospitals’ planned cases for review; the remainder occurred at hospitals outside the participating hospital system. Abstractors were able to locate and review 634 (96%) of the eligible charts (range, 86%-100% per hospital). The kappa statistic for inter‐rater reliability was 0.83.

Hospital Characteristics

Description | Hospitals, N | Readmissions Selected for Review, N* | Readmissions Reviewed, N (% of Eligible) | Unplanned Readmissions Reviewed, N | Planned Readmissions Reviewed, N | % of Hospital's Planned Readmissions Reviewed*
All hospitals | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
No. of beds: >600 | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
No. of beds: >300-600 | 2 | 190 | 173 (91.1) | 85 | 88 | 87.1
No. of beds: <300 | 3 | 127 | 122 (96.0) | 82 | 40 | 44.9
Ownership: government | 0 | - | - | - | - | -
Ownership: for profit | 0 | - | - | - | - | -
Ownership: not for profit | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
Teaching status: teaching | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Teaching status: nonteaching | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Safety net status: safety net | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Safety net status: nonsafety net | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Region: New England | 3 | 409 | 392 (95.8) | 155 | 237 | 85.9
Region: South Central | 4 | 254 | 242 (95.3) | 128 | 114 | 64.0

NOTE: *Nonselected cases were readmitted to hospitals outside the system and could not be reviewed.

The study sample included 57/67 (85%) of the procedure or condition categories on the potentially planned list. The most common procedure CCS categories among planned readmissions (v2.1) in the validation cohort were very similar to those in the national dataset (see Supporting Information, Appendix D, in the online version of this article). Of the top 20 most commonly planned procedure CCS categories in the validation set, all but 2, therapeutic radiology for cancer treatment (CCS 211) and peripheral vascular bypass (CCS 55), were among the top 20 most commonly planned procedure CCS categories in the national data.

Test Characteristics of Algorithm

The weighted test characteristics of the current algorithm (v2.1) are shown in Table 2. Overall, the algorithm correctly identified 266 readmissions as unplanned and 181 readmissions as planned, and misidentified 170 readmissions as planned and 15 as unplanned. Once weighted to account for the stratified sampling design, the overall prevalence of true planned readmissions was 8.9% of readmissions. The weighted sensitivity was 45.1% overall and was higher in large teaching centers than in smaller community hospitals. The weighted specificity was 95.9%. The positive predictive value was 51.6%, and the negative predictive value was 94.7%.

Test Characteristics of the Algorithm

Cohort | Sensitivity | Specificity | Positive Predictive Value | Negative Predictive Value
Algorithm v2.1: full cohort | 45.1% | 95.9% | 51.6% | 94.7%
Algorithm v2.1: large hospitals | 50.9% | 96.1% | 53.8% | 95.6%
Algorithm v2.1: small hospitals | 40.2% | 95.5% | 47.7% | 94.0%
Revised algorithm v3.0: full cohort | 49.8% | 96.5% | 58.7% | 94.5%
Revised algorithm v3.0: large hospitals | 57.1% | 96.8% | 63.0% | 95.9%
Revised algorithm v3.0: small hospitals | 42.6% | 95.9% | 52.6% | 93.9%

Accuracy of Individual Diagnoses and Procedures

The positive predictive value of the algorithm for individual procedure categories varied widely, from 0% to 100% among procedures with at least 10 cases (Table 3). The procedure for which the algorithm was least accurate was CCS 211, therapeutic radiology for cancer treatment (0% positive predictive value). By contrast, maintenance chemotherapy (90%) and other therapeutic procedures, hemic and lymphatic system (100%) were most accurate. Common procedures with less than 50% positive predictive value (ie, that the algorithm commonly misclassified as planned) were diagnostic cardiac catheterization (25%); debridement of wound, infection, or burn (25%); amputation of lower extremity (29%); insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (33%); and other hernia repair (43%). Of these, diagnostic cardiac catheterization and cardiac devices are the first and second most common procedures nationally, respectively.

Positive Predictive Value of Algorithm by Procedure Category (Among Procedures With at Least Ten Readmissions in Validation Cohort)

Readmission Procedure CCS Code | Total Categorized as Planned by Algorithm, N | Verified as Planned by Chart Review, N | Positive Predictive Value
47 Diagnostic cardiac catheterization; coronary arteriography | 44 | 11 | 25%
224 Cancer chemotherapy | 40 | 22 | 55%
157 Amputation of lower extremity | 31 | 9 | 29%
49 Other operating room heart procedures | 27 | 16 | 59%
48 Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator | 24 | 8 | 33%
43 Heart valve procedures | 20 | 16 | 80%
Maintenance chemotherapy (diagnosis CCS 45) | 20 | 18 | 90%
78 Colorectal resection | 18 | 9 | 50%
169 Debridement of wound, infection or burn | 16 | 4 | 25%
84 Cholecystectomy and common duct exploration | 16 | 5 | 31%
99 Other OR gastrointestinal therapeutic procedures | 16 | 8 | 50%
158 Spinal fusion | 15 | 11 | 73%
142 Partial excision bone | 14 | 10 | 71%
86 Other hernia repair | 14 | 6 | 43%
44 Coronary artery bypass graft | 13 | 10 | 77%
67 Other therapeutic procedures, hemic and lymphatic system | 13 | 13 | 100%
211 Therapeutic radiology for cancer treatment | 12 | 0 | 0%
45 Percutaneous transluminal coronary angioplasty | 11 | 7 | 64%
Total | 497 | 272 | 54.7%

NOTE: Abbreviations: CCS, Clinical Classification Software; OR, operating room.

The readmissions with least abstractor agreement were those involving CCS 157 (amputation of lower extremity) and CCS 169 (debridement of wound, infection or burn). Readmissions for these procedures were nearly always performed as a consequence of acute worsening of chronic conditions such as osteomyelitis or ulceration. Abstractors were divided over whether these readmissions were appropriate to call planned.

Changes to the Algorithm

We determined that the accuracy of the algorithm would be improved by removing 2 procedure categories from the planned procedure list (therapeutic radiation [CCS 211] and cancer chemotherapy [CCS 224]), adding 1 diagnosis category to the acute diagnosis list (hypertension with complications [CCS 99]), and splitting 2 diagnosis condition categories into acute and nonacute ICD‐9‐CM codes (pancreatic disorders [CCS 152] and biliary tract disease [CCS 149]). Detailed rationales for each modification to the planned readmission algorithm are described in Table 4. We felt further examination of diagnostic cardiac catheterization and cardiac devices was warranted given their high frequency, despite low positive predictive value. We also elected not to alter the categorization of amputation or debridement because it was not easy to determine whether these admissions were planned or unplanned even with chart review. We plan further analyses of these procedure categories.

Suggested Changes to Planned Readmission Algorithm v2.1 With Rationale
  • NOTE: Abbreviations: CCS, Clinical Classification Software; ICD‐9, International Classification of Diseases, Ninth Revision. Counts are shown as algorithm classification/chart classification, N. *Number of cases in which CCS 47 was the only qualifying procedure. †Number of cases in which CCS 48 was the only qualifying procedure.

Action: Remove from planned procedure list
Therapeutic radiation (CCS 211)
Accurate: planned/planned 0; unplanned/unplanned 0. Inaccurate: unplanned/planned 0; planned/unplanned 12.
Rationale: The algorithm was inaccurate in every case. All therapeutic radiology during readmissions was performed because of acute illness (pain crisis, neurologic crisis) or because scheduled treatment occurred during an unplanned readmission. In national data, this ranks as the 25th most common planned procedure identified by the algorithm v2.1.

Cancer chemotherapy (CCS 224)
Accurate: planned/planned 22; unplanned/unplanned 0. Inaccurate: unplanned/planned 0; planned/unplanned 18.
Rationale: Of the 22 correctly identified as planned, 18 (82%) would already have been categorized as planned because of a principal diagnosis of maintenance chemotherapy. Therefore, removing CCS 224 from the planned procedure list would miss only a small fraction of planned readmissions but would avoid a large number of misclassifications. In national data, this ranks as the 8th most common planned procedure identified by the algorithm v2.1.

Action: Add to planned procedure list
None
Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. A handful of these cases were missed because the planned procedure was not on the current planned procedure list; however, those procedures (eg, abdominal paracentesis, colonoscopy, endoscopy) were nearly always unplanned overall and should therefore not be added to the list of procedures that potentially qualify an admission as planned.

Action: Remove from acute diagnosis list
None
Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. The relevant disqualifying acute diagnoses were much more often associated with unplanned readmissions in our dataset.

Action: Add to acute diagnosis list
Hypertension with complications (CCS 99)
Accurate: planned/planned 1; unplanned/unplanned 2. Inaccurate: unplanned/planned 0; planned/unplanned 10.
Rationale: This CCS was associated with only 1 planned readmission (for elective nephrectomy, a very rare procedure). Every other time this CCS appeared in the dataset, it was associated with an unplanned readmission (12/13, 92%); 10 of those, however, were misclassified by the algorithm as planned because they were not excluded by diagnosis (91% error rate). Consequently, adding this CCS to the acute diagnosis list is likely to miss only a very small fraction of planned readmissions, while making the overall algorithm much more accurate.

Action: Split diagnosis condition category into component ICD‐9 codes
Pancreatic disorders (CCS 152)
Accurate: planned/planned 0; unplanned/unplanned 1. Inaccurate: unplanned/planned 0; planned/unplanned 2.
Rationale: ICD‐9 code 577.0 (acute pancreatitis) is the only acute code in this CCS. Acute pancreatitis was present in 2 cases that were misclassified as planned. Clinically, there is no situation in which a planned procedure would reasonably be performed in the setting of acute pancreatitis. Moving ICD‐9 code 577.0 to the acute list and leaving the rest of the ICD‐9 codes in CCS 152 on the nonacute list will enable the algorithm to continue to identify planned procedures for chronic pancreatitis.

Biliary tract disease (CCS 149)
Accurate: planned/planned 2; unplanned/unplanned 3. Inaccurate: unplanned/planned 0; planned/unplanned 12.
Rationale: This CCS is a mix of acute and chronic diagnoses. Of 14 charts classified as planned with CCS 149 in the principal diagnosis field, 12 were misclassified (of which 10 were associated with cholecystectomy). Separating out the acute and nonacute diagnoses will increase the accuracy of the algorithm while still ensuring that planned cholecystectomies and other procedures can be identified. Of the ICD‐9 codes in CCS 149, the following will be added to the acute diagnosis list: 574.0, 574.3, 574.6, 574.8, 575.0, 575.12, 576.1.

Action: Consider for change after additional study
Diagnostic cardiac catheterization (CCS 47)
Accurate: planned/planned 3*; unplanned/unplanned 13*. Inaccurate: unplanned/planned 0*; planned/unplanned 25*.
Rationale: The algorithm misclassified as planned 25/38 (66%) unplanned readmissions in which diagnostic catheterizations were the only qualifying planned procedure. It also correctly identified 3/3 (100%) planned readmissions in which diagnostic cardiac catheterizations were the only qualifying planned procedure. This is the highest volume procedure in national data.

Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (CCS 48)
Accurate: planned/planned 7†; unplanned/unplanned 1†. Inaccurate: unplanned/planned 1†; planned/unplanned 4†.
Rationale: The algorithm misclassified as planned 4/5 (80%) unplanned readmissions in which cardiac devices were the only qualifying procedure. However, it also correctly identified 7/8 (87.5%) planned readmissions in which cardiac devices were the only qualifying planned procedure. CCS 48 is the second most common planned procedure category nationally.

The revised algorithm (v3.0) had a weighted sensitivity of 49.8%, weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5% (Table 2). In aggregate, these changes would increase the reported unplanned readmission rate from 16.0% to 16.1% in the hospital‐wide readmission measure, using 2011 to 2012 data, and would decrease the fraction of all readmissions considered planned from 7.8% to 7.2%.
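The aggregate effect on the reported rate follows directly from the change in the planned fraction. A back-of-the-envelope check, assuming (as a simplification of the measure) that the reported unplanned rate is the all-readmission rate minus the excluded planned fraction:

```python
# Sketch: check that lowering the planned fraction from 7.8% (v2.1) to
# 7.2% (v3.0) moves the reported unplanned readmission rate from 16.0%
# to roughly 16.1%. A simplification, not the measure's specification.

reported_rate_v21 = 16.0  # percent, with 7.8% of readmissions excluded as planned
planned_frac_v21 = 0.078
planned_frac_v30 = 0.072

# Implied rate of all readmissions, planned and unplanned.
all_readmission_rate = reported_rate_v21 / (1 - planned_frac_v21)

reported_rate_v30 = all_readmission_rate * (1 - planned_frac_v30)
print(f"Reported unplanned rate under v3.0: {reported_rate_v30:.1f}%")
# → Reported unplanned rate under v3.0: 16.1%
```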

DISCUSSION

We developed an algorithm based on administrative data that in its currently implemented form is very accurate at identifying unplanned readmissions, ensuring that readmissions included in publicly reported readmission measures are likely to be truly unplanned. However, nearly half of readmissions the algorithm classifies as planned are actually unplanned. That is, the algorithm is overcautious in excluding unplanned readmissions that could have counted as outcomes, particularly among admissions that include diagnostic cardiac catheterization or placement of cardiac devices (pacemakers, defibrillators). However, these errors only occur within the 7.8% of readmissions that are classified as planned and therefore do not affect overall readmission rates dramatically. A perfect algorithm would reclassify approximately half of these planned readmissions as unplanned, increasing the overall readmission rate by 0.6 percentage points.

On the other hand, the algorithm also only identifies approximately half of true planned readmissions as planned. Because the true prevalence of planned readmissions is low (approximately 9% of readmissions based on weighted chart review prevalence, or an absolute rate of 1.4%), this low sensitivity has a small effect on algorithm performance. Removing all true planned readmissions from the measure outcome would decrease the overall readmission rate by 0.8 percentage points, similar to the expected 0.6 percentage point increase that would result from better identifying unplanned readmissions; thus, a perfect algorithm would likely decrease the reported unplanned readmission rate by a net 0.2 percentage points. Overall, the existing algorithm appears to come close to the true prevalence of planned readmissions, despite inaccuracy on an individual‐case basis. The algorithm performed best at large hospitals, which are at greatest risk of being statistical outliers and of accruing penalties under the Hospital Readmissions Reduction Program.[10]
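These approximate figures can be checked from the published counts. A sketch, using the validation cohort's error rates (170/351 of algorithm-planned readmissions truly unplanned; 15/281 of algorithm-unplanned readmissions truly planned) and an assumed 16.0% reported unplanned rate:

```python
# Sketch: verify the arithmetic on what a perfect algorithm would do to
# the reported rate. Error rates come from the validation cohort; the
# 16.0%/7.8% figures are from the hospital-wide measure. Approximate.

reported_rate = 16.0       # percent; unplanned rate under v2.1
algo_planned_frac = 0.078  # share of all readmissions flagged planned
fp_rate = 170 / 351        # algorithm-planned charts that were truly unplanned
fn_rate = 15 / 281         # algorithm-unplanned charts that were truly planned

all_rate = reported_rate / (1 - algo_planned_frac)  # all readmissions

# Adding back misclassified unplanned readmissions raises the rate...
increase = all_rate * algo_planned_frac * fp_rate
# ...while removing missed planned readmissions lowers it.
decrease = all_rate * (1 - algo_planned_frac) * fn_rate

print(f"increase: +{increase:.2f} pp")        # ≈ +0.66, the text's ~0.6
print(f"decrease: -{decrease:.2f} pp")        # ≈ -0.85, the text's ~0.8
print(f"net: {increase - decrease:+.2f} pp")  # ≈ -0.20
```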

We identified several changes that marginally improved the performance of the algorithm by reducing the number of unplanned readmissions that are incorrectly removed from the measure, while avoiding the inappropriate inclusion of planned readmissions in the outcome. This revised algorithm, v3.0, was applied to public reporting of readmission rates at the end of 2014. Overall, implementing these changes increases the reported readmission rate very slightly. We also identified other procedures associated with high inaccuracy rates, removal of which would have larger impact on reporting rates, and which therefore merit further evaluation.

There are other potential methods of identifying planned readmissions. For instance, as of October 1, 2013, new administrative billing codes were created to allow hospitals to indicate that a patient was discharged with a planned acute‐care hospital inpatient readmission, without limitation as to when it will take place.[11] This code must be used at the time of the index admission to indicate that a future planned admission is expected, and was specified only to be used for neonates and patients with acute myocardial infarction. This approach, however, would omit planned readmissions that are not known to the initial discharging team, potentially missing planned readmissions. Conversely, some patients discharged with a plan for readmission may be unexpectedly readmitted for an unplanned reason. Given that the new codes were not available at the time we conducted the validation study, we were not able to determine how often the billing codes accurately identified planned readmissions. This would be an important area to consider for future study.

An alternative approach would be to create indicator codes to be applied at the time of readmission that would indicate whether that admission was planned or unplanned. Such a code would have the advantage of allowing each planned readmission to be flagged by the admitting clinicians at the time of admission rather than by an algorithm that inherently cannot be perfect. However, identifying planned readmissions at the time of readmission would also create opportunity for gaming and inconsistent application of definitions between hospitals; additional checks would need to be put in place to guard against these possibilities.

Our study has some limitations. We relied on the opinion of chart abstractors to determine whether a readmission was planned or unplanned; in a few cases, such as smoldering wounds that ultimately require surgical intervention, that determination is debatable. Abstractions were done at local institutions to minimize risks to patient privacy, and therefore we could not centrally verify determinations of planned status except by reviewing source of admission, dates of procedures, and narrative comments reported by the abstractors. Finally, we did not have sufficient volume of planned procedures to determine accuracy of the algorithm for less common procedure categories or individual procedures within categories.

In summary, we developed an algorithm to identify planned readmissions from administrative data that had high specificity and moderate sensitivity, and refined it based on chart validation. This algorithm is in use in public reporting of readmission measures to maximize the probability that the reported readmission rates represent truly unplanned readmissions.[12]

Disclosures: Financial support: This work was performed under contract HHSM‐500‐2008‐0025I/HHSM‐500‐T0001, Modification No. 000008, titled Measure Instrument Development and Support, funded by the Centers for Medicare and Medicaid Services (CMS), an agency of the US Department of Health and Human Services. Drs. Horwitz and Ross are supported by the National Institute on Aging (K08 AG038336 and K08 AG032886, respectively) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Krumholz is supported by grant U01 HL105270‐05 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. No funding source had any role in the study design; in the collection, analysis, and interpretation of data; or in the writing of the article. The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript. All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that all authors have support from the CMS for the submitted work. In addition, Dr. Ross is a member of a scientific advisory board for FAIR Health Inc. Dr. Krumholz chairs a cardiac scientific advisory board for UnitedHealth and is the recipient of research agreements from Medtronic and Johnson & Johnson through Yale University, to develop methods of clinical trial data sharing. All other authors report no conflicts of interest.

References
  1. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  2. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30‐day all‐cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252.
  3. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30‐day all‐cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  4. Grosso LM, Curtis JP, Lin Z, et al. Hospital‐level 30‐day all‐cause risk‐standardized readmission rate following elective primary total hip arthroplasty (THA) and/or total knee arthroplasty (TKA). Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page.
  5. van Walraven C, Jennings A, Forster AJ. A meta‐analysis of hospital 30‐day avoidable readmission rates. J Eval Clin Pract. 2011;18(6):1211-1218.
  6. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
  7. Horwitz LI, Partovian C, Lin Z, et al. Centers for Medicare 3(4):477-492.
  8. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343.
  9. Centers for Medicare and Medicaid Services. Inpatient Prospective Payment System/Long‐Term Care Hospital (IPPS/LTCH) final rule. Fed Regist. 2013;78:50533-50534.
  10. Long SK, Stockley K, Dahlen H. Massachusetts health reforms: uninsurance remains low, self‐reported health status improves as state prepares to tackle costs. Health Aff (Millwood). 2012;31(2):444-451.
Journal of Hospital Medicine - 10(10): 670-677

The Centers for Medicare & Medicaid Services (CMS) publicly reports all‐cause risk‐standardized readmission rates after acute‐care hospitalization for acute myocardial infarction, pneumonia, heart failure, total hip and knee arthroplasty, chronic obstructive pulmonary disease, stroke, and for patients hospital‐wide.[1, 2, 3, 4, 5] Ideally, these measures should capture unplanned readmissions that arise from acute clinical events requiring urgent rehospitalization. Planned readmissions, which are scheduled admissions usually involving nonurgent procedures, may not be a signal of quality of care. Including planned readmissions in readmission quality measures could create a disincentive to provide appropriate care to patients who are scheduled for elective or necessary procedures unrelated to the quality of the prior admission. Accordingly, under contract to the CMS, we were asked to develop an algorithm to identify planned readmissions. A version of this algorithm is now incorporated into all publicly reported readmission measures.

Given the widespread use of the planned readmission algorithm in public reporting and its implications for hospital quality measurement and evaluation, the objective of this study was to describe the development process, and to validate and refine the algorithm by reviewing charts of readmitted patients.

METHODS

Algorithm Development

To create a planned readmission algorithm, we first defined planned. We determined that readmissions for obstetrical delivery, maintenance chemotherapy, major organ transplant, and rehabilitation should always be considered planned in the sense that they are desired and/or inevitable, even if not specifically planned on a certain date. Apart from these specific types of readmissions, we defined planned readmissions as nonacute readmissions for scheduled procedures, because the vast majority of planned admissions are related to procedures. We also defined readmissions for acute illness or for complications of care as unplanned for the purposes of a quality measure. Even if such readmissions included a potentially planned procedure, because complications of care represent an important dimension of quality that should not be excluded from outcome measurement, these admissions should not be removed from the measure outcome. This definition of planned readmissions does not imply that all unplanned readmissions are unexpected or avoidable. However, it has proven very difficult to reliably define avoidable readmissions, even by expert review of charts, and we did not attempt to do so here.[6, 7]

In the second stage, we operationalized this definition into an algorithm. We used the Agency for Healthcare Research and Quality's Clinical Classification Software (CCS) codes to group thousands of individual procedure and diagnosis International Classification of Disease, Ninth Revision, Clinical Modification (ICD‐9‐CM) codes into clinically coherent, mutually exclusive procedure CCS categories and mutually exclusive diagnosis CCS categories, respectively. Clinicians on the investigative team reviewed the procedure categories to identify those that are commonly planned and that would require inpatient admission. We also reviewed the diagnosis categories to identify acute diagnoses unlikely to accompany elective procedures. We then created a flow diagram through which every readmission could be run to determine whether it was planned or unplanned based on our categorizations of procedures and diagnoses (Figure 1, and Supporting Information, Appendix A, in the online version of this article). This version of the algorithm (v1.0) was submitted to the National Quality Forum (NQF) as part of the hospital‐wide readmission measure. The measure (NQF #1789) received endorsement in April 2012.
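In code form, the flow reduces to a few set-membership checks. A minimal sketch of the decision logic; the CCS sets shown are illustrative placeholders, not the algorithm's full lists (which appear in Appendix A):

```python
# Minimal sketch of the planned readmission decision flow.
# The CCS sets below are illustrative placeholders; the complete lists
# are in the measure's Appendix A and are not reproduced here.

# Principal diagnosis CCS categories always considered planned
# (eg, maintenance chemotherapy, rehabilitation). Illustrative.
ALWAYS_PLANNED_DX = {45, 196, 254}
# Procedure CCS categories that are potentially planned. Illustrative subset.
POTENTIALLY_PLANNED_PROC = {43, 44, 45, 47, 48, 49, 78, 158}
# Principal diagnosis CCS categories reflecting acute illness. Illustrative subset.
ACUTE_DX = {2, 100, 109, 226}

def classify_readmission(principal_dx_ccs, procedure_ccs_list):
    """Return 'planned' or 'unplanned' for one readmission."""
    # Step 1: some readmission types are always considered planned.
    if principal_dx_ccs in ALWAYS_PLANNED_DX:
        return "planned"
    # Step 2: a potentially planned procedure, combined with...
    has_planned_proc = any(p in POTENTIALLY_PLANNED_PROC for p in procedure_ccs_list)
    # Step 3: ...a nonacute principal diagnosis, makes the readmission planned.
    if has_planned_proc and principal_dx_ccs not in ACUTE_DX:
        return "planned"
    return "unplanned"
```

Under these placeholder sets, a readmission with an acute principal diagnosis is unplanned even if a potentially planned procedure (such as CCS 47) was performed, while the same procedure with a nonacute principal diagnosis is planned.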

Figure 1
Flow diagram for planned readmissions (see Supporting Information, Appendix A, in the online version of this article for referenced tables).

In the third stage of development, we posted the algorithm for 2 public comment periods and recruited 27 outside experts to review and refine the algorithm following a standardized, structured process (see Supporting Information, Appendix B, in the online version of this article). Because the measures publicly report and hold hospitals accountable for unplanned readmission rates, we felt it most important that the algorithm include as few planned readmissions in the reported, unplanned outcome as possible (ie, have high negative predictive value). Therefore, in equivocal situations in which experts felt procedure categories were equally often planned or unplanned, we added those procedures to the potentially planned list. We also solicited feedback from hospitals on algorithm performance during a confidential test run of the hospital‐wide readmission measure in the fall of 2012. Based on all of this feedback, we made a number of changes to the algorithm, which was then identified as v2.1. Version 2.1 of the algorithm was submitted to the NQF as part of the endorsement process for the acute myocardial infarction and heart failure readmission measures and was endorsed by the NQF in January 2013. The algorithm (v2.1) is now applied, adapted if necessary, to all publicly reported readmission measures.[8]

Algorithm Validation: Study Cohort

We recruited 2 hospital systems to participate in a chart validation study of the accuracy of the planned readmission algorithm (v2.1). Within these 2 health systems, we selected 7 hospitals with varying bed size, teaching status, and safety‐net status. Each included 1 large academic teaching hospital that serves as a regional referral center. For each hospital's index admissions, we applied the inclusion and exclusion criteria from the hospital‐wide readmission measure. Index admissions were included for patients age 65 years or older; enrolled in Medicare fee‐for‐service (FFS); discharged from a nonfederal, short‐stay, acute‐care hospital or critical access hospital; without an in‐hospital death; not transferred to another acute‐care facility; and enrolled in Part A Medicare for 1 year prior to discharge. We excluded index admissions for patients without at least 30 days postdischarge enrollment in FFS Medicare, discharged against medical advice, admitted for medical treatment of cancer or primary psychiatric disease, admitted to a Prospective Payment System‐exempt cancer hospital, or who died during the index hospitalization. In addition, for this study, we included only index admissions that were followed by a readmission to a hospital within the participating health system between July 1, 2011 and June 30, 2012. Institutional review board approval was obtained from each of the participating health systems, which granted waivers of signed informed consent and Health Insurance Portability and Accountability Act waivers.

Algorithm Validation: Sample Size Calculation

We determined a priori that the minimum acceptable positive predictive value, or proportion of all readmissions the algorithm labels planned that are truly planned, would be 60%, and the minimum acceptable negative predictive value, or proportion of all readmissions the algorithm labels as unplanned that are truly unplanned, would be 80%. We calculated the sample size required to be confident of these values within ±10% and determined we would need a total of 291 planned charts and 162 unplanned charts. We inflated these numbers by 20% to account for missing or unobtainable charts, for a total of 550 charts. To achieve this sample size, we included all eligible readmissions from all participating hospitals that were categorized as planned. At the 5 smaller hospitals, we randomly selected an equal number of unplanned readmissions occurring at any hospital in its healthcare system. At the 2 largest hospitals, we randomly selected 50 unplanned readmissions occurring at any hospital in its healthcare system.

Algorithm Validation: Data Abstraction

We developed an abstraction tool, tested and refined it using sample charts, and built the final tool into a secure, password‐protected Microsoft Access 2007 (Microsoft Corp., Redmond, WA) database (see Supporting Information, Appendix C, in the online version of this article). Experienced chart abstractors with RN or MD degrees from each hospital site participated in a 1‐hour training session to become familiar with reviewing medical charts, defining planned/unplanned readmissions, and the data abstraction process. For each readmission, we asked abstractors to review as needed: emergency department triage and physician notes, admission history and physical, operative report, discharge summary, and/or discharge summary from a prior admission. The abstractors verified the accuracy of the administrative billing data, including procedures and principal diagnosis. In addition, they abstracted the source of admission and dates of all major procedures. Then the abstractors provided their opinion and supporting rationale as to whether a readmission was planned or unplanned. They were not asked to determine whether the readmission was preventable. To determine the inter‐rater reliability of data abstraction, an independent abstractor at each health system recoded a random sample of 10% of the charts.
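The inter-rater reliability check uses Cohen's kappa on the double-coded 10% sample. A generic sketch; the counts below are hypothetical, for illustration only, not the study's data:

```python
# Cohen's kappa for two raters coding charts as planned/unplanned.
# The example counts are hypothetical; the study itself reported
# kappa = 0.83 on its 10% re-abstracted sample.

def cohens_kappa(table):
    """table[i][j] = number of charts rater A coded i and rater B coded j,
    with categories 0 = planned, 1 = unplanned."""
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    # Expected agreement if the raters coded independently at their
    # observed marginal rates.
    expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (observed - expected) / (1 - expected)

# Hypothetical double-coded sample: raters agree on 20 planned and 70
# unplanned charts, and disagree on 10.
example = [[20, 5], [5, 70]]
print(f"kappa = {cohens_kappa(example):.2f}")
# → kappa = 0.73
```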

Statistical Analysis

To ensure that we had obtained a representative sample of charts, we identified the 10 most commonly planned procedures among cases identified as planned by the algorithm in the validation cohort and then compared this with planned cases nationally. To confirm the reliability of the abstraction process, we used the kappa statistic to determine the inter‐rater reliability of the determination of planned or unplanned status. Additionally, the full study team, including 5 practicing clinicians, reviewed the details of every chart abstraction in which the algorithm was found to have misclassified the readmission as planned or unplanned. In 11 cases we determined that the abstractor had misunderstood the definition of planned readmission (ie, not all direct admissions are necessarily planned) and we reclassified the chart review assignment accordingly.

We calculated sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm for the validation cohort as a whole, weighted to account for the prevalence of planned readmissions as defined by the algorithm in the national data (7.8%). Weighting is necessary because we did not obtain a pure random sample, but rather selected a stratified sample that oversampled algorithm‐identified planned readmissions.[9] We also calculated these rates separately for large hospitals (>600 beds) and for small hospitals (≤600 beds).

Finally, we examined performance of the algorithm for individual procedures and diagnoses to determine whether any procedures or diagnoses should be added or removed from the algorithm. First, we reviewed the diagnoses, procedures, and brief narratives provided by the abstractors for all cases in which the algorithm misclassified the readmission as either planned or unplanned. Second, we calculated the positive predictive value for each procedure that had been flagged as planned by the algorithm, and reviewed all readmissions (correctly and incorrectly classified) in which procedures with low positive predictive value took place. We also calculated the frequency with which the procedure was the only qualifying procedure resulting in an accurate or inaccurate classification. Third, to identify changes that should be made to the lists of acute and nonacute diagnoses, we reviewed the principal diagnosis for all readmissions misclassified by the algorithm as either planned or unplanned, and examined the specific ICD‐9‐CM codes within each CCS group that were most commonly associated with misclassifications.

After determining the changes that should be made to the algorithm based on these analyses, we recalculated the sensitivity, specificity, positive predictive value, and negative predictive value of the proposed revised algorithm (v3.0). All analyses used SAS version 9.3 (SAS Institute, Cary, NC).

RESULTS

Study Cohort

Characteristics of participating hospitals are shown in Table 1. Hospitals represented in this sample ranged in size, teaching status, and safety net status, although all were nonprofit. We selected 663 readmissions for review, 363 planned and 300 unplanned. Overall, we were able to select 80% of hospitals' planned cases for review; the remainder occurred at hospitals outside the participating hospital system. Abstractors were able to locate and review 634 (96%) of the eligible charts (range, 86%-100% per hospital). The kappa statistic for inter‐rater reliability was 0.83.

Hospital Characteristics
Description | Hospitals, N | Readmissions Selected for Review, N* | Readmissions Reviewed, N (% of Eligible) | Unplanned Readmissions Reviewed, N | Planned Readmissions Reviewed, N | % of Hospital's Planned Readmissions Reviewed*
  • NOTE: *Nonselected cases were readmitted to hospitals outside the system and could not be reviewed.

All hospitals | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
No. of beds: >600 | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
No. of beds: 300-600 | 2 | 190 | 173 (91.1) | 85 | 88 | 87.1
No. of beds: <300 | 3 | 127 | 122 (96.0) | 82 | 40 | 44.9
Ownership: Government | 0 | – | – | – | – | –
Ownership: For profit | 0 | – | – | – | – | –
Ownership: Not for profit | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
Teaching status: Teaching | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Teaching status: Nonteaching | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Safety net status: Safety net | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Safety net status: Nonsafety net | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Region: New England | 3 | 409 | 392 (95.8) | 155 | 237 | 85.9
Region: South Central | 4 | 254 | 242 (95.3) | 128 | 114 | 64.0

The study sample included 57/67 (85%) of the procedure or condition categories on the potentially planned list. The most common procedure CCS categories among planned readmissions (v2.1) in the validation cohort were very similar to those in the national dataset (see Supporting Information, Appendix D, in the online version of this article). Of the top 20 most commonly planned procedure CCS categories in the validation set, all but 2, therapeutic radiology for cancer treatment (CCS 211) and peripheral vascular bypass (CCS 55), were among the top 20 most commonly planned procedure CCS categories in the national data.

84 Cholecystectomy and common duct exploration16531%
99 Other OR gastrointestinal therapeutic procedures16850%
158 Spinal fusion151173%
142 Partial excision bone141071%
86 Other hernia repair14642%
44 Coronary artery bypass graft131077%
67 Other therapeutic procedures, hemic and lymphatic system1313100%
211 Therapeutic radiology for cancer treatment1200%
45 Percutaneous transluminal coronary angioplasty11764%
Total49727254.7%

The readmissions with least abstractor agreement were those involving CCS 157 (amputation of lower extremity) and CCS 169 (debridement of wound, infection or burn). Readmissions for these procedures were nearly always performed as a consequence of acute worsening of chronic conditions such as osteomyelitis or ulceration. Abstractors were divided over whether these readmissions were appropriate to call planned.

Changes to the Algorithm

We determined that the accuracy of the algorithm would be improved by removing 2 procedure categories from the planned procedure list (therapeutic radiation [CCS 211] and cancer chemotherapy [CCS 224]), adding 1 diagnosis category to the acute diagnosis list (hypertension with complications [CCS 99]), and splitting 2 diagnosis condition categories into acute and nonacute ICD‐9‐CM codes (pancreatic disorders [CCS 149] and biliary tract disease [CCS 152]). Detailed rationales for each modification to the planned readmission algorithm are described in Table 4. We felt further examination of diagnostic cardiac catheterization and cardiac devices was warranted given their high frequency, despite low positive predictive value. We also elected not to alter the categorization of amputation or debridement because it was not easy to determine whether these admissions were planned or unplanned even with chart review. We plan further analyses of these procedure categories.

Suggested Changes to Planned Readmission Algorithm v2.1 With Rationale
Legend: Algorithm = classification by the algorithm; Chart = classification by chart review; N = number of readmissions.

Action: Remove from planned procedure list

Therapeutic radiation (CCS 211)
  Accurate: Algorithm Planned/Chart Planned, N = 0; Algorithm Unplanned/Chart Unplanned, N = 0
  Inaccurate: Algorithm Unplanned/Chart Planned, N = 0; Algorithm Planned/Chart Unplanned, N = 12
  Rationale: The algorithm was inaccurate in every case. All therapeutic radiology during readmissions was performed because of acute illness (pain crisis, neurologic crisis) or because scheduled treatment occurred during an unplanned readmission. In national data, this ranks as the 25th most common planned procedure identified by algorithm v2.1.

Cancer chemotherapy (CCS 224)
  Accurate: Planned/Planned, N = 22; Unplanned/Unplanned, N = 0
  Inaccurate: Unplanned/Planned, N = 0; Planned/Unplanned, N = 18
  Rationale: Of the 22 correctly identified as planned, 18 (82%) would already have been categorized as planned because of a principal diagnosis of maintenance chemotherapy. Therefore, removing CCS 224 from the planned procedure list would miss only a small fraction of planned readmissions but would avoid a large number of misclassifications. In national data, this ranks as the 8th most common planned procedure identified by algorithm v2.1.

Action: Add to planned procedure list

None
  Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. A handful of these cases were missed because the planned procedure was not on the current planned procedure list; however, those procedures (eg, abdominal paracentesis, colonoscopy, endoscopy) were nearly always unplanned overall and should therefore not be added to the list of procedures that potentially qualify an admission as planned.

Action: Remove from acute diagnosis list

None
  Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. The relevant disqualifying acute diagnoses were much more often associated with unplanned readmissions in our dataset.

Action: Add to acute diagnosis list

Hypertension with complications (CCS 99)
  Accurate: Planned/Planned, N = 1; Unplanned/Unplanned, N = 2
  Inaccurate: Unplanned/Planned, N = 0; Planned/Unplanned, N = 10
  Rationale: This CCS was associated with only 1 planned readmission (for elective nephrectomy, a very rare procedure). Every other time this CCS appeared in the dataset, it was associated with an unplanned readmission (12/13, 92%); 10 of those, however, were misclassified by the algorithm as planned because they were not excluded by diagnosis (91% error rate). Consequently, adding this CCS to the acute diagnosis list is likely to miss only a very small fraction of planned readmissions, while making the overall algorithm much more accurate.

Action: Split diagnosis condition category into component ICD‐9 codes

Pancreatic disorders (CCS 152)
  Accurate: Planned/Planned, N = 0; Unplanned/Unplanned, N = 1
  Inaccurate: Unplanned/Planned, N = 0; Planned/Unplanned, N = 2
  Rationale: ICD‐9 code 577.0 (acute pancreatitis) is the only acute code in this CCS. Acute pancreatitis was present in 2 cases that were misclassified as planned. Clinically, there is no situation in which a planned procedure would reasonably be performed in the setting of acute pancreatitis. Moving ICD‐9 code 577.0 to the acute list and leaving the rest of the ICD‐9 codes in CCS 152 on the nonacute list will enable the algorithm to continue to identify planned procedures for chronic pancreatitis.

Biliary tract disease (CCS 149)
  Accurate: Planned/Planned, N = 2; Unplanned/Unplanned, N = 3
  Inaccurate: Unplanned/Planned, N = 0; Planned/Unplanned, N = 12
  Rationale: This CCS is a mix of acute and chronic diagnoses. Of 14 charts classified as planned with CCS 149 in the principal diagnosis field, 12 were misclassified (of which 10 were associated with cholecystectomy). Separating out the acute and nonacute diagnoses will increase the accuracy of the algorithm while still ensuring that planned cholecystectomies and other procedures can be identified. Of the ICD‐9 codes in CCS 149, the following will be added to the acute diagnosis list: 574.0, 574.3, 574.6, 574.8, 575.0, 575.12, 576.1.

Action: Consider for change after additional study

Diagnostic cardiac catheterization (CCS 47)
  Accurate: Planned/Planned, N = 3*; Unplanned/Unplanned, N = 13*
  Inaccurate: Unplanned/Planned, N = 0*; Planned/Unplanned, N = 25*
  Rationale: The algorithm misclassified as planned 25/38 (66%) of unplanned readmissions in which diagnostic catheterizations were the only qualifying planned procedure. It also correctly identified 3/3 (100%) of planned readmissions in which diagnostic cardiac catheterizations were the only qualifying planned procedure. This is the highest-volume procedure in national data.

Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (CCS 48)
  Accurate: Planned/Planned, N = 7†; Unplanned/Unplanned, N = 1†
  Inaccurate: Unplanned/Planned, N = 1†; Planned/Unplanned, N = 4†
  Rationale: The algorithm misclassified as planned 4/5 (80%) of unplanned readmissions in which cardiac devices were the only qualifying procedure. However, it also correctly identified 7/8 (87.5%) of planned readmissions in which cardiac devices were the only qualifying planned procedure. CCS 48 is the second most common planned procedure category nationally.

NOTE: Abbreviations: CCS, Clinical Classification Software; ICD‐9, International Classification of Diseases, Ninth Revision. *Number of cases in which CCS 47 was the only qualifying procedure. †Number of cases in which CCS 48 was the only qualifying procedure.

The revised algorithm (v3.0) had a weighted sensitivity of 49.8%, weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5% (Table 2). In aggregate, these changes would increase the reported unplanned readmission rate from 16.0% to 16.1% in the hospital‐wide readmission measure, using 2011 to 2012 data, and would decrease the fraction of all readmissions considered planned from 7.8% to 7.2%.
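The v2.1 to v3.0 modifications summarized in Table 4 amount to small edits of the algorithm's category lists. A minimal sketch of those edits follows; the initial set contents, apart from the CCS and ICD‐9 codes cited in Table 4, are illustrative placeholders rather than the actual measure specification.

```python
# Partial, illustrative starting lists (only the cited codes are from the text).
planned_procedures = {"43", "44", "45", "47", "48", "211", "224"}
acute_diagnoses = {"2", "100", "109"}

# Remove two procedure categories that were nearly always misclassified.
planned_procedures -= {"211", "224"}  # therapeutic radiation, cancer chemotherapy

# Add hypertension with complications to the acute diagnosis list.
acute_diagnoses.add("99")

# Split mixed CCS groups: move only their acute ICD-9 codes to the acute list.
acute_diagnoses.add("577.0")  # acute pancreatitis, from pancreatic disorders (CCS 152)
acute_diagnoses |= {"574.0", "574.3", "574.6", "574.8",
                    "575.0", "575.12", "576.1"}  # from biliary tract disease (CCS 149)
```

Diagnostic catheterization (CCS 47) and cardiac devices (CCS 48) are deliberately left on the planned procedure list pending further study, as described above.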

DISCUSSION

We developed an algorithm based on administrative data that in its currently implemented form is very accurate at identifying unplanned readmissions, ensuring that readmissions included in publicly reported readmission measures are likely to be truly unplanned. However, nearly half of readmissions the algorithm classifies as planned are actually unplanned. That is, the algorithm is overcautious in excluding unplanned readmissions that could have counted as outcomes, particularly among admissions that include diagnostic cardiac catheterization or placement of cardiac devices (pacemakers, defibrillators). However, these errors only occur within the 7.8% of readmissions that are classified as planned and therefore do not affect overall readmission rates dramatically. A perfect algorithm would reclassify approximately half of these planned readmissions as unplanned, increasing the overall readmission rate by 0.6 percentage points.

On the other hand, the algorithm also only identifies approximately half of true planned readmissions as planned. Because the true prevalence of planned readmissions is low (approximately 9% of readmissions based on weighted chart review prevalence, or an absolute rate of 1.4%), this low sensitivity has a small effect on algorithm performance. Removing all true planned readmissions from the measure outcome would decrease the overall readmission rate by 0.8 percentage points, similar in magnitude to the expected 0.6 percentage point increase that would result from better identifying unplanned readmissions; thus, a perfect algorithm would likely decrease the reported unplanned readmission rate by a net 0.2 percentage points. Overall, the existing algorithm appears to come close to the true prevalence of planned readmissions, despite inaccuracy on an individual‐case basis. The algorithm performed best at large hospitals, which are at greatest risk of being statistical outliers and of accruing penalties under the Hospital Readmissions Reduction Program.[10]
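The percentage-point estimates above can be checked with back-of-envelope arithmetic. This sketch uses the rounded figures from the Results (16.0% reported rate, 7.8% flagged-planned share, 51.6% positive predictive value, 45.1% sensitivity, 1.4 percentage point true planned rate) and applies the planned share directly to the reported rate, so it reproduces the text's numbers only approximately.

```python
reported_rate = 16.0      # reported unplanned readmission rate, percentage points
flagged_share = 0.078     # share of readmissions the algorithm labels planned
ppv, sensitivity = 0.516, 0.451
true_planned_pp = 1.4     # absolute rate of truly planned readmissions, pp

# Unplanned readmissions wrongly excluded as planned: reclassifying them
# would raise the reported rate.
increase = (1 - ppv) * flagged_share * reported_rate

# Truly planned readmissions still counted in the outcome: removing them
# would lower the reported rate.
decrease = (1 - sensitivity) * true_planned_pp

print(round(increase, 1), round(decrease, 1))  # → 0.6 0.8
```

The two effects nearly cancel, which is why a perfect algorithm would shift the reported rate by only about 0.2 percentage points.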

We identified several changes that marginally improved the performance of the algorithm by reducing the number of unplanned readmissions that are incorrectly removed from the measure, while avoiding the inappropriate inclusion of planned readmissions in the outcome. This revised algorithm, v3.0, was applied to public reporting of readmission rates at the end of 2014. Overall, implementing these changes increases the reported readmission rate very slightly. We also identified other procedures associated with high inaccuracy rates, removal of which would have larger impact on reporting rates, and which therefore merit further evaluation.

There are other potential methods of identifying planned readmissions. For instance, as of October 1, 2013, new administrative billing codes were created to allow hospitals to indicate that a patient was discharged with a planned acute‐care hospital inpatient readmission, without limitation as to when it will take place.[11] This code must be used at the time of the index admission to indicate that a future planned admission is expected, and was specified only to be used for neonates and patients with acute myocardial infarction. This approach, however, would omit planned readmissions that are not known to the initial discharging team, potentially missing planned readmissions. Conversely, some patients discharged with a plan for readmission may be unexpectedly readmitted for an unplanned reason. Given that the new codes were not available at the time we conducted the validation study, we were not able to determine how often the billing codes accurately identified planned readmissions. This would be an important area to consider for future study.

An alternative approach would be to create indicator codes to be applied at the time of readmission that would indicate whether that admission was planned or unplanned. Such a code would have the advantage of allowing each planned readmission to be flagged by the admitting clinicians at the time of admission rather than by an algorithm that inherently cannot be perfect. However, identifying planned readmissions at the time of readmission would also create opportunity for gaming and inconsistent application of definitions between hospitals; additional checks would need to be put in place to guard against these possibilities.

Our study has some limitations. We relied on the opinion of chart abstractors to determine whether a readmission was planned or unplanned; in a few cases, such as smoldering wounds that ultimately require surgical intervention, that determination is debatable. Abstractions were done at local institutions to minimize risks to patient privacy, and therefore we could not centrally verify determinations of planned status except by reviewing source of admission, dates of procedures, and narrative comments reported by the abstractors. Finally, we did not have sufficient volume of planned procedures to determine accuracy of the algorithm for less common procedure categories or individual procedures within categories.

In summary, we developed an algorithm to identify planned readmissions from administrative data that had high specificity and moderate sensitivity, and refined it based on chart validation. This algorithm is in use in public reporting of readmission measures to maximize the probability that the reported readmission rates represent truly unplanned readmissions.[12]

Disclosures: Financial support: This work was performed under contract HHSM‐500‐2008‐0025I/HHSM‐500‐T0001, Modification No. 000008, titled Measure Instrument Development and Support, funded by the Centers for Medicare and Medicaid Services (CMS), an agency of the US Department of Health and Human Services. Drs. Horwitz and Ross are supported by the National Institute on Aging (K08 AG038336 and K08 AG032886, respectively) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Krumholz is supported by grant U01 HL105270‐05 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. No funding source had any role in the study design; in the collection, analysis, and interpretation of data; or in the writing of the article. The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript. All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that all authors have support from the CMS for the submitted work. In addition, Dr. Ross is a member of a scientific advisory board for FAIR Health Inc. Dr. Krumholz chairs a cardiac scientific advisory board for UnitedHealth and is the recipient of research agreements from Medtronic and Johnson & Johnson through Yale University, to develop methods of clinical trial data sharing. All other authors report no conflicts of interest.

The Centers for Medicare & Medicaid Services (CMS) publicly reports all‐cause risk‐standardized readmission rates after acute‐care hospitalization for acute myocardial infarction, pneumonia, heart failure, total hip and knee arthroplasty, chronic obstructive pulmonary disease, stroke, and for patients hospital‐wide.[1, 2, 3, 4, 5] Ideally, these measures should capture unplanned readmissions that arise from acute clinical events requiring urgent rehospitalization. Planned readmissions, which are scheduled admissions usually involving nonurgent procedures, may not be a signal of quality of care. Including planned readmissions in readmission quality measures could create a disincentive to provide appropriate care to patients who are scheduled for elective or necessary procedures unrelated to the quality of the prior admission. Accordingly, under contract to the CMS, we were asked to develop an algorithm to identify planned readmissions. A version of this algorithm is now incorporated into all publicly reported readmission measures.

Given the widespread use of the planned readmission algorithm in public reporting and its implications for hospital quality measurement and evaluation, the objective of this study was to describe the development process, and to validate and refine the algorithm by reviewing charts of readmitted patients.

METHODS

Algorithm Development

To create a planned readmission algorithm, we first defined planned. We determined that readmissions for obstetrical delivery, maintenance chemotherapy, major organ transplant, and rehabilitation should always be considered planned in the sense that they are desired and/or inevitable, even if not specifically planned on a certain date. Apart from these specific types of readmissions, we defined planned readmissions as nonacute readmissions for scheduled procedures, because the vast majority of planned admissions are related to procedures. We also defined readmissions for acute illness or for complications of care as unplanned for the purposes of a quality measure. Even if such readmissions included a potentially planned procedure, because complications of care represent an important dimension of quality that should not be excluded from outcome measurement, these admissions should not be removed from the measure outcome. This definition of planned readmissions does not imply that all unplanned readmissions are unexpected or avoidable. However, it has proven very difficult to reliably define avoidable readmissions, even by expert review of charts, and we did not attempt to do so here.[6, 7]

In the second stage, we operationalized this definition into an algorithm. We used the Agency for Healthcare Research and Quality's Clinical Classification Software (CCS) codes to group thousands of individual procedure and diagnosis International Classification of Disease, Ninth Revision, Clinical Modification (ICD‐9‐CM) codes into clinically coherent, mutually exclusive procedure CCS categories and mutually exclusive diagnosis CCS categories, respectively. Clinicians on the investigative team reviewed the procedure categories to identify those that are commonly planned and that would require inpatient admission. We also reviewed the diagnosis categories to identify acute diagnoses unlikely to accompany elective procedures. We then created a flow diagram through which every readmission could be run to determine whether it was planned or unplanned based on our categorizations of procedures and diagnoses (Figure 1, and Supporting Information, Appendix A, in the online version of this article). This version of the algorithm (v1.0) was submitted to the National Quality Forum (NQF) as part of the hospital‐wide readmission measure. The measure (NQF #1789) received endorsement in April 2012.
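The flow-diagram logic described above can be sketched as a short classification function. The set contents below are illustrative placeholders, not the actual CCS lists from the measure specification (only diagnosis CCS 45, maintenance chemotherapy, is taken from the text).

```python
ALWAYS_PLANNED_DX = {"45", "OB_DELIVERY", "TRANSPLANT", "REHAB"}  # hypothetical labels
POTENTIALLY_PLANNED_PROCS = {"43", "44", "158"}                   # illustrative subset
ACUTE_DX = {"100", "109", "99"}                                   # illustrative subset

def classify_readmission(principal_dx_ccs, procedure_ccs_codes):
    """Classify a readmission as 'planned' or 'unplanned'."""
    # Step 1: a few diagnosis categories (obstetrical delivery, maintenance
    # chemotherapy, major organ transplant, rehabilitation) are always planned.
    if principal_dx_ccs in ALWAYS_PLANNED_DX:
        return "planned"
    # Step 2: a potentially planned procedure qualifies the admission as
    # planned only when the principal diagnosis is not acute.
    if set(procedure_ccs_codes) & POTENTIALLY_PLANNED_PROCS \
            and principal_dx_ccs not in ACUTE_DX:
        return "planned"
    # Step 3: everything else is unplanned.
    return "unplanned"
```

For example, a heart valve procedure with a nonacute principal diagnosis is classified as planned, while the same procedure performed during an admission with an acute principal diagnosis remains unplanned.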

Figure 1
Flow diagram for planned readmissions (see Supporting Information, Appendix A, in the online version of this article for referenced tables).

In the third stage of development, we posted the algorithm for 2 public comment periods and recruited 27 outside experts to review and refine the algorithm following a standardized, structured process (see Supporting Information, Appendix B, in the online version of this article). Because the measures publicly report and hold hospitals accountable for unplanned readmission rates, we felt it most important that the algorithm include as few planned readmissions in the reported, unplanned outcome as possible (ie, have high negative predictive value). Therefore, in equivocal situations in which experts felt procedure categories were equally often planned or unplanned, we added those procedures to the potentially planned list. We also solicited feedback from hospitals on algorithm performance during a confidential test run of the hospital‐wide readmission measure in the fall of 2012. Based on all of this feedback, we made a number of changes to the algorithm, which was then identified as v2.1. Version 2.1 of the algorithm was submitted to the NQF as part of the endorsement process for the acute myocardial infarction and heart failure readmission measures and was endorsed by the NQF in January 2013. The algorithm (v2.1) is now applied, adapted if necessary, to all publicly reported readmission measures.[8]

Algorithm Validation: Study Cohort

We recruited 2 hospital systems to participate in a chart validation study of the accuracy of the planned readmission algorithm (v2.1). Within these 2 health systems, we selected 7 hospitals with varying bed size, teaching status, and safety‐net status. Each included 1 large academic teaching hospital that serves as a regional referral center. For each hospital's index admissions, we applied the inclusion and exclusion criteria from the hospital‐wide readmission measure. Index admissions were included for patients age 65 years or older; enrolled in Medicare fee‐for‐service (FFS); discharged from a nonfederal, short‐stay, acute‐care hospital or critical access hospital; without an in‐hospital death; not transferred to another acute‐care facility; and enrolled in Part A Medicare for 1 year prior to discharge. We excluded index admissions for patients without at least 30 days postdischarge enrollment in FFS Medicare, discharged against medical advice, admitted for medical treatment of cancer or primary psychiatric disease, admitted to a Prospective Payment System‐exempt cancer hospital, or who died during the index hospitalization. In addition, for this study, we included only index admissions that were followed by a readmission to a hospital within the participating health system between July 1, 2011 and June 30, 2012. Institutional review board approval was obtained from each of the participating health systems, which granted waivers of signed informed consent and Health Insurance Portability and Accountability Act waivers.

Algorithm Validation: Sample Size Calculation

We determined a priori that the minimum acceptable positive predictive value, or proportion of all readmissions the algorithm labels planned that are truly planned, would be 60%, and the minimum acceptable negative predictive value, or proportion of all readmissions the algorithm labels as unplanned that are truly unplanned, would be 80%. We calculated the sample size required to be confident of these values within ±10% and determined we would need a total of 291 planned charts and 162 unplanned charts. We inflated these numbers by 20% to account for missing or unobtainable charts for a total of 550 charts. To achieve this sample size, we included all eligible readmissions from all participating hospitals that were categorized as planned. At the 5 smaller hospitals, we randomly selected an equal number of unplanned readmissions occurring at any hospital in its healthcare system. At the 2 largest hospitals, we randomly selected 50 unplanned readmissions occurring at any hospital in its healthcare system.

Algorithm Validation: Data Abstraction

We developed an abstraction tool, tested and refined it using sample charts, and built the final tool into a secure, password‐protected Microsoft Access 2007 (Microsoft Corp., Redmond, WA) database (see Supporting Information, Appendix C, in the online version of this article). Experienced chart abstractors with RN or MD degrees from each hospital site participated in a 1‐hour training session to become familiar with reviewing medical charts, defining planned/unplanned readmissions, and the data abstraction process. For each readmission, we asked abstractors to review as needed: emergency department triage and physician notes, admission history and physical, operative report, discharge summary, and/or discharge summary from a prior admission. The abstractors verified the accuracy of the administrative billing data, including procedures and principal diagnosis. In addition, they abstracted the source of admission and dates of all major procedures. Then the abstractors provided their opinion and supporting rationale as to whether a readmission was planned or unplanned. They were not asked to determine whether the readmission was preventable. To determine the inter‐rater reliability of data abstraction, an independent abstractor at each health system recoded a random sample of 10% of the charts.

Statistical Analysis

To ensure that we had obtained a representative sample of charts, we identified the 10 most commonly planned procedures among cases identified as planned by the algorithm in the validation cohort and then compared this with planned cases nationally. To confirm the reliability of the abstraction process, we used the kappa statistic to determine the inter‐rater reliability of the determination of planned or unplanned status. Additionally, the full study team, including 5 practicing clinicians, reviewed the details of every chart abstraction in which the algorithm was found to have misclassified the readmission as planned or unplanned. In 11 cases we determined that the abstractor had misunderstood the definition of planned readmission (ie, not all direct admissions are necessarily planned) and we reclassified the chart review assignment accordingly.
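The kappa statistic mentioned above measures agreement between two raters beyond what chance alone would produce. A minimal sketch, using a small hypothetical set of duplicate abstractions (the chart labels are invented for illustration):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement if the two raters labeled charts independently.
    expected = sum(c1[label] * c2[label] for label in c1.keys() | c2.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical re-abstraction of 5 charts (P = planned, U = unplanned).
primary = ["P", "P", "P", "U", "U"]
recode = ["P", "P", "U", "U", "U"]
print(round(cohen_kappa(primary, recode), 2))  # → 0.62
```

A value of 1 indicates perfect agreement and 0 indicates chance-level agreement; the study's observed kappa of 0.83 (reported in the Results) reflects strong inter-rater reliability.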

We calculated sensitivity, specificity, positive predictive value, and negative predictive value of the algorithm for the validation cohort as a whole, weighted to account for the prevalence of planned readmissions as defined by the algorithm in the national data (7.8%). Weighting is necessary because we did not obtain a pure random sample, but rather selected a stratified sample that oversampled algorithm‐identified planned readmissions.[9] We also calculated these rates separately for large hospitals (>600 beds) and for small hospitals (≤600 beds).

Finally, we examined performance of the algorithm for individual procedures and diagnoses to determine whether any procedures or diagnoses should be added or removed from the algorithm. First, we reviewed the diagnoses, procedures, and brief narratives provided by the abstractors for all cases in which the algorithm misclassified the readmission as either planned or unplanned. Second, we calculated the positive predictive value for each procedure that had been flagged as planned by the algorithm, and reviewed all readmissions (correctly and incorrectly classified) in which procedures with low positive predictive value took place. We also calculated the frequency with which the procedure was the only qualifying procedure resulting in an accurate or inaccurate classification. Third, to identify changes that should be made to the lists of acute and nonacute diagnoses, we reviewed the principal diagnosis for all readmissions misclassified by the algorithm as either planned or unplanned, and examined the specific ICD‐9‐CM codes within each CCS group that were most commonly associated with misclassifications.
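The procedure-level accuracy check described above reduces to computing, for each procedure category, the share of algorithm-flagged planned readmissions that chart review verified as planned. A sketch using three rows of counts from Table 3:

```python
# (total flagged planned by the algorithm, verified planned on chart review)
flagged_and_verified = {
    "47 Diagnostic cardiac catheterization": (44, 11),
    "224 Cancer chemotherapy": (40, 22),
    "211 Therapeutic radiology for cancer treatment": (12, 0),
}

for category, (flagged, verified) in flagged_and_verified.items():
    ppv = verified / flagged  # positive predictive value for this category
    print(f"{category}: {verified}/{flagged} = {ppv:.0%}")
```

Categories with low positive predictive value (such as therapeutic radiology, at 0%) became candidates for removal from the planned procedure list.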

After determining the changes that should be made to the algorithm based on these analyses, we recalculated the sensitivity, specificity, positive predictive value, and negative predictive value of the proposed revised algorithm (v3.0). All analyses used SAS version 9.3 (SAS Institute, Cary, NC).

RESULTS

Study Cohort

Characteristics of participating hospitals are shown in Table 1. Hospitals represented in this sample ranged in size, teaching status, and safety net status, although all were nonprofit. We selected 663 readmissions for review, 363 planned and 300 unplanned. Overall, we were able to select 80% of hospitals' planned cases for review; the remainder occurred at hospitals outside the participating hospital system. Abstractors were able to locate and review 634 (96%) of the eligible charts (range, 86%–100% per hospital). The kappa statistic for inter‐rater reliability was 0.83.

Hospital Characteristics
Description | Hospitals, N | Readmissions Selected for Review, N* | Readmissions Reviewed, N (% of Eligible) | Unplanned Readmissions Reviewed, N | Planned Readmissions Reviewed, N | % of Hospital's Planned Readmissions Reviewed*
All hospitals | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
No. of beds, >600 | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
No. of beds, >300–600 | 2 | 190 | 173 (91.1) | 85 | 88 | 87.1
No. of beds, <300 | 3 | 127 | 122 (96.0) | 82 | 40 | 44.9
Ownership, Government | 0 |  |  |  |  |
Ownership, For profit | 0 |  |  |  |  |
Ownership, Not for profit | 7 | 663 | 634 (95.6) | 283 | 351 | 77.3
Teaching status, Teaching | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Teaching status, Nonteaching | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Safety net status, Safety net | 2 | 346 | 339 (98.0) | 116 | 223 | 84.5
Safety net status, Nonsafety net | 5 | 317 | 295 (93.1) | 167 | 128 | 67.4
Region, New England | 3 | 409 | 392 (95.8) | 155 | 237 | 85.9
Region, South Central | 4 | 254 | 242 (95.3) | 128 | 114 | 64.0

NOTE: *Nonselected cases were readmitted to hospitals outside the system and could not be reviewed.

The study sample included 57/67 (85%) of the procedure or condition categories on the potentially planned list. The most common procedure CCS categories among planned readmissions (v2.1) in the validation cohort were very similar to those in the national dataset (see Supporting Information, Appendix D, in the online version of this article). Of the top 20 most commonly planned procedure CCS categories in the validation set, all but 2, therapeutic radiology for cancer treatment (CCS 211) and peripheral vascular bypass (CCS 55), were among the top 20 most commonly planned procedure CCS categories in the national data.

Test Characteristics of Algorithm

The weighted test characteristics of the current algorithm (v2.1) are shown in Table 2. Overall, the algorithm correctly identified 266 readmissions as unplanned and 181 readmissions as planned, and misidentified 170 readmissions as planned and 15 as unplanned. Once weighted to account for the stratified sampling design, the overall prevalence of true planned readmissions was 8.9% of readmissions. The weighted sensitivity was 45.1% overall and was higher in large teaching centers than in smaller community hospitals. The weighted specificity was 95.9%. The positive predictive value was 51.6%, and the negative predictive value was 94.7%.
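The weighting step can be reconstructed from the raw counts above. This sketch assumes a single sampling weight per stratum, derived from the 7.8% national prevalence of algorithm-identified planned readmissions; small rounding differences from the reported figures (eg, 45.0% vs 45.1% sensitivity) are expected.

```python
tp, fp = 181, 170   # algorithm said planned: chart planned / chart unplanned
tn, fn = 266, 15    # algorithm said unplanned: chart unplanned / chart planned
p_flagged = 0.078   # national share of readmissions the algorithm flags planned

w_p = p_flagged / (tp + fp)         # weight per chart, flagged-planned stratum
w_u = (1 - p_flagged) / (tn + fn)   # weight per chart, flagged-unplanned stratum

sensitivity = tp * w_p / (tp * w_p + fn * w_u)
specificity = tn * w_u / (tn * w_u + fp * w_p)
ppv = tp / (tp + fp)   # strata are defined by the algorithm label, so PPV
npv = tn / (tn + fn)   # and NPV need no reweighting
prevalence = tp * w_p + fn * w_u   # weighted share of truly planned readmissions
```

Note that because the sample was stratified on the algorithm's own label, the positive and negative predictive values are unaffected by the weighting; only sensitivity, specificity, and prevalence change.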

Test Characteristics of the Algorithm
Cohort | Sensitivity | Specificity | Positive Predictive Value | Negative Predictive Value
Algorithm v2.1
Full cohort | 45.1% | 95.9% | 51.6% | 94.7%
Large hospitals | 50.9% | 96.1% | 53.8% | 95.6%
Small hospitals | 40.2% | 95.5% | 47.7% | 94.0%
Revised algorithm v3.0
Full cohort | 49.8% | 96.5% | 58.7% | 94.5%
Large hospitals | 57.1% | 96.8% | 63.0% | 95.9%
Small hospitals | 42.6% | 95.9% | 52.6% | 93.9%

Accuracy of Individual Diagnoses and Procedures

The positive predictive value of the algorithm for individual procedure categories varied widely, from 0% to 100% among procedures with at least 10 cases (Table 3). The procedure for which the algorithm was least accurate was CCS 211, therapeutic radiology for cancer treatment (0% positive predictive value). By contrast, maintenance chemotherapy (90%) and other therapeutic procedures, hemic and lymphatic system (100%) were most accurate. Common procedures with less than 50% positive predictive value (ie, that the algorithm commonly misclassified as planned) were diagnostic cardiac catheterization (25%); debridement of wound, infection, or burn (25%); amputation of lower extremity (29%); insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (33%); and other hernia repair (43%). Of these, diagnostic cardiac catheterization and cardiac devices are the first and second most common procedures nationally, respectively.

Positive Predictive Value of Algorithm by Procedure Category (Among Procedures With at Least Ten Readmissions in Validation Cohort)
  • NOTE: Abbreviations: CCS, Clinical Classification Software; OR, operating room.

CCS Code | Readmission Procedure | Categorized as Planned by Algorithm, N | Verified as Planned by Chart Review, N | Positive Predictive Value
47 | Diagnostic cardiac catheterization; coronary arteriography | 44 | 11 | 25%
224 | Cancer chemotherapy | 40 | 22 | 55%
157 | Amputation of lower extremity | 31 | 9 | 29%
49 | Other operating room heart procedures | 27 | 16 | 59%
48 | Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator | 24 | 8 | 33%
43 | Heart valve procedures | 20 | 16 | 80%
45 (diagnosis) | Maintenance chemotherapy | 20 | 18 | 90%
78 | Colorectal resection | 18 | 9 | 50%
169 | Debridement of wound, infection or burn | 16 | 4 | 25%
84 | Cholecystectomy and common duct exploration | 16 | 5 | 31%
99 | Other OR gastrointestinal therapeutic procedures | 16 | 8 | 50%
158 | Spinal fusion | 15 | 11 | 73%
142 | Partial excision bone | 14 | 10 | 71%
86 | Other hernia repair | 14 | 6 | 42%
44 | Coronary artery bypass graft | 13 | 10 | 77%
67 | Other therapeutic procedures, hemic and lymphatic system | 13 | 13 | 100%
211 | Therapeutic radiology for cancer treatment | 12 | 0 | 0%
45 | Percutaneous transluminal coronary angioplasty | 11 | 7 | 64%
Total | | 497 | 272 | 54.7%
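Each positive predictive value above is the chart-verified planned count divided by the algorithm-flagged planned count. A minimal sketch of that tabulation, using a few rows transcribed from the table (an illustrative subset, not the full analysis):

```python
# Per-category PPV from (algorithm-planned N, chart-verified planned N) pairs;
# counts transcribed from the validation table above (subset only).
counts = {
    "47 Diagnostic cardiac catheterization": (44, 11),
    "224 Cancer chemotherapy": (40, 22),
    "157 Amputation of lower extremity": (31, 9),
    "48 Cardiac pacemaker/defibrillator procedures": (24, 8),
    "211 Therapeutic radiology for cancer treatment": (12, 0),
}

ppv = {name: verified / flagged for name, (flagged, verified) in counts.items()}

# Categories the algorithm commonly misclassified as planned (PPV < 50%)
low_ppv = {name for name, p in ppv.items() if p < 0.5}
for name in sorted(low_ppv):
    print(f"{name}: PPV {ppv[name]:.0%}")
```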

The readmissions with the lowest abstractor agreement were those involving CCS 157 (amputation of lower extremity) and CCS 169 (debridement of wound, infection or burn). Readmissions for these procedures were nearly always performed as a consequence of acute worsening of chronic conditions such as osteomyelitis or ulceration. Abstractors were divided over whether it was appropriate to call these readmissions planned.

Changes to the Algorithm

We determined that the accuracy of the algorithm would be improved by removing 2 procedure categories from the planned procedure list (therapeutic radiation [CCS 211] and cancer chemotherapy [CCS 224]), adding 1 diagnosis category to the acute diagnosis list (hypertension with complications [CCS 99]), and splitting 2 diagnosis condition categories into acute and nonacute ICD‐9‐CM codes (pancreatic disorders [CCS 152] and biliary tract disease [CCS 149]). Detailed rationales for each modification to the planned readmission algorithm are described in Table 4. We felt further examination of diagnostic cardiac catheterization and cardiac devices was warranted given their high frequency, despite their low positive predictive value. We also elected not to alter the categorization of amputation or debridement because it was not easy to determine whether these admissions were planned or unplanned even with chart review. We plan further analyses of these procedure categories.
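These modifications are, in effect, edits to the lookup lists that drive the algorithm's core rule: a readmission is called planned when it carries a potentially planned procedure and its principal diagnosis is not on the acute list. A schematic sketch of the v2.1-to-v3.0 list edits follows; the category sets are illustrative excerpts, not the full CMS lists, and the real algorithm also has an always-planned list (eg, maintenance chemotherapy) not shown here:

```python
# Schematic sketch of the list edits described above (illustrative excerpts;
# the full planned-procedure and acute-diagnosis CCS lists are much longer).
planned_procedure_ccs = {"43", "44", "45", "47", "48", "211", "224"}
acute_diagnosis_ccs = {"2"}   # eg, septicemia (excerpt of the acute list)
acute_icd9_codes = set()      # individual ICD-9 codes treated as acute

# v3.0 changes:
planned_procedure_ccs -= {"211", "224"}  # drop therapeutic radiation, chemo
acute_diagnosis_ccs |= {"99"}            # hypertension with complications
acute_icd9_codes |= {
    "577.0",                             # acute pancreatitis (from CCS 152)
    "574.0", "574.3", "574.6", "574.8",  # acute codes split out of
    "575.0", "575.12", "576.1",          # biliary tract disease (CCS 149)
}

def is_planned(procedure_ccs, principal_dx_ccs, principal_dx_icd9):
    """Core rule: potentially planned procedure + non-acute principal diagnosis."""
    return (procedure_ccs in planned_procedure_ccs
            and principal_dx_ccs not in acute_diagnosis_ccs
            and principal_dx_icd9 not in acute_icd9_codes)
```

Under the revised lists, for example, a readmission whose only qualifying procedure is cancer chemotherapy (CCS 224) is no longer classified as planned, and an otherwise-qualifying procedure performed during an admission for acute pancreatitis (ICD-9 577.0) is now counted as unplanned.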

Suggested Changes to Planned Readmission Algorithm v2.1 With Rationale
  • NOTE: Abbreviations: CCS, Clinical Classification Software; ICD‐9, International Classification of Diseases, Ninth Revision. *Number of cases in which CCS 47 was the only qualifying procedure. †Number of cases in which CCS 48 was the only qualifying procedure.

Action: Remove from planned procedure list

Therapeutic radiation (CCS 211)
  Accurate: algorithm Planned/chart Planned, N=0; algorithm Unplanned/chart Unplanned, N=0
  Inaccurate: algorithm Unplanned/chart Planned, N=0; algorithm Planned/chart Unplanned, N=12
  Rationale: The algorithm was inaccurate in every case. All therapeutic radiology during readmissions was performed because of acute illness (pain crisis, neurologic crisis) or because scheduled treatment occurred during an unplanned readmission. In national data, this ranks as the 25th most common planned procedure identified by the algorithm v2.1.

Cancer chemotherapy (CCS 224)
  Accurate: algorithm Planned/chart Planned, N=22; algorithm Unplanned/chart Unplanned, N=0
  Inaccurate: algorithm Unplanned/chart Planned, N=0; algorithm Planned/chart Unplanned, N=18
  Rationale: Of the 22 correctly identified as planned, 18 (82%) would already have been categorized as planned because of a principal diagnosis of maintenance chemotherapy. Therefore, removing CCS 224 from the planned procedure list would miss only a small fraction of planned readmissions but would avoid a large number of misclassifications. In national data, this ranks as the 8th most common planned procedure identified by the algorithm v2.1.

Action: Add to planned procedure list

None
  Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. A handful of these cases were missed because the planned procedure was not on the current planned procedure list; however, those procedures (eg, abdominal paracentesis, colonoscopy, endoscopy) were nearly always unplanned overall and should therefore not be added as procedures that can qualify an admission as planned.

Action: Remove from acute diagnosis list

None
  Rationale: The abstractors felt a planned readmission was missed by the algorithm in 15 cases. The relevant disqualifying acute diagnoses were much more often associated with unplanned readmissions in our dataset.

Action: Add to acute diagnosis list

Hypertension with complications (CCS 99)
  Accurate: algorithm Planned/chart Planned, N=1; algorithm Unplanned/chart Unplanned, N=2
  Inaccurate: algorithm Unplanned/chart Planned, N=0; algorithm Planned/chart Unplanned, N=10
  Rationale: This CCS was associated with only 1 planned readmission (for elective nephrectomy, a very rare procedure). Every other time this CCS appeared in the dataset, it was associated with an unplanned readmission (12/13, 92%); 10 of those, however, were misclassified by the algorithm as planned because they were not excluded by diagnosis (91% error rate). Consequently, adding this CCS to the acute diagnosis list is likely to miss only a very small fraction of planned readmissions, while making the overall algorithm much more accurate.

Action: Split diagnosis condition category into component ICD‐9 codes

Pancreatic disorders (CCS 152)
  Accurate: algorithm Planned/chart Planned, N=0; algorithm Unplanned/chart Unplanned, N=1
  Inaccurate: algorithm Unplanned/chart Planned, N=0; algorithm Planned/chart Unplanned, N=2
  Rationale: ICD‐9 code 577.0 (acute pancreatitis) is the only acute code in this CCS. Acute pancreatitis was present in 2 cases that were misclassified as planned. Clinically, there is no situation in which a planned procedure would reasonably be performed in the setting of acute pancreatitis. Moving ICD‐9 code 577.0 to the acute list and leaving the rest of the ICD‐9 codes in CCS 152 on the nonacute list will enable the algorithm to continue to identify planned procedures for chronic pancreatitis.

Biliary tract disease (CCS 149)
  Accurate: algorithm Planned/chart Planned, N=2; algorithm Unplanned/chart Unplanned, N=3
  Inaccurate: algorithm Unplanned/chart Planned, N=0; algorithm Planned/chart Unplanned, N=12
  Rationale: This CCS is a mix of acute and chronic diagnoses. Of 14 charts classified as planned with CCS 149 in the principal diagnosis field, 12 were misclassified (of which 10 were associated with cholecystectomy). Separating out the acute and nonacute diagnoses will increase the accuracy of the algorithm while still ensuring that planned cholecystectomies and other procedures can be identified. Of the ICD‐9 codes in CCS 149, the following will be added to the acute diagnosis list: 574.0, 574.3, 574.6, 574.8, 575.0, 575.12, 576.1.

Action: Consider for change after additional study

Diagnostic cardiac catheterization (CCS 47)
  Accurate: algorithm Planned/chart Planned, N=3*; algorithm Unplanned/chart Unplanned, N=13*
  Inaccurate: algorithm Unplanned/chart Planned, N=0*; algorithm Planned/chart Unplanned, N=25*
  Rationale: The algorithm misclassified as planned 25/38 (66%) unplanned readmissions in which diagnostic catheterizations were the only qualifying planned procedure. It also correctly identified 3/3 (100%) planned readmissions in which diagnostic cardiac catheterizations were the only qualifying planned procedure. This is the highest volume procedure in national data.

Insertion, revision, replacement, removal of cardiac pacemaker or cardioverter/defibrillator (CCS 48)
  Accurate: algorithm Planned/chart Planned, N=7†; algorithm Unplanned/chart Unplanned, N=1†
  Inaccurate: algorithm Unplanned/chart Planned, N=1†; algorithm Planned/chart Unplanned, N=4†
  Rationale: The algorithm misclassified as planned 4/5 (80%) unplanned readmissions in which cardiac devices were the only qualifying procedure. However, it also correctly identified 7/8 (87.5%) planned readmissions in which cardiac devices were the only qualifying planned procedure. CCS 48 is the second most common planned procedure category nationally.

The revised algorithm (v3.0) had a weighted sensitivity of 49.8%, weighted specificity of 96.5%, positive predictive value of 58.7%, and negative predictive value of 94.5% (Table 2). In aggregate, these changes would increase the reported unplanned readmission rate from 16.0% to 16.1% in the hospital‐wide readmission measure, using 2011 to 2012 data, and would decrease the fraction of all readmissions considered planned from 7.8% to 7.2%.

DISCUSSION

We developed an algorithm based on administrative data that in its currently implemented form is very accurate at identifying unplanned readmissions, ensuring that readmissions included in publicly reported readmission measures are likely to be truly unplanned. However, nearly half of readmissions the algorithm classifies as planned are actually unplanned. That is, the algorithm is overcautious in excluding unplanned readmissions that could have counted as outcomes, particularly among admissions that include diagnostic cardiac catheterization or placement of cardiac devices (pacemakers, defibrillators). However, these errors only occur within the 7.8% of readmissions that are classified as planned and therefore do not affect overall readmission rates dramatically. A perfect algorithm would reclassify approximately half of these planned readmissions as unplanned, increasing the overall readmission rate by 0.6 percentage points.

On the other hand, the algorithm also identifies only approximately half of true planned readmissions as planned. Because the true prevalence of planned readmissions is low (approximately 9% of readmissions based on weighted chart review prevalence, or an absolute rate of 1.4%), this low sensitivity has a small effect on algorithm performance. Removing all true planned readmissions from the measure outcome would decrease the overall readmission rate by 0.8 percentage points, similar to the expected 0.6 percentage point increase that would result from better identifying unplanned readmissions; thus, a perfect algorithm would likely decrease the reported unplanned readmission rate by a net 0.2 percentage points. Overall, the existing algorithm appears to come close to the true prevalence of planned readmissions, despite inaccuracy on an individual‐case basis. The algorithm performed best at large hospitals, which are at greatest risk of being statistical outliers and of accruing penalties under the Hospital Readmissions Reduction Program.[10]

We identified several changes that marginally improved the performance of the algorithm by reducing the number of unplanned readmissions that are incorrectly removed from the measure, while avoiding the inappropriate inclusion of planned readmissions in the outcome. This revised algorithm, v3.0, was applied to public reporting of readmission rates at the end of 2014. Overall, implementing these changes increases the reported readmission rate very slightly. We also identified other procedures associated with high inaccuracy rates, removal of which would have larger impact on reporting rates, and which therefore merit further evaluation.

There are other potential methods of identifying planned readmissions. For instance, as of October 1, 2013, new administrative billing codes were created to allow hospitals to indicate that a patient was discharged with a planned acute‐care hospital inpatient readmission, without limitation as to when it will take place.[11] This code must be used at the time of the index admission to indicate that a future planned admission is expected, and was specified only to be used for neonates and patients with acute myocardial infarction. This approach, however, would omit planned readmissions that are not known to the initial discharging team, potentially missing planned readmissions. Conversely, some patients discharged with a plan for readmission may be unexpectedly readmitted for an unplanned reason. Given that the new codes were not available at the time we conducted the validation study, we were not able to determine how often the billing codes accurately identified planned readmissions. This would be an important area to consider for future study.

An alternative approach would be to create indicator codes to be applied at the time of readmission that would indicate whether that admission was planned or unplanned. Such a code would have the advantage of allowing each planned readmission to be flagged by the admitting clinicians at the time of admission rather than by an algorithm that inherently cannot be perfect. However, identifying planned readmissions at the time of readmission would also create opportunity for gaming and inconsistent application of definitions between hospitals; additional checks would need to be put in place to guard against these possibilities.

Our study has some limitations. We relied on the opinion of chart abstractors to determine whether a readmission was planned or unplanned; in a few cases, such as smoldering wounds that ultimately require surgical intervention, that determination is debatable. Abstractions were done at local institutions to minimize risks to patient privacy, and therefore we could not centrally verify determinations of planned status except by reviewing source of admission, dates of procedures, and narrative comments reported by the abstractors. Finally, we did not have sufficient volume of planned procedures to determine accuracy of the algorithm for less common procedure categories or individual procedures within categories.

In summary, we developed an algorithm to identify planned readmissions from administrative data that had high specificity and moderate sensitivity, and refined it based on chart validation. This algorithm is in use in public reporting of readmission measures to maximize the probability that the reported readmission rates represent truly unplanned readmissions.[12]

Disclosures: Financial support: This work was performed under contract HHSM‐500‐2008‐0025I/HHSM‐500‐T0001, Modification No. 000008, titled Measure Instrument Development and Support, funded by the Centers for Medicare and Medicaid Services (CMS), an agency of the US Department of Health and Human Services. Drs. Horwitz and Ross are supported by the National Institute on Aging (K08 AG038336 and K08 AG032886, respectively) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Krumholz is supported by grant U01 HL105270‐05 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. No funding source had any role in the study design; in the collection, analysis, and interpretation of data; or in the writing of the article. The CMS reviewed and approved the use of its data for this work and approved submission of the manuscript. All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare that all authors have support from the CMS for the submitted work. In addition, Dr. Ross is a member of a scientific advisory board for FAIR Health Inc. Dr. Krumholz chairs a cardiac scientific advisory board for UnitedHealth and is the recipient of research agreements from Medtronic and Johnson & Johnson through Yale University, to develop methods of clinical trial data sharing. All other authors report no conflicts of interest.

References
  1. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  2. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30‐day all‐cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252.
  3. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30‐day all‐cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  4. Grosso LM, Curtis JP, Lin Z, et al. Hospital‐level 30‐day all‐cause risk‐standardized readmission rate following elective primary total hip arthroplasty (THA) and/or total knee arthroplasty (TKA). Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page161(supp10 l):S66-S75.
  5. van Walraven C, Jennings A, Forster AJ. A meta‐analysis of hospital 30‐day avoidable readmission rates. J Eval Clin Pract. 2011;18(6):1211-1218.
  6. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391-E402.
  7. Horwitz LI, Partovian C, Lin Z, et al. Centers for Medicare 3(4):477-492.
  8. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342-343.
  9. Centers for Medicare and Medicaid Services. Inpatient Prospective Payment System/Long‐Term Care Hospital (IPPS/LTCH) final rule. Fed Regist. 2013;78:50533-50534.
  10. Long SK, Stockley K, Dahlen H. Massachusetts health reforms: uninsurance remains low, self‐reported health status improves as state prepares to tackle costs. Health Aff (Millwood). 2012;31(2):444-451.
Issue
Journal of Hospital Medicine - 10(10)
Page Number
670-677
Display Headline
Development and Validation of an Algorithm to Identify Planned Readmissions From Claims Data
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Leora Horwitz, MD, Department of Population Health, NYU School of Medicine, 550 First Avenue, TRB, Room 607, New York, NY 10016; Telephone: 646‐501‐2685; Fax: 646‐501‐2706; E‐mail: leora.horwitz@nyumc.org
Continuing Medical Education Program in the Journal of Hospital Medicine

If you wish to receive credit for this activity, which begins on the next page, please refer to the website: www.blackwellpublishing.com/cme.

Accreditation and Designation Statement

Blackwell Futura Media Services designates this educational activity for 1 AMA PRA Category 1 Credit. Physicians should claim only the credit commensurate with the extent of their participation in the activity.

Blackwell Futura Media Services is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians.

Educational Objectives

Upon completion of this educational activity, participants will be better able to:

  • Identify the approximate 30‐day readmission rate of Medicare patients hospitalized initially for pneumonia.

  • Distinguish which variables were and were not accounted for in the development of a pneumonia readmission model.

Continuous participation in the Journal of Hospital Medicine CME program will enable learners to be better able to:

  • Interpret clinical guidelines and their applications for higher quality and more efficient care for all hospitalized patients.

  • Describe the standard of care for common illnesses and conditions treated in the hospital; such as pneumonia, COPD exacerbation, acute coronary syndrome, HF exacerbation, glycemic control, venous thromboembolic disease, stroke, etc.

  • Discuss evidence‐based recommendations involving transitions of care, including the hospital discharge process.

  • Gain insights into the roles of hospitalists as medical educators, researchers, medical ethicists, palliative care providers, and hospital‐based geriatricians.

  • Incorporate best practices for hospitalist administration, including quality improvement, patient safety, practice management, leadership, and demonstrating hospitalist value.

  • Identify evidence‐based best practices and trends for both adult and pediatric hospital medicine.

Instructions on Receiving Credit

For information on applicability and acceptance of continuing medical education credit for this activity, please consult your professional licensing board.

This activity is designed to be completed within the time designated on the title page; physicians should claim only those credits that reflect the time actually spent in the activity. To successfully earn credit, participants must complete the activity during the valid credit period that is noted on the title page.

Follow these steps to earn credit:

  • Log on to www.blackwellpublishing.com/cme.

  • Read the target audience, learning objectives, and author disclosures.

  • Read the article in print or online format.

  • Reflect on the article.

  • Access the CME Exam, and choose the best answer to each question.

  • Complete the required evaluation component of the activity.

Issue
Journal of Hospital Medicine - 6(3)
Page Number
141-141


Article Source
Copyright © 2011 Society of Hospital Medicine

Pneumonia Readmission Validation

Display Headline
Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia

Hospital readmissions are emblematic of the numerous challenges facing the US health care system. Despite high levels of spending, nearly 20% of Medicare beneficiaries are readmitted within 30 days of hospital discharge, many readmissions are considered preventable, and rates vary widely by hospital and region.1 Further, while readmissions have been estimated to cost taxpayers as much as $17 billion annually, the current fee‐for‐service method of paying for the acute care needs of seniors rewards hospitals financially for readmission, not their prevention.2

Pneumonia is the second most common reason for hospitalization among Medicare beneficiaries, accounting for approximately 650,000 admissions annually,3 and has been a focus of national quality‐improvement efforts for more than a decade.4, 5 Despite improvements in key processes of care, rates of readmission within 30 days of discharge following a hospitalization for pneumonia have been reported to vary from 10% to 24%.6-8 Among several factors, readmissions are believed to be influenced by the quality of both inpatient and outpatient care, and by care‐coordination activities occurring in the transition from inpatient to outpatient status.9-12

Public reporting of hospital performance is considered a key strategy for improving quality, reducing costs, and increasing the value of hospital care, both in the US and worldwide.13 In 2009, the Centers for Medicare & Medicaid Services (CMS) expanded its reporting initiatives by adding risk‐adjusted hospital readmission rates for acute myocardial infarction, heart failure, and pneumonia to the Hospital Compare website.14, 15 Readmission rates are an attractive focus for public reporting for several reasons. First, in contrast to most process‐based measures of quality (eg, whether a patient with pneumonia received a particular antibiotic), a readmission is an adverse outcome that matters to patients and families.16 Second, unlike process measures whose assessment requires detailed review of medical records, readmissions can be easily determined from standard hospital claims. Finally, readmissions are costly, and their prevention could yield substantial savings to society.

A necessary prerequisite for public reporting of readmission is a validated, risk‐adjusted measure that can be used to track performance over time and can facilitate comparisons across institutions. Toward this end, we describe the development, validation, and results of a National Quality Forum‐approved and CMS‐adopted model to estimate hospital‐specific, risk‐standardized, 30‐day readmission rates for Medicare patients hospitalized with pneumonia.17

METHODS

Data Sources

We used 2005–2006 claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files to develop and validate the administrative model. The Medicare Enrollment Database was used to determine Medicare fee‐for‐service enrollment and mortality status. A medical record model, used for additional validation of the administrative model, was developed using information abstracted from the charts of 75,616 pneumonia cases from 1998–2001 as part of the National Pneumonia Project, a CMS quality improvement initiative.18

Study Cohort

We identified hospitalizations of patients 65 years of age and older with a principal diagnosis of pneumonia (International Classification of Diseases, 9th Revision, Clinical Modification codes 480.XX, 481, 482.XX, 483.X, 485, 486, 487.0) as potential index pneumonia admissions. Because our focus was readmission for patients discharged from acute care settings, we excluded admissions in which patients died or were transferred to another acute care facility. Additionally, we restricted analysis to patients who had been enrolled in fee‐for‐service Medicare Parts A and B for at least 12 months prior to their pneumonia hospitalization, so that we could use diagnostic codes from all inpatient and outpatient encounters during that period to enhance identification of comorbidities.
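
As a minimal sketch of these inclusion and exclusion rules (the field names and helper function are illustrative, not part of the measure specification), the cohort filter might look like:

```python
import re

# ICD-9-CM principal-diagnosis patterns from the cohort definition:
# 480.XX, 481, 482.XX, 483.X, 485, 486, 487.0
PNEUMONIA_ICD9 = re.compile(r"^(480\.\d{1,2}|481|482\.\d{1,2}|483\.\d|485|486|487\.0)$")

def is_index_pneumonia_admission(claim):
    """Return True if a hospitalization qualifies as a potential index admission."""
    return (
        claim["age"] >= 65
        and PNEUMONIA_ICD9.match(claim["principal_dx"]) is not None
        and not claim["died_in_hospital"]            # exclusion: in-hospital death
        and not claim["transferred_to_acute_care"]   # exclusion: acute-care transfer
        and claim["ffs_ab_months_prior"] >= 12       # 12 months of FFS Parts A and B
    )
```

Note that only 487.0 (influenza with pneumonia) qualifies among the 487.X codes, so the pattern anchors the final digit.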

Outcome

The outcome was 30‐day readmission, defined as occurrence of at least one hospitalization for any cause within 30 days of discharge after an index admission. Readmissions were identified from hospital claims data, and were attributed to the hospital that had discharged the patient. A 30‐day time frame was selected because it is a clinically meaningful period during which hospitals can be expected to collaborate with other organizations and providers to implement measures to reduce the risk of rehospitalization.
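
As a minimal sketch (dates and names are illustrative), the outcome can be computed from the index discharge date and the dates of any subsequent admissions:

```python
from datetime import date, timedelta

def readmitted_within_30_days(index_discharge, subsequent_admissions):
    """All-cause readmission: any hospital admission within 30 days of discharge."""
    window_end = index_discharge + timedelta(days=30)
    return any(index_discharge < adm <= window_end for adm in subsequent_admissions)

readmitted_within_30_days(date(2006, 3, 1), [date(2006, 3, 20)])  # True
```

The readmission, if any, is then attributed to the discharging hospital, as described above.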

Candidate and Final Model Variables

Candidate variables for the administrative claims model were selected by a clinician team from 189 diagnostic groups included in the Hierarchical Condition Category (HCC) clinical classification system.19 The HCC clinical classification system was developed for CMS in preparation for all‐encounter risk adjustment for Medicare Advantage (managed care). Under the HCC algorithm, the 15,000+ ICD‐9‐CM diagnosis codes are assigned to one of 189 clinically coherent condition categories (CCs). We used the April 2008 version of the ICD‐9‐CM to CC assignment map, which is maintained by CMS and posted at http://www.qualitynet.org. A total of 154 CCs were considered to be potentially relevant to the readmission outcome and were included for further consideration. Some CCs were further combined into clinically coherent groupings of CCs. Our set of candidate variables ultimately included 97 CC‐based variables, two demographic variables (age and sex), and two procedure codes potentially relevant to readmission risk (history of percutaneous coronary intervention [PCI] and history of coronary artery bypass graft [CABG]).
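
The two-step rollup, from ICD-9-CM codes to CCs and from CCs to combined model variables, can be sketched as follows. The example code-to-CC assignments and groupings below are assumptions for demonstration only; the authoritative map is the April 2008 file posted at qualitynet.org.

```python
# Tiny illustrative fragment of the ICD-9-CM -> CC rollup; the real map covers
# 15,000+ codes and 189 CCs. These example assignments are assumed, not quoted.
ICD9_TO_CC = {"428.0": 80, "491.21": 108, "585": 131}

# Some CCs are combined into clinically coherent groupings used as model variables.
CC_GROUPS = {
    "Congestive heart failure": {80},
    "COPD": {108},
    "Renal failure": {131},
}

def model_variables(icd9_codes):
    """Map a patient's diagnosis codes to the risk-model indicator variables."""
    ccs = {ICD9_TO_CC[c] for c in icd9_codes if c in ICD9_TO_CC}
    return {name for name, group in CC_GROUPS.items() if ccs & group}
```

Codes that fall outside the candidate CCs simply contribute no indicator, which is how the 154-of-189 restriction operates in practice.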

The final risk‐adjustment model included 39 variables selected by the team of clinicians and analysts, primarily based on their clinical relevance but also with knowledge of the strength of their statistical association with readmission outcome (Table 1). For each patient, the presence or absence of these conditions was assessed from multiple sources, including secondary diagnoses during the index admission, principal and secondary diagnoses from hospital admissions in the 12 months prior to the index admission, and diagnoses from hospital outpatient and physician encounters 12 months before the index admission. A small number of CCs were considered to represent potential complications of care (eg, bleeding). Because we did not want to adjust for complications of care occurring during the index admission, a patient was not considered to have one of these conditions unless it was also present in at least one encounter prior to the index admission.
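
The guard against adjusting for complications of care can be sketched directly (the CC label below is an illustrative placeholder, not a real CC number):

```python
# CCs flagged as potential complications of care (e.g. bleeding) count only
# when coded in an encounter that predates the index admission.
POTENTIAL_COMPLICATION_CCS = {"CC_BLEEDING"}

def condition_present(cc, index_secondary_dx_ccs, prior_encounter_ccs):
    """Decide whether a condition counts toward risk adjustment for a patient."""
    if cc in POTENTIAL_COMPLICATION_CCS:
        return cc in prior_encounter_ccs
    return cc in index_secondary_dx_ccs or cc in prior_encounter_ccs
```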

Regression Model Variables and Results in Derivation Sample

| Variable | Frequency (%) | Estimate | Standard Error | Odds Ratio | 95% CI |
|---|---|---|---|---|---|
| Intercept |  | -2.395 | 0.021 |  |  |
| Age - 65 (years above 65, continuous) |  | -0.0001 | 0.001 | 1.000 | 0.998–1.001 |
| Male | 45 | 0.071 | 0.012 | 1.073 | 1.048–1.099 |
| History of CABG | 5.2 | -0.179 | 0.027 | 0.836 | 0.793–0.881 |
| Metastatic cancer and acute leukemia (CC 7) | 4.3 | 0.177 | 0.029 | 1.194 | 1.128–1.263 |
| Lung, upper digestive tract, and other severe cancers (CC 8) | 6.0 | 0.256 | 0.024 | 1.292 | 1.232–1.354 |
| Diabetes and DM complications (CC 15-20, 119, 120) | 36 | 0.059 | 0.012 | 1.061 | 1.036–1.087 |
| Disorders of fluid/electrolyte/acid-base (CC 22, 23) | 34 | 0.149 | 0.013 | 1.160 | 1.131–1.191 |
| Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 46 | 0.118 | 0.012 | 1.126 | 1.099–1.153 |
| Other psychiatric disorders (CC 60) | 12 | 0.108 | 0.017 | 1.114 | 1.077–1.151 |
| Cardio-respiratory failure and shock (CC 79) | 16 | 0.114 | 0.016 | 1.121 | 1.087–1.156 |
| Congestive heart failure (CC 80) | 39 | 0.151 | 0.014 | 1.163 | 1.133–1.194 |
| Chronic atherosclerosis (CC 83, 84) | 47 | 0.051 | 0.013 | 1.053 | 1.027–1.079 |
| Valvular and rheumatic heart disease (CC 86) | 23 | 0.062 | 0.014 | 1.064 | 1.036–1.093 |
| Arrhythmias (CC 92, 93) | 38 | 0.126 | 0.013 | 1.134 | 1.107–1.163 |
| Vascular or circulatory disease (CC 104-106) | 38 | 0.088 | 0.012 | 1.092 | 1.066–1.119 |
| COPD (CC 108) | 58 | 0.186 | 0.013 | 1.205 | 1.175–1.235 |
| Fibrosis of lung and other chronic lung disorders (CC 109) | 17 | 0.086 | 0.015 | 1.090 | 1.059–1.122 |
| Renal failure (CC 131) | 17 | 0.147 | 0.016 | 1.158 | 1.122–1.196 |
| Protein-calorie malnutrition (CC 21) | 7.9 | 0.121 | 0.020 | 1.129 | 1.086–1.173 |
| History of infection (CC 1, 3-6) | 35 | 0.068 | 0.012 | 1.071 | 1.045–1.097 |
| Severe hematological disorders (CC 44) | 3.6 | 0.117 | 0.028 | 1.125 | 1.064–1.188 |
| Decubitus ulcer or chronic skin ulcer (CC 148, 149) | 10 | 0.101 | 0.018 | 1.106 | 1.067–1.146 |
| History of pneumonia (CC 111-113) | 44 | 0.065 | 0.013 | 1.067 | 1.041–1.094 |
| Vertebral fractures (CC 157) | 5.1 | 0.113 | 0.024 | 1.120 | 1.068–1.174 |
| Other injuries (CC 162) | 32 | 0.061 | 0.012 | 1.063 | 1.038–1.089 |
| Urinary tract infection (CC 135) | 26 | 0.064 | 0.014 | 1.066 | 1.038–1.095 |
| Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal, and other cancers and tumors (CC 9-10) | 16 | 0.050 | 0.016 | 1.051 | 1.018–1.084 |
| End-stage renal disease or dialysis (CC 129, 130) | 1.9 | 0.131 | 0.037 | 1.140 | 1.060–1.226 |
| Drug/alcohol abuse/dependence/psychosis (CC 51-53) | 12 | 0.081 | 0.017 | 1.084 | 1.048–1.121 |
| Septicemia/shock (CC 2) | 6.3 | 0.094 | 0.022 | 1.098 | 1.052–1.146 |
| Other gastrointestinal disorders (CC 36) | 56 | 0.073 | 0.012 | 1.076 | 1.051–1.102 |
| Acute coronary syndrome (CC 81, 82) | 8.3 | 0.126 | 0.019 | 1.134 | 1.092–1.178 |
| Pleural effusion/pneumothorax (CC 114) | 12 | 0.083 | 0.017 | 1.086 | 1.051–1.123 |
| Other urinary tract disorders (CC 136) | 24 | 0.059 | 0.014 | 1.061 | 1.033–1.090 |
| Stroke (CC 95, 96) | 10 | 0.047 | 0.019 | 1.049 | 1.011–1.088 |
| Dementia and senility (CC 49, 50) | 27 | 0.031 | 0.014 | 1.031 | 1.004–1.059 |
| Hemiplegia, paraplegia, paralysis, functional disability (CC 67-69, 100-102, 177, 178) | 7.4 | 0.068 | 0.021 | 1.070 | 1.026–1.116 |
| Other lung disorders (CC 115) | 45 | 0.005 | 0.012 | 1.005 | 0.982–1.030 |
| Major psychiatric disorders (CC 54-56) | 11 | 0.038 | 0.018 | 1.038 | 1.003–1.075 |
| Asthma (CC 110) | 12 | 0.006 | 0.018 | 1.006 | 0.972–1.041 |

Abbreviations: CABG, coronary artery bypass graft; CC, condition category; CI, confidence interval; COPD, chronic obstructive pulmonary disease; DM, diabetes mellitus.

Model Derivation

For the development of the administrative claims model, we randomly sampled half of 2006 hospitalizations that met inclusion criteria. To assess model performance at the patient level, we calculated the area under the receiver operating curve (AUC), and calculated observed readmission rates in the lowest and highest deciles on the basis of predicted readmission probabilities. We also compared performance with a null model, a model that adjusted for age and sex, and a model that included all candidate variables.20
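
The patient-level checks described here can be sketched in a few lines (a simplified version that assumes no tied predicted probabilities; the actual analysis used SAS, as noted under Methods):

```python
# AUC via the rank (Mann-Whitney) formula, plus observed event rates in the
# lowest and highest deciles of predicted risk.
def auc(y_true, y_score):
    """Probability that a random event outranks a random non-event."""
    pairs = sorted(zip(y_score, y_true))
    n_pos = sum(y_true)
    n_neg = len(y_true) - n_pos
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def extreme_decile_rates(y_true, y_score):
    """Observed outcome rates in the lowest and highest predicted-risk deciles."""
    order = sorted(range(len(y_true)), key=lambda i: y_score[i])
    d = len(y_true) // 10
    lowest = sum(y_true[i] for i in order[:d]) / d
    highest = sum(y_true[i] for i in order[-d:]) / d
    return lowest, highest
```

A wide spread between the extreme-decile observed rates indicates useful risk stratification even when the AUC is modest.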

Risk‐Standardized Readmission Rates

Using hierarchical logistic regression, we modeled the log‐odds of readmission within 30 days of discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics, and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation, or clustering, of observed outcomes, and models the assumption that underlying differences in quality among hospitals being evaluated lead to systematic differences in outcomes. We then calculated hospital‐specific readmission rates as the ratio of predicted‐to‐expected readmissions (similar to an observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization. The predicted number of readmissions in each hospital is estimated given the same patient mix and its estimated hospital‐specific intercept. The expected number of readmissions in each hospital is estimated using its patient mix and the average hospital‐specific intercept. To assess hospital performance in any given year, we re‐estimate model coefficients using that year's data.
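
Concretely, the indirect standardization step can be sketched as follows (function and variable names are illustrative; `linpreds` holds each patient's fixed-effect linear predictor from the fitted hierarchical model, and the intercepts come from the same fit):

```python
import math

def _prob(z):
    """Inverse-logit: convert a linear predictor to a probability."""
    return 1.0 / (1.0 + math.exp(-z))

def risk_standardized_rate(linpreds, alpha_h, alpha_bar, national_rate):
    """RSRR = (predicted / expected) x national unadjusted rate.

    predicted: readmissions predicted with the hospital-specific intercept.
    expected:  readmissions expected with the average hospital intercept,
               holding the hospital's own patient mix fixed in both sums.
    """
    predicted = sum(_prob(x + alpha_h) for x in linpreds)
    expected = sum(_prob(x + alpha_bar) for x in linpreds)
    return predicted / expected * national_rate

# A hospital whose intercept equals the average recovers the national rate:
risk_standardized_rate([-1.5, -2.0, -0.8], 0.0, 0.0, 0.174)  # -> 0.174
```

Because the same patient mix appears in numerator and denominator, case-mix differences cancel and only the hospital-specific intercept moves the standardized rate away from the national rate.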

Model Validation: Administrative Claims

We compared the model performance in the development sample with its performance in the sample from the 2006 data that was not selected for the development set, and separately among pneumonia admissions in 2005. The model was recalibrated in each validation set.

Model Validation: Medical Record Abstraction

We developed a separate medical record‐based model of readmission risk using information from charts that had previously been abstracted as part of CMS's National Pneumonia Project. To select variables for this model, the clinician team: 1) reviewed the list of variables that were included in a medical record model that was previously developed for validating the National Quality Forum‐approved pneumonia mortality measure; 2) reviewed a list of other potential candidate variables available in the National Pneumonia Project dataset; and 3) reviewed variables that emerged as potentially important predictors of readmission, based on a systematic review of the literature that was conducted as part of measure development. This selection process resulted in a final medical record model that included 35 variables.

We linked patients in the National Pneumonia Project cohort to their Medicare claims data, including claims from one year before the index hospitalization, so that we could calculate risk‐standardized readmission rates in this cohort separately using medical record and claims‐based models. This analysis was conducted at the state level, for the 50 states plus the District of Columbia and Puerto Rico, because medical record data were unavailable in sufficient numbers to permit hospital‐level comparisons. To examine the relationship between risk‐standardized rates obtained from medical record and administrative data models, we estimated a linear regression model describing the association between the two rates, weighting each state by number of index hospitalizations, and calculated the correlation coefficient and the intercept and slope of this equation. A slope close to 1 and an intercept close to 0 would provide evidence that risk‐standardized state readmission rates from the medical record and claims models were similar. We also calculated the difference between state risk‐standardized readmission rates from the two models.
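
The weighted comparison of the two sets of state rates reduces to weighted least squares with a single predictor, which can be sketched in pure Python (inputs below are illustrative):

```python
import math

def weighted_fit(x, y, w):
    """Weighted least squares of y on x; returns (slope, intercept, correlation).

    x: state rates from the medical record model; y: rates from the claims
    model; w: number of index hospitalizations per state (the weights).
    """
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    syy = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    corr = sxy / math.sqrt(sxx * syy)
    return slope, intercept, corr
```

A slope near 1 and an intercept near 0, as described above, would indicate that the two models rank and scale the states similarly.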

Analyses were conducted with the use of SAS version 9.1.3 (SAS Institute Inc, Cary, NC). Models were fitted separately for the National Pneumonia Project and 2006 cohort. We estimated the hierarchical models using the GLIMMIX procedure in SAS. The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation and Performance

After exclusions were applied, the 2006 sample included 453,251 pneumonia hospitalizations (Figure 1). The development sample consisted of 226,545 hospitalizations at 4675 hospitals, with an overall unadjusted 30‐day readmission rate of 17.4%. In 11,694 index cases (5.2%), the patient died within 30 days without being readmitted. Median readmission rate was 16.3%, 25th and 75th percentile rates were 11.1% and 21.3%, and at the 10th and 90th percentile, hospital readmission rates ranged from 4.6% to 26.7% (Figure 2).

Figure 1
Pneumonia admissions included in measure calculation.
Figure 2
Distribution of unadjusted readmission rates.

The claims model included 39 variables (age, sex, and 37 clinical variables) (Table 1). The mean age of the cohort was 80.0 years, with 55.5% women and 11.1% nonwhite patients. Mean observed readmission rate in the development sample ranged from 9% in the lowest decile of predicted pneumonia readmission rates to 32% in the highest predicted decile, a range of 23%. The AUC was 0.63. For comparison, a model with only age and sex had an AUC of 0.51, and a model with all candidate variables had an AUC equal to 0.63 (Table 2).

Readmission Model Performance of Administrative Claims Models

| Sample | Calibration (γ0, γ1)* | Predictive Ability (Lowest Decile, Highest Decile)† | AUC | Pearson residuals (%): <-2 | (-2, 0) | (0, 2) | ≥2 | Model χ2 (No. of Covariates)‡ |
|---|---|---|---|---|---|---|---|---|
| Development sample: 2006 (1st half), N = 226,545 | (0, 1) | (0.09, 0.32) | 0.63 | 0 | 82.62 | 7.39 | 9.99 | 6,843 (40) |
| Validation sample: 2006 (2nd half), N = 226,706 | (0.002, 0.997) | (0.09, 0.31) | 0.63 | 0 | 82.55 | 7.45 | 9.99 | 6,870 (40) |
| Validation sample: 2005, N = 536,015 | (0.035, 1.008) | (0.08, 0.31) | 0.63 | 0 | 82.67 | 7.31 | 10.03 | 16,241 (40) |

NOTE: Over‐fitting indices (γ0, γ1) provide evidence of over‐fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities p = 1/(1 + exp{-Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample; ie, Logit(P(Y = 1|Z)) = γ0 + γ1Z. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over‐fitting.

Abbreviations: AUC, area under the receiver operating curve.

† Observed rates.

‡ Wald chi‐square.
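
The over-fitting check described in the table note can be sketched as a two-parameter logistic refit of the outcome on the development-model linear predictor Z (a pure-Python Newton-Raphson; illustrative only, not the production SAS code):

```python
import math

def calibration_indices(y, z, iters=25):
    """Fit Logit(P(Y=1|Z)) = g0 + g1*Z in a validation sample.

    Estimates (g0, g1) near (0, 1) indicate little over-fitting.
    """
    g0, g1 = 0.0, 0.0
    for _ in range(iters):  # Newton-Raphson on the two-parameter log-likelihood
        p = [1.0 / (1.0 + math.exp(-(g0 + g1 * zi))) for zi in z]
        u0 = sum(yi - pi for yi, pi in zip(y, p))                # score for g0
        u1 = sum((yi - pi) * zi for yi, pi, zi in zip(y, p, z))  # score for g1
        w = [pi * (1.0 - pi) for pi in p]                        # IRLS weights
        h00 = sum(w)
        h01 = sum(wi * zi for wi, zi in zip(w, z))
        h11 = sum(wi * zi * zi for wi, zi in zip(w, z))
        det = h00 * h11 - h01 * h01
        g0 += (h11 * u0 - h01 * u1) / det
        g1 += (h00 * u1 - h01 * u0) / det
    return g0, g1
```

On perfectly calibrated data the fit returns (0, 1), mirroring the development-sample row of the table.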

Hospital Risk‐Standardized Readmission Rates

Risk‐standardized readmission rates varied across hospitals (Figure 3). Median risk‐standardized readmission rate was 17.3%, and the 25th and 75th percentiles were 16.9% and 17.9%, respectively. The 5th percentile was 16.0% and the 95th percentile was 19.1%. Odds of readmission for a hospital one standard deviation above average was 1.4 times that of a hospital one standard deviation below average.

Figure 3
Distribution of risk‐standardized readmission rates.

Administrative Model Validation

In the remaining 50% of pneumonia index hospitalizations from 2006, and the entire 2005 cohort, regression coefficients and standard errors of model variables were similar to those in the development data set. Model performance using 2005 data was consistent with model performance using the 2006 development and validation half‐samples (Table 2).

Medical Record Validation

After exclusions, the medical record sample taken from the National Pneumonia Project included 47,429 cases, with an unadjusted 30‐day readmission rate of 17.0%. The final medical record risk‐adjustment model included a total of 35 variables, whose prevalence and association with readmission risk varied modestly (Table 3). Performance of the medical record and administrative models was similar (areas under the ROC curve 0.59 and 0.63, respectively) (Table 4). Additionally, in the administrative model, predicted readmission rates ranged from 8% in the lowest predicted decile to 30% in the highest predicted decile, while in the medical record model, the corresponding rates varied from 10% to 26%.

Regression Model Results from Medical Record Sample

| Variable | Percent | Estimate | Standard Error | Odds Ratio | 95% CI |
|---|---|---|---|---|---|
| Age - 65, mean (SD) | 15.24 (7.87) | -0.003 | 0.002 | 0.997 | 0.993–1.000 |
| Male | 46.18 | 0.122 | 0.025 | 1.130 | 1.075–1.188 |
| Nursing home resident | 17.71 | 0.035 | 0.037 | 1.036 | 0.963–1.114 |
| Neoplastic disease | 6.80 | 0.130 | 0.049 | 1.139 | 1.034–1.254 |
| Liver disease | 1.04 | -0.089 | 0.123 | 0.915 | 0.719–1.164 |
| History of heart failure | 28.98 | 0.234 | 0.029 | 1.264 | 1.194–1.339 |
| History of renal disease | 8.51 | 0.188 | 0.047 | 1.206 | 1.100–1.323 |
| Altered mental status | 17.95 | 0.009 | 0.034 | 1.009 | 0.944–1.080 |
| Pleural effusion | 21.20 | 0.165 | 0.030 | 1.179 | 1.111–1.251 |
| BUN ≥30 mg/dl | 23.28 | 0.160 | 0.033 | 1.174 | 1.100–1.252 |
| BUN missing | 14.56 | -0.101 | 0.185 | 0.904 | 0.630–1.298 |
| Systolic BP <90 mmHg | 2.95 | 0.068 | 0.070 | 1.070 | 0.932–1.228 |
| Systolic BP missing | 11.21 | 0.149 | 0.425 | 1.160 | 0.504–2.669 |
| Pulse ≥125/min | 7.73 | 0.036 | 0.047 | 1.036 | 0.945–1.137 |
| Pulse missing | 11.22 | 0.210 | 0.405 | 1.234 | 0.558–2.729 |
| Respiratory rate ≥30/min | 16.38 | 0.079 | 0.034 | 1.082 | 1.012–1.157 |
| Respiratory rate missing | 11.39 | 0.204 | 0.240 | 1.226 | 0.765–1.964 |
| Sodium <130 mmol/L | 4.82 | 0.136 | 0.057 | 1.145 | 1.025–1.280 |
| Sodium missing | 14.39 | 0.049 | 0.143 | 1.050 | 0.793–1.391 |
| Glucose ≥250 mg/dl | 5.19 | -0.005 | 0.057 | 0.995 | 0.889–1.114 |
| Glucose missing | 15.44 | -0.156 | 0.105 | 0.855 | 0.696–1.051 |
| Hematocrit <30% | 7.77 | 0.270 | 0.044 | 1.310 | 1.202–1.428 |
| Hematocrit missing | 13.62 | -0.071 | 0.135 | 0.932 | 0.715–1.215 |
| Creatinine ≥2.5 mg/dL | 4.68 | 0.109 | 0.062 | 1.115 | 0.989–1.258 |
| Creatinine missing | 14.63 | 0.200 | 0.167 | 1.221 | 0.880–1.695 |
| WBC 6-12 b/L | 38.04 | -0.021 | 0.049 | 0.979 | 0.889–1.079 |
| WBC >12 b/L | 41.45 | -0.068 | 0.049 | 0.934 | 0.848–1.029 |
| WBC missing | 12.85 | 0.167 | 0.162 | 1.181 | 0.860–1.623 |
| Immunosuppressive therapy | 15.01 | 0.347 | 0.035 | 1.415 | 1.321–1.516 |
| Chronic lung disease | 42.16 | 0.137 | 0.028 | 1.147 | 1.086–1.211 |
| Coronary artery disease | 39.57 | 0.150 | 0.028 | 1.162 | 1.100–1.227 |
| Diabetes mellitus | 20.90 | 0.137 | 0.033 | 1.147 | 1.076–1.223 |
| Alcohol/drug abuse | 3.40 | -0.099 | 0.071 | 0.906 | 0.788–1.041 |
| Dementia/Alzheimer's disease | 16.38 | 0.125 | 0.038 | 1.133 | 1.052–1.222 |
| Splenectomy | 0.44 | 0.016 | 0.186 | 1.016 | 0.706–1.463 |

NOTE: Between‐state variance = 0.024; standard error = 0.00.

Abbreviations: BP, blood pressure; BUN, blood urea nitrogen; CI, confidence interval; SD, standard deviation; WBC, white blood cell count.
Model Performance of Medical Record Model

| Model | Calibration (γ0, γ1) | Predictive Ability (Lowest Decile, Highest Decile)* | AUC | Pearson residuals (%): <-2 | (-2, 0) | (0, 2) | ≥2 | Model χ2 (No. of Covariates)† |
|---|---|---|---|---|---|---|---|---|
| Medical Record Model Development Sample (NP): N = 47,429; No. of 30‐day readmissions = 8,042 | (1, 0) | (0.10, 0.26) | 0.59 | 0 | 83.04 | 5.28 | 11.68 | 710 (35) |
| Linked Administrative Model Validation Sample: N = 47,429; No. of 30‐day readmissions = 8,042 | (1, 0) | (0.08, 0.30) | 0.63 | 0 | 83.04 | 6.94 | 10.01 | 1,414 (40) |

Abbreviations: AUC, area under the receiver operating curve.

* Observed rates.

† Wald chi‐square.

The correlation coefficient of the estimated state‐specific standardized readmission rates from the administrative and medical record models was 0.96, and the proportion of the variance explained by the model was 0.92 (Figure 4).

Figure 4
Comparison of state‐level risk‐standardized readmission rates from medical record and administrative models. Abbreviations: HGLM, hierarchical generalized linear models.

DISCUSSION

We have described the development, validation, and results of a hospital, 30‐day, risk‐standardized readmission model for pneumonia that was created to support current federal transparency initiatives. The model uses administrative claims data from Medicare fee‐for‐service patients and produces results that are comparable to a model based on information obtained through manual abstraction of medical records. We observed an overall 30‐day readmission rate of 17%, and our analyses revealed substantial variation across US hospitals, suggesting that improvement by lower performing institutions is an achievable goal.

Because more than one in six pneumonia patients are rehospitalized shortly after discharge, and because pneumonia hospitalizations represent an enormous expense to the Medicare program, prevention of readmissions is now widely recognized to offer a substantial opportunity to improve patient outcomes while simultaneously lowering health care costs. Accordingly, promotion of strategies to reduce readmission rates has become a key priority for payers and quality‐improvement organizations. These range from policy‐level attempts to stimulate change, such as publicly reporting hospital readmission rates on government websites, to establishing accreditation standards, such as the Joint Commission's requirement to accurately reconcile medications, to creating quality‐improvement collaboratives focused on sharing best practices across institutions. Regardless of the approach taken, a valid, risk‐adjusted measure of performance is required to evaluate and track performance over time. The measure we have described meets the National Quality Forum's measure evaluation criteria: it addresses an important clinical topic with significant opportunity for improvement; it is precisely defined and has been subjected to validity and reliability testing; it is risk‐adjusted for patient clinical factors present at the start of care; it is feasible to produce; and it is understandable by a broad range of potential users.21 Because hospitalists are the physicians primarily responsible for the care of patients with pneumonia at US hospitals, and because they frequently serve as physician champions for quality‐improvement activities related to pneumonia, it is especially important that they maintain a thorough understanding of the measures and methodologies underlying current efforts to measure hospital performance.

Several features of our approach warrant additional comment. First, we deliberately chose to measure all readmission events rather than attempt to discriminate between potentially preventable and nonpreventable readmissions. From the patient perspective, readmission for any reason is a concern, and limiting the measure to pneumonia‐related readmissions could make it susceptible to gaming by hospitals. Moreover, determining whether a readmission is related to a potential quality problem is not straightforward. For example, a patient with pneumonia whose discharge medications were prescribed incorrectly may be readmitted with a hip fracture following an episode of syncope. It would be inappropriate to treat this readmission as unrelated to the care the patient received for pneumonia. Additionally, while our approach does not presume that every readmission is preventable, the goal is to reduce the risk of readmissions generally (not just in narrowly defined subpopulations), and successful interventions to reduce rehospitalization have typically demonstrated reductions in all‐cause readmission.9, 22 Second, deaths that occurred within 30 days of discharge, yet that were not accompanied by a hospital readmission, were not counted as a readmission outcome. While it may seem inappropriate to treat a postdischarge death as a nonevent (rather than censoring or excluding such cases), alternative analytic approaches, such as using a hierarchical survival model, are not currently computationally feasible with large national data sets. Fortunately, only a relatively small proportion of discharges fell into this category (5.2% of index cases in the 2006 development sample died within 30 days of discharge without being readmitted). An alternative approach to handling the competing outcome of death would have been to use a composite outcome of readmission or death. 
However, we believe that it is important to report the outcomes separately because factors that predict readmission and mortality may differ, and when making comparisons across hospitals it would not be possible to determine whether differences in rate were due to readmission or mortality. Third, while the patient‐level readmission model showed only modest discrimination, we intentionally excluded covariates such as race and socioeconomic status, as well as in‐hospital events and potential complications of care, and whether patients were discharged home or to a skilled nursing facility. While these variables could have improved predictive ability, they may be directly or indirectly related to quality or supply factors that should not be included in a model that seeks to control for patient clinical characteristics. For example, if hospitals with a large share of poor patients have higher readmission rates, then including income in the model will obscure differences that are important to identify. While we believe that the decision to exclude such factors in the model is in the best interest of patients, and supports efforts to reduce health inequality in society more generally, we also recognize that hospitals that care for a disproportionate share of poor patients are likely to require additional resources to overcome these social factors. Fourth, we limited the analysis to patients with a principal diagnosis of pneumonia, and chose not to also include those with a principal diagnosis of sepsis or respiratory failure coupled with a secondary diagnosis of pneumonia. While the broader definition is used by CMS in the National Pneumonia Project, that initiative relied on chart abstraction to differentiate pneumonia present at the time of admission from cases developing as a complication of hospitalization. 
Additionally, we did not attempt to differentiate between community‐acquired and healthcare‐associated pneumonia; however, our approach is consistent with the National Pneumonia Project and the Pneumonia Patient Outcomes Research Team.18 Fifth, while our model estimates readmission rates at the hospital level, we recognize that readmissions are influenced by a complex and extensive range of factors. In this context, greater cooperation between hospitals and other care providers will almost certainly be required to achieve dramatic improvement in readmission rates, which in turn will depend upon changes to the way serious illness is paid for. Options that have recently been described include imposing financial penalties for early readmission, extending the boundaries of case‐based payment beyond hospital discharge, and bundling payments between hospitals and physicians.23-25

Our measure has several limitations. First, our models were developed and validated using Medicare data, and the results may not apply to pneumonia patients less than 65 years of age. However, most patients hospitalized with pneumonia in the US are 65 or older. In addition, we were unable to test the model with a Medicare managed care population, because data are not currently available on such patients. Finally, the medical record‐based validation was conducted by state‐level analysis because the sample size was insufficient to carry this out at the hospital level.

In conclusion, more than 17% of Medicare beneficiaries are readmitted within 30 days following discharge after a hospitalization for pneumonia, and rates vary substantially across institutions. The development of a valid measure of hospital performance and public reporting are important first steps towards focusing attention on this problem. Actual improvement will now depend on whether hospitals and partner organizations are successful at identifying and implementing effective methods to prevent readmission.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
  2. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. 2007.
  3. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital‐based Care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed November 7, 2009.
  4. Centers for Medicare 353(3):255–264.
  5. Baker DW, Einstadter D, Husak SS, Cebul RD. Trends in postdischarge mortality and readmissions: has length of stay declined too far? Arch Intern Med. 2004;164(5):538–544.
  6. Vecchiarino P, Bohannon RW, Ferullo J, Maljanian R. Short‐term outcomes and their predictors for patients hospitalized with community‐acquired pneumonia. Heart Lung. 2004;33(5):301–307.
  7. Dean NC, Bateman KA, Donnelly SM, et al. Improved clinical outcomes with utilization of a community‐acquired pneumonia guideline. Chest. 2006;130(3):794–799.
  8. Gleason PP, Meehan TP, Fine JM, Galusha DH, Fine MJ. Associations between initial antimicrobial therapy and medical outcomes for hospitalized elderly patients with pneumonia. Arch Intern Med. 1999;159(21):2562–2572.
  9. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074–1081.
  10. Coleman EA, Parry C, Chalmers S, Min S. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822–1828.
  11. Corrigan JM, Eden J, Smith BM, eds. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Committee on Enhancing Federal Healthcare Quality Programs. Washington, DC: National Academies Press; 2003.
  12. Medicare.gov—Hospital Compare. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default1(1):2937.
  13. Krumholz HM, Normand ST, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75–85.
  14. NQF‐Endorsed® Standards. Available at: http://www.qualityforum.org/Measures_List.aspx. Accessed November 6, 2009.
  15. Houck PM, Bratzler DW, Nsa W, Ma A, Bartlett JG. Timing of antibiotic administration and outcomes for Medicare patients hospitalized with community‐acquired pneumonia. Arch Intern Med. 2004;164(6):637–644.
  16. Pope G, Ellis R, Ash A. Diagnostic Cost Group Hierarchical Condition Category Models for Medicare Risk Adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc; 2000. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed November 7, 2009.
  17. Harrell FE Jr. Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. 1st ed. New York: Springer; 2006.
  18. National Quality Forum—Measure Evaluation Criteria. 2008. Available at: http://www.qualityforum.org/uploadedFiles/Quality_Forum/Measuring_Performance/Consensus_Development_Process%E2%80%99s_Principle/EvalCriteria2008–08‐28Final.pdf?n=4701.
  19. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow‐up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613–620.
  20. Davis K. Paying for care episodes and care coordination. N Engl J Med. 2007;356(11):1166–1168.
  21. Luft HS. Health care reform—toward more freedom, and responsibility, for physicians. N Engl J Med. 2009;361(6):623–628.
  22. Rosenthal MB. Beyond pay for performance—emerging models of provider‐payment reform. N Engl J Med. 2008;359(12):1197–1200.
Issue
Journal of Hospital Medicine - 6(3)
Page Number
142-150

Hospital readmissions are emblematic of the numerous challenges facing the US health care system. Despite high levels of spending, nearly 20% of Medicare beneficiaries are readmitted within 30 days of hospital discharge, many readmissions are considered preventable, and rates vary widely by hospital and region.1 Further, while readmissions have been estimated to cost taxpayers as much as $17 billion annually, the current fee‐for‐service method of paying for the acute care needs of seniors rewards hospitals financially for readmission, not their prevention.2

Pneumonia is the second most common reason for hospitalization among Medicare beneficiaries, accounting for approximately 650,000 admissions annually,3 and has been a focus of national quality‐improvement efforts for more than a decade.4, 5 Despite improvements in key processes of care, rates of readmission within 30 days of discharge following a hospitalization for pneumonia have been reported to vary from 10% to 24%.68 Among several factors, readmissions are believed to be influenced by the quality of both inpatient and outpatient care, and by care‐coordination activities occurring in the transition from inpatient to outpatient status.912

Public reporting of hospital performance is considered a key strategy for improving quality, reducing costs, and increasing the value of hospital care, both in the US and worldwide.13 In 2009, the Centers for Medicare & Medicaid Services (CMS) expanded its reporting initiatives by adding risk‐adjusted hospital readmission rates for acute myocardial infarction, heart failure, and pneumonia to the Hospital Compare website.14, 15 Readmission rates are an attractive focus for public reporting for several reasons. First, in contrast to most process‐based measures of quality (eg, whether a patient with pneumonia received a particular antibiotic), a readmission is an adverse outcome that matters to patients and families.16 Second, unlike process measures whose assessment requires detailed review of medical records, readmissions can be easily determined from standard hospital claims. Finally, readmissions are costly, and their prevention could yield substantial savings to society.

A necessary prerequisite for public reporting of readmission is a validated, risk‐adjusted measure that can be used to track performance over time and can facilitate comparisons across institutions. Toward this end, we describe the development, validation, and results of a National Quality Forum‐approved and CMS‐adopted model to estimate hospital‐specific, risk‐standardized, 30‐day readmission rates for Medicare patients hospitalized with pneumonia.17

METHODS

Data Sources

We used 2005–2006 claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files to develop and validate the administrative model. The Medicare Enrollment Database was used to determine Medicare fee‐for‐service enrollment and mortality statuses. A medical record model, used for additional validation of the administrative model, was developed using information abstracted from the charts of 75,616 pneumonia cases from 1998–2001 as part of the National Pneumonia Project, a CMS quality improvement initiative.18

Study Cohort

We identified hospitalizations of patients 65 years of age and older with a principal diagnosis of pneumonia (International Classification of Diseases, 9th Revision, Clinical Modification codes 480.XX, 481, 482.XX, 483.X, 485, 486, 487.0) as potential index pneumonia admissions. Because our focus was readmission for patients discharged from acute care settings, we excluded admissions in which patients died or were transferred to another acute care facility. Additionally, we restricted analysis to patients who had been enrolled in fee‐for‐service Medicare Parts A and B, for at least 12 months prior to their pneumonia hospitalization, so that we could use diagnostic codes from all inpatient and outpatient encounters during that period to enhance identification of comorbidities.
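The cohort rules above reduce to a simple per-hospitalization filter. A minimal sketch in Python follows; the column names and toy records are hypothetical (they are not actual CMS field names), and only illustrate the inclusion/exclusion logic:

```python
import pandas as pd

# Toy claims extract; column names and rows are illustrative, not CMS's.
claims = pd.DataFrame({
    "bene_id": ["A", "A", "B", "C", "D"],
    "age": [77, 77, 80, 70, 68],
    "dx_principal": ["486", "480.9", "481", "428.0", "482.41"],
    "discharge_status": ["home", "home", "died", "home", "transfer_acute"],
    "ffs_months_prior": [12, 12, 14, 12, 12],
})

# ICD-9-CM principal-diagnosis codes defining pneumonia
# (480.XX, 481, 482.XX, 483.X, 485, 486, 487.0)
PNEUMONIA_PREFIXES = ("480", "481", "482", "483", "485", "486", "487.0")

def is_index_admission(row):
    """Pneumonia principal diagnosis, age >= 65, 12 months of prior
    fee-for-service enrollment, and neither death in hospital nor
    transfer to another acute care facility."""
    return (
        row["dx_principal"].startswith(PNEUMONIA_PREFIXES)
        and row["age"] >= 65
        and row["ffs_months_prior"] >= 12
        and row["discharge_status"] not in ("died", "transfer_acute")
    )

index_admissions = claims[claims.apply(is_index_admission, axis=1)]
# Only beneficiary A's two pneumonia stays survive the exclusions.
```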

Outcome

The outcome was 30‐day readmission, defined as occurrence of at least one hospitalization for any cause within 30 days of discharge after an index admission. Readmissions were identified from hospital claims data, and were attributed to the hospital that had discharged the patient. A 30‐day time frame was selected because it is a clinically meaningful period during which hospitals can be expected to collaborate with other organizations and providers to implement measures to reduce the risk of rehospitalization.
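As an illustration, flagging an all-cause readmission within 30 days of discharge can be sketched from admission and discharge dates alone. This toy example uses hypothetical records and, unlike the real measure, only sees admissions within one table (the actual outcome counts readmission to any hospital):

```python
import pandas as pd

# Toy admission records, one row per hospitalization (dates are illustrative).
adm = pd.DataFrame({
    "bene_id":   ["A", "A", "B"],
    "admit":     pd.to_datetime(["2006-01-01", "2006-02-01", "2006-03-01"]),
    "discharge": pd.to_datetime(["2006-01-05", "2006-02-07", "2006-03-04"]),
})

def flag_readmission(adm):
    """For each stay, check whether the beneficiary's next admission
    falls within 30 days of this stay's discharge."""
    adm = adm.sort_values(["bene_id", "admit"]).copy()
    next_admit = adm.groupby("bene_id")["admit"].shift(-1)
    days_to_next = (next_admit - adm["discharge"]).dt.days
    adm["readmit_30d"] = days_to_next.between(0, 30)  # NaN (no next stay) -> False
    return adm

flagged = flag_readmission(adm)
# Beneficiary A is readmitted 27 days after the first discharge.
```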

Candidate and Final Model Variables

Candidate variables for the administrative claims model were selected by a clinician team from 189 diagnostic groups included in the Hierarchical Condition Category (HCC) clinical classification system.19 The HCC clinical classification system was developed for CMS in preparation for all‐encounter risk adjustment for Medicare Advantage (managed care). Under the HCC algorithm, the 15,000+ ICD‐9‐CM diagnosis codes are assigned to one of 189 clinically‐coherent condition categories (CCs). We used the April 2008 version of the ICD‐9‐CM to CC assignment map, which is maintained by CMS and posted at http://www.qualitynet.org. A total of 154 CCs were considered to be potentially relevant to readmission outcome and were included for further consideration. Some CCs were further combined into clinically coherent groupings of CCs. Our set of candidate variables ultimately included 97 CC‐based variables, two demographic variables (age and sex), and two procedure codes potentially relevant to readmission risk (history of percutaneous coronary intervention [PCI] and history of coronary artery bypass graft [CABG]).

The final risk‐adjustment model included 39 variables selected by the team of clinicians and analysts, primarily based on their clinical relevance but also with knowledge of the strength of their statistical association with readmission outcome (Table 1). For each patient, the presence or absence of these conditions was assessed from multiple sources, including secondary diagnoses during the index admission, principal and secondary diagnoses from hospital admissions in the 12 months prior to the index admission, and diagnoses from hospital outpatient and physician encounters 12 months before the index admission. A small number of CCs were considered to represent potential complications of care (eg, bleeding). Because we did not want to adjust for complications of care occurring during the index admission, a patient was not considered to have one of these conditions unless it was also present in at least one encounter prior to the index admission.

Regression Model Variables and Results in Derivation Sample
Variable | Frequency (%) | Estimate | Standard Error | Odds Ratio | 95% CI
Intercept | | −2.395 | 0.021 | |
Age − 65 (years above 65, continuous) | | −0.0001 | 0.001 | 1.000 | 0.998–1.001
Male | 45 | 0.071 | 0.012 | 1.073 | 1.048–1.099
History of CABG | 5.2 | −0.179 | 0.027 | 0.836 | 0.793–0.881
Metastatic cancer and acute leukemia (CC 7) | 4.3 | 0.177 | 0.029 | 1.194 | 1.128–1.263
Lung, upper digestive tract, and other severe cancers (CC 8) | 6.0 | 0.256 | 0.024 | 1.292 | 1.232–1.354
Diabetes and DM complications (CC 15‐20, 119, 120) | 36 | 0.059 | 0.012 | 1.061 | 1.036–1.087
Disorders of fluid/electrolyte/acid‐base (CC 22, 23) | 34 | 0.149 | 0.013 | 1.160 | 1.131–1.191
Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 46 | 0.118 | 0.012 | 1.126 | 1.099–1.153
Other psychiatric disorders (CC 60) | 12 | 0.108 | 0.017 | 1.114 | 1.077–1.151
Cardio‐respiratory failure and shock (CC 79) | 16 | 0.114 | 0.016 | 1.121 | 1.087–1.156
Congestive heart failure (CC 80) | 39 | 0.151 | 0.014 | 1.163 | 1.133–1.194
Chronic atherosclerosis (CC 83, 84) | 47 | 0.051 | 0.013 | 1.053 | 1.027–1.079
Valvular and rheumatic heart disease (CC 86) | 23 | 0.062 | 0.014 | 1.064 | 1.036–1.093
Arrhythmias (CC 92, 93) | 38 | 0.126 | 0.013 | 1.134 | 1.107–1.163
Vascular or circulatory disease (CC 104‐106) | 38 | 0.088 | 0.012 | 1.092 | 1.066–1.119
COPD (CC 108) | 58 | 0.186 | 0.013 | 1.205 | 1.175–1.235
Fibrosis of lung and other chronic lung disorders (CC 109) | 17 | 0.086 | 0.015 | 1.090 | 1.059–1.122
Renal failure (CC 131) | 17 | 0.147 | 0.016 | 1.158 | 1.122–1.196
Protein‐calorie malnutrition (CC 21) | 7.9 | 0.121 | 0.020 | 1.129 | 1.086–1.173
History of infection (CC 1, 3‐6) | 35 | 0.068 | 0.012 | 1.071 | 1.045–1.097
Severe hematological disorders (CC 44) | 3.6 | 0.117 | 0.028 | 1.125 | 1.064–1.188
Decubitus ulcer or chronic skin ulcer (CC 148, 149) | 10 | 0.101 | 0.018 | 1.106 | 1.067–1.146
History of pneumonia (CC 111‐113) | 44 | 0.065 | 0.013 | 1.067 | 1.041–1.094
Vertebral fractures (CC 157) | 5.1 | 0.113 | 0.024 | 1.120 | 1.068–1.174
Other injuries (CC 162) | 32 | 0.061 | 0.012 | 1.063 | 1.038–1.089
Urinary tract infection (CC 135) | 26 | 0.064 | 0.014 | 1.066 | 1.038–1.095
Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal, and other cancers and tumors (CC 9‐10) | 16 | 0.050 | 0.016 | 1.051 | 1.018–1.084
End‐stage renal disease or dialysis (CC 129, 130) | 1.9 | 0.131 | 0.037 | 1.140 | 1.060–1.226
Drug/alcohol abuse/dependence/psychosis (CC 51‐53) | 12 | 0.081 | 0.017 | 1.084 | 1.048–1.121
Septicemia/shock (CC 2) | 6.3 | 0.094 | 0.022 | 1.098 | 1.052–1.146
Other gastrointestinal disorders (CC 36) | 56 | 0.073 | 0.012 | 1.076 | 1.051–1.102
Acute coronary syndrome (CC 81, 82) | 8.3 | 0.126 | 0.019 | 1.134 | 1.092–1.178
Pleural effusion/pneumothorax (CC 114) | 12 | 0.083 | 0.017 | 1.086 | 1.051–1.123
Other urinary tract disorders (CC 136) | 24 | 0.059 | 0.014 | 1.061 | 1.033–1.090
Stroke (CC 95, 96) | 10 | 0.047 | 0.019 | 1.049 | 1.011–1.088
Dementia and senility (CC 49, 50) | 27 | 0.031 | 0.014 | 1.031 | 1.004–1.059
Hemiplegia, paraplegia, paralysis, functional disability (CC 67‐69, 100‐102, 177, 178) | 7.4 | 0.068 | 0.021 | 1.070 | 1.026–1.116
Other lung disorders (CC 115) | 45 | 0.005 | 0.012 | 1.005 | 0.982–1.030
Major psychiatric disorders (CC 54‐56) | 11 | 0.038 | 0.018 | 1.038 | 1.003–1.075
Asthma (CC 110) | 12 | 0.006 | 0.018 | 1.006 | 0.972–1.041

  • Abbreviations: CABG, coronary artery bypass graft; CC, condition category; CI, confidence interval; COPD, chronic obstructive pulmonary disease; DM, diabetes mellitus.

Model Derivation

For the development of the administrative claims model, we randomly sampled half of 2006 hospitalizations that met inclusion criteria. To assess model performance at the patient level, we calculated the area under the receiver operating curve (AUC), and calculated observed readmission rates in the lowest and highest deciles on the basis of predicted readmission probabilities. We also compared performance with a null model, a model that adjusted for age and sex, and a model that included all candidate variables.20
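The two patient-level performance checks described above (the AUC, and observed rates across deciles of predicted risk) can be sketched as follows. This is an illustrative implementation, not the SAS code used in the study; the rank-based AUC assumes untied scores for simplicity:

```python
import numpy as np
import pandas as pd

def auc(y_true, y_score):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation;
    assumes no tied scores."""
    y_true = np.asarray(y_true)
    order = np.argsort(y_score)
    ranks = np.empty(len(order))
    ranks[order] = np.arange(1, len(order) + 1)
    n_pos = y_true.sum()
    n_neg = len(y_true) - n_pos
    return (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def decile_observed_rates(y_true, y_score):
    """Observed event rate within each decile of predicted risk."""
    deciles = pd.qcut(pd.Series(y_score).rank(method="first"), 10, labels=False)
    return pd.Series(np.asarray(y_true)).groupby(deciles).mean()
```

Comparing the observed rate in the lowest versus highest decile gives the "predictive ability" spread reported in Table 2.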

Risk‐Standardized Readmission Rates

Using hierarchical logistic regression, we modeled the log‐odds of readmission within 30 days of discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation, or clustering, of observed outcomes, and models the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. We then calculated hospital‐specific readmission rates as the ratio of predicted‐to‐expected readmissions (similar to an observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization. The predicted number of readmissions in each hospital is estimated using its patient mix and its estimated hospital‐specific intercept; the expected number of readmissions is estimated using the same patient mix and the average hospital‐specific intercept. To assess hospital performance in any given year, we re‐estimate the model coefficients using that year's data.
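Given a fitted model, the indirect standardization step reduces to a small calculation per hospital. A minimal sketch, with illustrative numbers standing in for real model output (the covariate log-odds X*beta and the intercepts would come from the fitted hierarchical model):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def rsrr(lin_pred, hosp_intercept, avg_intercept, national_rate):
    """Risk-standardized readmission rate by indirect standardization:
    (predicted readmissions using the hospital's own intercept) divided by
    (expected readmissions using the average intercept), times the national
    unadjusted rate."""
    predicted = logistic(np.asarray(lin_pred) + hosp_intercept).sum()
    expected = logistic(np.asarray(lin_pred) + avg_intercept).sum()
    return predicted / expected * national_rate

# Three patients' covariate log-odds (X*beta); values are illustrative only.
x = np.array([-2.0, -1.5, -1.0])
```

A hospital whose estimated intercept equals the average gets exactly the national rate; intercepts above (below) the average yield rates above (below) it.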

Model Validation: Administrative Claims

We compared the model performance in the development sample with its performance in the sample from the 2006 data that was not selected for the development set, and separately among pneumonia admissions in 2005. The model was recalibrated in each validation set.

Model Validation: Medical Record Abstraction

We developed a separate medical record‐based model of readmission risk using information from charts that had previously been abstracted as part of CMS's National Pneumonia Project. To select variables for this model, the clinician team: 1) reviewed the list of variables that were included in a medical record model that was previously developed for validating the National Quality Forum‐approved pneumonia mortality measure; 2) reviewed a list of other potential candidate variables available in the National Pneumonia Project dataset; and 3) reviewed variables that emerged as potentially important predictors of readmission, based on a systematic review of the literature that was conducted as part of measure development. This selection process resulted in a final medical record model that included 35 variables.

We linked patients in the National Pneumonia Project cohort to their Medicare claims data, including claims from one year before the index hospitalization, so that we could calculate risk‐standardized readmission rates in this cohort separately using medical record and claims‐based models. This analysis was conducted at the state level, for the 50 states plus the District of Columbia and Puerto Rico, because medical record data were unavailable in sufficient numbers to permit hospital‐level comparisons. To examine the relationship between risk‐standardized rates obtained from medical record and administrative data models, we estimated a linear regression model describing the association between the two rates, weighting each state by number of index hospitalizations, and calculated the correlation coefficient and the intercept and slope of this equation. A slope close to 1 and an intercept close to 0 would provide evidence that risk‐standardized state readmission rates from the medical record and claims models were similar. We also calculated the difference between state risk‐standardized readmission rates from the two models.
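The state-level comparison above amounts to a weighted least-squares line fit, with each state weighted by its number of index hospitalizations. A minimal sketch (illustrative, not the study's SAS code):

```python
import numpy as np

def weighted_line_fit(x, y, w):
    """Weighted least-squares fit of y = a + b*x. Here x and y would be the
    state risk-standardized rates from the two models, and w each state's
    number of index hospitalizations."""
    x, y, w = (np.asarray(v, float) for v in (x, y, w))
    xm = np.average(x, weights=w)
    ym = np.average(y, weights=w)
    b = (np.average((x - xm) * (y - ym), weights=w)
         / np.average((x - xm) ** 2, weights=w))
    a = ym - b * xm
    return a, b
```

A slope near 1 and an intercept near 0 would indicate that the two sets of state rates agree.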

Analyses were conducted with the use of SAS version 9.1.3 (SAS Institute Inc, Cary, NC). Models were fitted separately for the National Pneumonia Project and 2006 cohort. We estimated the hierarchical models using the GLIMMIX procedure in SAS. The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation and Performance

After exclusions were applied, the 2006 sample included 453,251 pneumonia hospitalizations (Figure 1). The development sample consisted of 226,545 hospitalizations at 4675 hospitals, with an overall unadjusted 30‐day readmission rate of 17.4%. In 11,694 index cases (5.2%), the patient died within 30 days of discharge without being readmitted. The median hospital readmission rate was 16.3%; the 25th and 75th percentile rates were 11.1% and 21.3%, respectively; and the 10th and 90th percentile rates were 4.6% and 26.7% (Figure 2).

Figure 1
Pneumonia admissions included in measure calculation.
Figure 2
Distribution of unadjusted readmission rates.

The claims model included 39 variables (age, sex, and 37 clinical variables) (Table 1). The mean age of the cohort was 80.0 years, with 55.5% women and 11.1% nonwhite patients. The observed readmission rate in the development sample ranged from 9% in the lowest decile of predicted readmission risk to 32% in the highest decile, a range of 23 percentage points. The AUC was 0.63. For comparison, a model with only age and sex had an AUC of 0.51, and a model that included all candidate variables also had an AUC of 0.63 (Table 2).

Readmission Model Performance of Administrative Claims Models
Sample | Calibration (γ0, γ1) | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals, % < −2 | % in (−2, 0) | % in (0, 2) | % ≥ 2 | Model χ2 (No. of Covariates)

Development sample
2006 (1st half), N = 226,545 | (0, 1) | (0.09, 0.32) | 0.63 | 0 | 82.62 | 7.39 | 9.99 | 6,843 (40)

Validation sample
2006 (2nd half), N = 226,706 | (0.002, 0.997) | (0.09, 0.31) | 0.63 | 0 | 82.55 | 7.45 | 9.99 | 6,870 (40)
2005, N = 536,015 | (0.035, 1.008) | (0.08, 0.31) | 0.63 | 0 | 82.67 | 7.31 | 10.03 | 16,241 (40)

  • NOTE: Over‐fitting indices (γ0, γ1) provide evidence of over‐fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities p = 1/(1 + exp{−Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample; ie, Logit(P(Y = 1|Z)) = γ0 + γ1Z. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over‐fitting.

  • Abbreviations: AUC, area under the receiver operating curve.

  • Max‐rescaled R‐square.

  • Observed rates.

  • Wald chi‐square.
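The over-fitting (calibration) indices reported in Table 2 can be computed by refitting, in the validation sample, a logistic model of the outcome on the original model's linear predictor. A sketch using Newton-Raphson (illustrative; the study used SAS):

```python
import numpy as np

def calibration_indices(y, z, n_iter=25):
    """Fit Logit(P(Y=1|Z)) = g0 + g1*Z by Newton-Raphson maximum likelihood.
    Estimates near (0, 1) indicate the original model transfers to this
    sample without over-fitting."""
    y = np.asarray(y, float)
    z = np.asarray(z, float)
    X = np.column_stack([np.ones_like(z), z])  # intercept and slope design
    g = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ g)))
        w = p * (1 - p)                    # logistic variance weights
        grad = X.T @ (y - p)               # score vector
        hess = (X * w[:, None]).T @ X      # observed information
        g = g + np.linalg.solve(hess, grad)
    return g  # (g0, g1)
```

On simulated data whose true logit equals the linear predictor, the fitted indices land close to (0, 1), as they should.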

Hospital Risk‐Standardized Readmission Rates

Risk‐standardized readmission rates varied across hospitals (Figure 3). The median risk‐standardized readmission rate was 17.3%, and the 25th and 75th percentiles were 16.9% and 17.9%, respectively. The 5th percentile was 16.0% and the 95th percentile was 19.1%. The odds of readmission at a hospital one standard deviation above average were 1.4 times those at a hospital one standard deviation below average.
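The 1.4 ratio follows directly from the standard deviation of the random hospital intercepts on the log-odds scale: comparing hospitals one SD above versus one SD below the mean gives an odds ratio of exp(2σ). A worked illustration (the σ value below is chosen to reproduce the reported ratio and is not taken from the paper):

```python
import math

def odds_ratio_plus_minus_one_sd(sigma):
    """Odds ratio comparing hospitals one SD above vs one SD below the mean
    random intercept: exp(+sigma) / exp(-sigma) = exp(2*sigma)."""
    return math.exp(2.0 * sigma)

# sigma ~= 0.17 on the log-odds scale (illustrative) yields a ratio near 1.4.
```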

Figure 3
Distribution of risk‐standardized readmission rates.

Administrative Model Validation

In the remaining 50% of pneumonia index hospitalizations from 2006, and the entire 2005 cohort, regression coefficients and standard errors of model variables were similar to those in the development data set. Model performance using 2005 data was consistent with model performance using the 2006 development and validation half‐samples (Table 2).

Medical Record Validation

After exclusions, the medical record sample taken from the National Pneumonia Project included 47,429 cases, with an unadjusted 30‐day readmission rate of 17.0%. The final medical record risk‐adjustment model included a total of 35 variables, whose prevalence and association with readmission risk varied modestly (Table 3). Performance of the medical record and administrative models was similar (areas under the ROC curve 0.59 and 0.63, respectively) (Table 4). Additionally, in the administrative model, predicted readmission rates ranged from 8% in the lowest predicted decile to 30% in the highest predicted decile, while in the medical record model, the corresponding rates varied from 10% to 26%.

Regression Model Results from Medical Record Sample
Variable | Percent | Estimate | Standard Error | Odds Ratio | 95% CI
Age − 65, mean (SD) | 15.24 (7.87) | −0.003 | 0.002 | 0.997 | 0.993–1.000
Male | 46.18 | 0.122 | 0.025 | 1.130 | 1.075–1.188
Nursing home resident | 17.71 | 0.035 | 0.037 | 1.036 | 0.963–1.114
Neoplastic disease | 6.80 | 0.130 | 0.049 | 1.139 | 1.034–1.254
Liver disease | 1.04 | −0.089 | 0.123 | 0.915 | 0.719–1.164
History of heart failure | 28.98 | 0.234 | 0.029 | 1.264 | 1.194–1.339
History of renal disease | 8.51 | 0.188 | 0.047 | 1.206 | 1.100–1.323
Altered mental status | 17.95 | 0.009 | 0.034 | 1.009 | 0.944–1.080
Pleural effusion | 21.20 | 0.165 | 0.030 | 1.179 | 1.111–1.251
BUN ≥30 mg/dl | 23.28 | 0.160 | 0.033 | 1.174 | 1.100–1.252
BUN missing | 14.56 | −0.101 | 0.185 | 0.904 | 0.630–1.298
Systolic BP <90 mmHg | 2.95 | 0.068 | 0.070 | 1.070 | 0.932–1.228
Systolic BP missing | 11.21 | 0.149 | 0.425 | 1.160 | 0.504–2.669
Pulse ≥125/min | 7.73 | 0.036 | 0.047 | 1.036 | 0.945–1.137
Pulse missing | 11.22 | 0.210 | 0.405 | 1.234 | 0.558–2.729
Respiratory rate ≥30/min | 16.38 | 0.079 | 0.034 | 1.082 | 1.012–1.157
Respiratory rate missing | 11.39 | 0.204 | 0.240 | 1.226 | 0.765–1.964
Sodium <130 mmol/L | 4.82 | 0.136 | 0.057 | 1.145 | 1.025–1.280
Sodium missing | 14.39 | 0.049 | 0.143 | 1.050 | 0.793–1.391
Glucose ≥250 mg/dl | 5.19 | −0.005 | 0.057 | 0.995 | 0.889–1.114
Glucose missing | 15.44 | −0.156 | 0.105 | 0.855 | 0.696–1.051
Hematocrit <30% | 7.77 | 0.270 | 0.044 | 1.310 | 1.202–1.428
Hematocrit missing | 13.62 | −0.071 | 0.135 | 0.932 | 0.715–1.215
Creatinine ≥2.5 mg/dL | 4.68 | 0.109 | 0.062 | 1.115 | 0.989–1.258
Creatinine missing | 14.63 | 0.200 | 0.167 | 1.221 | 0.880–1.695
WBC 6–12 b/L | 38.04 | −0.021 | 0.049 | 0.979 | 0.889–1.079
WBC >12 b/L | 41.45 | −0.068 | 0.049 | 0.934 | 0.848–1.029
WBC missing | 12.85 | 0.167 | 0.162 | 1.181 | 0.860–1.623
Immunosuppressive therapy | 15.01 | 0.347 | 0.035 | 1.415 | 1.321–1.516
Chronic lung disease | 42.16 | 0.137 | 0.028 | 1.147 | 1.086–1.211
Coronary artery disease | 39.57 | 0.150 | 0.028 | 1.162 | 1.100–1.227
Diabetes mellitus | 20.90 | 0.137 | 0.033 | 1.147 | 1.076–1.223
Alcohol/drug abuse | 3.40 | −0.099 | 0.071 | 0.906 | 0.788–1.041
Dementia/Alzheimer's disease | 16.38 | 0.125 | 0.038 | 1.133 | 1.052–1.222
Splenectomy | 0.44 | 0.016 | 0.186 | 1.016 | 0.706–1.463

  • NOTE: Between‐state variance = 0.024; standard error = 0.00.

  • Abbreviations: BP, blood pressure; BUN, blood urea nitrogen; CI, confidence interval; SD, standard deviation; WBC, white blood cell count.
Model Performance of Medical Record Model
Model | Calibration (γ0, γ1) | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals, % < −2 | % in (−2, 0) | % in (0, 2) | % ≥ 2 | Model χ2 (No. of Covariates)
Medical record model development sample (NP), N = 47,429; No. of 30‐day readmissions = 8,042 | (0, 1) | (0.10, 0.26) | 0.59 | 0 | 83.04 | 5.28 | 11.68 | 710 (35)
Linked administrative model validation sample, N = 47,429; No. of 30‐day readmissions = 8,042 | (0, 1) | (0.08, 0.30) | 0.63 | 0 | 83.04 | 6.94 | 10.01 | 1,414 (40)

  • Abbreviations: AUC, area under the receiver operating curve.

  • Max‐rescaled R‐square.

  • Observed rates.

  • Wald chi‐square.

The correlation coefficient of the estimated state‐specific standardized readmission rates from the administrative and medical record models was 0.96, and the proportion of the variance explained by the model was 0.92 (Figure 4).

Figure 4
Comparison of state‐level risk‐standardized readmission rates from medical record and administrative models. Abbreviations: HGLM, hierarchical generalized linear models.

DISCUSSION

We have described the development, validation, and results of a hospital, 30‐day, risk‐standardized readmission model for pneumonia that was created to support current federal transparency initiatives. The model uses administrative claims data from Medicare fee‐for‐service patients and produces results that are comparable to a model based on information obtained through manual abstraction of medical records. We observed an overall 30‐day readmission rate of 17%, and our analyses revealed substantial variation across US hospitals, suggesting that improvement by lower performing institutions is an achievable goal.

Because more than one in six pneumonia patients are rehospitalized shortly after discharge, and because pneumonia hospitalizations represent an enormous expense to the Medicare program, prevention of readmissions is now widely recognized to offer a substantial opportunity to improve patient outcomes while simultaneously lowering health care costs. Accordingly, promotion of strategies to reduce readmission rates has become a key priority for payers and quality‐improvement organizations. These range from policy‐level attempts to stimulate change, such as publicly reporting hospital readmission rates on government websites, to establishing accreditation standards, such as the Joint Commission's requirement to accurately reconcile medications, to creating quality improvement collaboratives focused on sharing best practices across institutions. Regardless of the approach taken, a valid, risk‐adjusted measure of performance is required to evaluate and track performance over time. The measure we have described meets the National Quality Forum's measure evaluation criteria: it addresses an important clinical topic for which there appear to be significant opportunities for improvement; it is precisely defined and has been subjected to validity and reliability testing; it is risk‐adjusted based on patient clinical factors present at the start of care; it is feasible to produce; and it is understandable by a broad range of potential users.21 Because hospitalists are the physicians primarily responsible for the care of patients with pneumonia at US hospitals, and because they frequently serve as the physician champions for quality improvement activities related to pneumonia, it is especially important that they maintain a thorough understanding of the measures and methodologies underlying current efforts to measure hospital performance.

Several features of our approach warrant additional comment. First, we deliberately chose to measure all readmission events rather than attempt to discriminate between potentially preventable and nonpreventable readmissions. From the patient perspective, readmission for any reason is a concern, and limiting the measure to pneumonia‐related readmissions could make it susceptible to gaming by hospitals. Moreover, determining whether a readmission is related to a potential quality problem is not straightforward. For example, a patient with pneumonia whose discharge medications were prescribed incorrectly may be readmitted with a hip fracture following an episode of syncope. It would be inappropriate to treat this readmission as unrelated to the care the patient received for pneumonia. Additionally, while our approach does not presume that every readmission is preventable, the goal is to reduce the risk of readmissions generally (not just in narrowly defined subpopulations), and successful interventions to reduce rehospitalization have typically demonstrated reductions in all‐cause readmission.9, 22 Second, deaths that occurred within 30 days of discharge, yet that were not accompanied by a hospital readmission, were not counted as a readmission outcome. While it may seem inappropriate to treat a postdischarge death as a nonevent (rather than censoring or excluding such cases), alternative analytic approaches, such as using a hierarchical survival model, are not currently computationally feasible with large national data sets. Fortunately, only a relatively small proportion of discharges fell into this category (5.2% of index cases in the 2006 development sample died within 30 days of discharge without being readmitted). An alternative approach to handling the competing outcome of death would have been to use a composite outcome of readmission or death. 
However, we believe that it is important to report the outcomes separately because factors that predict readmission and mortality may differ, and when making comparisons across hospitals it would not be possible to determine whether differences in rate were due to readmission or mortality. Third, while the patient‐level readmission model showed only modest discrimination, we intentionally excluded covariates such as race and socioeconomic status, as well as in‐hospital events and potential complications of care, and whether patients were discharged home or to a skilled nursing facility. While these variables could have improved predictive ability, they may be directly or indirectly related to quality or supply factors that should not be included in a model that seeks to control for patient clinical characteristics. For example, if hospitals with a large share of poor patients have higher readmission rates, then including income in the model will obscure differences that are important to identify. While we believe that the decision to exclude such factors in the model is in the best interest of patients, and supports efforts to reduce health inequality in society more generally, we also recognize that hospitals that care for a disproportionate share of poor patients are likely to require additional resources to overcome these social factors. Fourth, we limited the analysis to patients with a principal diagnosis of pneumonia, and chose not to also include those with a principal diagnosis of sepsis or respiratory failure coupled with a secondary diagnosis of pneumonia. While the broader definition is used by CMS in the National Pneumonia Project, that initiative relied on chart abstraction to differentiate pneumonia present at the time of admission from cases developing as a complication of hospitalization. 
Additionally, we did not attempt to differentiate between community‐acquired and healthcare‐associated pneumonia; however, our approach is consistent with the National Pneumonia Project and the Pneumonia Patient Outcomes Research Team.18 Fifth, while our model estimates readmission rates at the hospital level, we recognize that readmissions are influenced by a complex and extensive range of factors. In this context, greater cooperation between hospitals and other care providers will almost certainly be required in order to achieve dramatic improvement in readmission rates, which in turn will depend upon changes to the way serious illness is paid for. Some options that have recently been described include imposing financial penalties for early readmission, extending the boundaries of case‐based payment beyond hospital discharge, and bundling payments between hospitals and physicians.23–25

Our measure has several limitations. First, our models were developed and validated using Medicare data, and the results may not apply to pneumonia patients less than 65 years of age. However, most patients hospitalized with pneumonia in the US are 65 or older. In addition, we were unable to test the model with a Medicare managed care population, because data are not currently available on such patients. Finally, the medical record‐based validation was conducted by state‐level analysis because the sample size was insufficient to carry this out at the hospital level.

In conclusion, more than 17% of Medicare beneficiaries are readmitted within 30 days following discharge after a hospitalization for pneumonia, and rates vary substantially across institutions. The development of a valid measure of hospital performance and public reporting are important first steps towards focusing attention on this problem. Actual improvement will now depend on whether hospitals and partner organizations are successful at identifying and implementing effective methods to prevent readmission.

Hospital readmissions are emblematic of the numerous challenges facing the US health care system. Despite high levels of spending, nearly 20% of Medicare beneficiaries are readmitted within 30 days of hospital discharge, many readmissions are considered preventable, and rates vary widely by hospital and region.1 Further, while readmissions have been estimated to cost taxpayers as much as $17 billion annually, the current fee‐for‐service method of paying for the acute care needs of seniors rewards hospitals financially for readmission, not their prevention.2

Pneumonia is the second most common reason for hospitalization among Medicare beneficiaries, accounting for approximately 650,000 admissions annually,3 and has been a focus of national quality‐improvement efforts for more than a decade.4, 5 Despite improvements in key processes of care, rates of readmission within 30 days of discharge following a hospitalization for pneumonia have been reported to vary from 10% to 24%.68 Among several factors, readmissions are believed to be influenced by the quality of both inpatient and outpatient care, and by care‐coordination activities occurring in the transition from inpatient to outpatient status.912

Public reporting of hospital performance is considered a key strategy for improving quality, reducing costs, and increasing the value of hospital care, both in the US and worldwide.13 In 2009, the Centers for Medicare & Medicaid Services (CMS) expanded its reporting initiatives by adding risk‐adjusted hospital readmission rates for acute myocardial infarction, heart failure, and pneumonia to the Hospital Compare website.14, 15 Readmission rates are an attractive focus for public reporting for several reasons. First, in contrast to most process‐based measures of quality (eg, whether a patient with pneumonia received a particular antibiotic), a readmission is an adverse outcome that matters to patients and families.16 Second, unlike process measures whose assessment requires detailed review of medical records, readmissions can be easily determined from standard hospital claims. Finally, readmissions are costly, and their prevention could yield substantial savings to society.

A necessary prerequisite for public reporting of readmission is a validated, risk‐adjusted measure that can be used to track performance over time and can facilitate comparisons across institutions. Toward this end, we describe the development, validation, and results of a National Quality Forum‐approved and CMS‐adopted model to estimate hospital‐specific, risk‐standardized, 30‐day readmission rates for Medicare patients hospitalized with pneumonia.17

METHODS

Data Sources

We used 20052006 claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files to develop and validate the administrative model. The Medicare Enrollment Database was used to determine Medicare fee‐for‐service enrollment and mortality statuses. A medical record model, used for additional validation of the administrative model, was developed using information abstracted from the charts of 75,616 pneumonia cases from 19982001 as part of the National Pneumonia Project, a CMS quality improvement initiative.18

Study Cohort

We identified hospitalizations of patients 65 years of age and older with a principal diagnosis of pneumonia (International Classification of Diseases, 9th Revision, Clinical Modification codes 480.XX, 481, 482.XX, 483.X, 485, 486, 487.0) as potential index pneumonia admissions. Because our focus was readmission for patients discharged from acute care settings, we excluded admissions in which patients died or were transferred to another acute care facility. Additionally, we restricted analysis to patients who had been enrolled in fee‐for‐service Medicare Parts A and B, for at least 12 months prior to their pneumonia hospitalization, so that we could use diagnostic codes from all inpatient and outpatient encounters during that period to enhance identification of comorbidities.

Outcome

The outcome was 30‐day readmission, defined as occurrence of at least one hospitalization for any cause within 30 days of discharge after an index admission. Readmissions were identified from hospital claims data, and were attributed to the hospital that had discharged the patient. A 30‐day time frame was selected because it is a clinically meaningful period during which hospitals can be expected to collaborate with other organizations and providers to implement measures to reduce the risk of rehospitalization.

Candidate and Final Model Variables

Candidate variables for the administrative claims model were selected by a clinician team from 189 diagnostic groups included in the Hierarchical Condition Category (HCC) clinical classification system.19 The HCC clinical classification system was developed for CMS in preparation for all‐encounter risk adjustment for Medicare Advantage (managed care). Under the HCC algorithm, the 15,000+ ICD‐9‐CM diagnosis codes are assigned to one of 189 clinically‐coherent condition categories (CCs). We used the April 2008 version of the ICD‐9‐CM to CC assignment map, which is maintained by CMS and posted at http://www.qualitynet.org. A total of 154 CCs were considered to be potentially relevant to readmission outcome and were included for further consideration. Some CCs were further combined into clinically coherent groupings of CCs. Our set of candidate variables ultimately included 97 CC‐based variables, two demographic variables (age and sex), and two procedure codes potentially relevant to readmission risk (history of percutaneous coronary intervention [PCI] and history of coronary artery bypass graft [CABG]).

The final risk‐adjustment model included 39 variables selected by the team of clinicians and analysts, primarily based on their clinical relevance but also with knowledge of the strength of their statistical association with readmission outcome (Table 1). For each patient, the presence or absence of these conditions was assessed from multiple sources, including secondary diagnoses during the index admission, principal and secondary diagnoses from hospital admissions in the 12 months prior to the index admission, and diagnoses from hospital outpatient and physician encounters 12 months before the index admission. A small number of CCs were considered to represent potential complications of care (eg, bleeding). Because we did not want to adjust for complications of care occurring during the index admission, a patient was not considered to have one of these conditions unless it was also present in at least one encounter prior to the index admission.

Regression Model Variables and Results in Derivation Sample

| Variable | Frequency (%) | Estimate | Standard Error | Odds Ratio | 95% CI |
| --- | --- | --- | --- | --- | --- |
| Intercept |  | -2.395 | 0.021 |  |  |
| Age - 65 (years above 65, continuous) |  | -0.0001 | 0.001 | 1.000 | 0.998-1.001 |
| Male | 45 | 0.071 | 0.012 | 1.073 | 1.048-1.099 |
| History of CABG | 5.2 | -0.179 | 0.027 | 0.836 | 0.793-0.881 |
| Metastatic cancer and acute leukemia (CC 7) | 4.3 | 0.177 | 0.029 | 1.194 | 1.128-1.263 |
| Lung, upper digestive tract, and other severe cancers (CC 8) | 6.0 | 0.256 | 0.024 | 1.292 | 1.232-1.354 |
| Diabetes and DM complications (CC 15-20, 119, 120) | 36 | 0.059 | 0.012 | 1.061 | 1.036-1.087 |
| Disorders of fluid/electrolyte/acid-base (CC 22, 23) | 34 | 0.149 | 0.013 | 1.160 | 1.131-1.191 |
| Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 46 | 0.118 | 0.012 | 1.126 | 1.099-1.153 |
| Other psychiatric disorders (CC 60) | 12 | 0.108 | 0.017 | 1.114 | 1.077-1.151 |
| Cardio-respiratory failure and shock (CC 79) | 16 | 0.114 | 0.016 | 1.121 | 1.087-1.156 |
| Congestive heart failure (CC 80) | 39 | 0.151 | 0.014 | 1.163 | 1.133-1.194 |
| Chronic atherosclerosis (CC 83, 84) | 47 | 0.051 | 0.013 | 1.053 | 1.027-1.079 |
| Valvular and rheumatic heart disease (CC 86) | 23 | 0.062 | 0.014 | 1.064 | 1.036-1.093 |
| Arrhythmias (CC 92, 93) | 38 | 0.126 | 0.013 | 1.134 | 1.107-1.163 |
| Vascular or circulatory disease (CC 104-106) | 38 | 0.088 | 0.012 | 1.092 | 1.066-1.119 |
| COPD (CC 108) | 58 | 0.186 | 0.013 | 1.205 | 1.175-1.235 |
| Fibrosis of lung and other chronic lung disorders (CC 109) | 17 | 0.086 | 0.015 | 1.090 | 1.059-1.122 |
| Renal failure (CC 131) | 17 | 0.147 | 0.016 | 1.158 | 1.122-1.196 |
| Protein-calorie malnutrition (CC 21) | 7.9 | 0.121 | 0.020 | 1.129 | 1.086-1.173 |
| History of infection (CC 1, 3-6) | 35 | 0.068 | 0.012 | 1.071 | 1.045-1.097 |
| Severe hematological disorders (CC 44) | 3.6 | 0.117 | 0.028 | 1.125 | 1.064-1.188 |
| Decubitus ulcer or chronic skin ulcer (CC 148, 149) | 10 | 0.101 | 0.018 | 1.106 | 1.067-1.146 |
| History of pneumonia (CC 111-113) | 44 | 0.065 | 0.013 | 1.067 | 1.041-1.094 |
| Vertebral fractures (CC 157) | 5.1 | 0.113 | 0.024 | 1.120 | 1.068-1.174 |
| Other injuries (CC 162) | 32 | 0.061 | 0.012 | 1.063 | 1.038-1.089 |
| Urinary tract infection (CC 135) | 26 | 0.064 | 0.014 | 1.066 | 1.038-1.095 |
| Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal, and other cancers and tumors (CC 9-10) | 16 | 0.050 | 0.016 | 1.051 | 1.018-1.084 |
| End-stage renal disease or dialysis (CC 129, 130) | 1.9 | 0.131 | 0.037 | 1.140 | 1.060-1.226 |
| Drug/alcohol abuse/dependence/psychosis (CC 51-53) | 12 | 0.081 | 0.017 | 1.084 | 1.048-1.121 |
| Septicemia/shock (CC 2) | 6.3 | 0.094 | 0.022 | 1.098 | 1.052-1.146 |
| Other gastrointestinal disorders (CC 36) | 56 | 0.073 | 0.012 | 1.076 | 1.051-1.102 |
| Acute coronary syndrome (CC 81, 82) | 8.3 | 0.126 | 0.019 | 1.134 | 1.092-1.178 |
| Pleural effusion/pneumothorax (CC 114) | 12 | 0.083 | 0.017 | 1.086 | 1.051-1.123 |
| Other urinary tract disorders (CC 136) | 24 | 0.059 | 0.014 | 1.061 | 1.033-1.090 |
| Stroke (CC 95, 96) | 10 | 0.047 | 0.019 | 1.049 | 1.011-1.088 |
| Dementia and senility (CC 49, 50) | 27 | 0.031 | 0.014 | 1.031 | 1.004-1.059 |
| Hemiplegia, paraplegia, paralysis, functional disability (CC 67-69, 100-102, 177, 178) | 7.4 | 0.068 | 0.021 | 1.070 | 1.026-1.116 |
| Other lung disorders (CC 115) | 45 | 0.005 | 0.012 | 1.005 | 0.982-1.030 |
| Major psychiatric disorders (CC 54-56) | 11 | 0.038 | 0.018 | 1.038 | 1.003-1.075 |
| Asthma (CC 110) | 12 | 0.006 | 0.018 | 1.006 | 0.972-1.041 |

Abbreviations: CABG, coronary artery bypass graft; CC, condition category; CI, confidence interval; COPD, chronic obstructive pulmonary disease; DM, diabetes mellitus.

Model Derivation

For the development of the administrative claims model, we randomly sampled half of the 2006 hospitalizations that met the inclusion criteria. To assess model performance at the patient level, we calculated the area under the receiver operating characteristic curve (AUC) and calculated observed readmission rates in the lowest and highest deciles of predicted readmission probabilities. We also compared performance with a null model, a model that adjusted for age and sex, and a model that included all candidate variables.20
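The derivation steps above (random half-sample split, patient-level AUC, and observed rates in the extreme predicted deciles) can be illustrated on simulated data. This is a sketch of the performance checks only, not the authors' model or data; the single risk score stands in for the full covariate set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated cohort standing in for the 2006 claims data
n = 20000
x = rng.normal(size=n)                    # summary risk score for the covariates
p = 1 / (1 + np.exp(-(-1.6 + 0.5 * x)))  # "true" readmission probabilities
y = rng.random(n) < p                     # observed readmissions (boolean)

# Random half-sample split into derivation and validation sets
idx = rng.permutation(n)
dev, val = idx[: n // 2], idx[n // 2:]

def auc(score, outcome):
    """AUC via the Mann-Whitney rank identity (no ties expected here)."""
    order = np.argsort(score)
    ranks = np.empty(len(score))
    ranks[order] = np.arange(1, len(score) + 1)
    pos = outcome.sum()
    neg = len(outcome) - pos
    return (ranks[outcome].sum() - pos * (pos + 1) / 2) / (pos * neg)

def decile_range(score, outcome):
    """Observed outcome rates in the lowest and highest predicted deciles."""
    lo_cut, hi_cut = np.quantile(score, [0.1, 0.9])
    return outcome[score <= lo_cut].mean(), outcome[score >= hi_cut].mean()

print(round(auc(p[dev], y[dev]), 2), round(auc(p[val], y[val]), 2))
lo, hi = decile_range(p[val], y[val])
print(round(lo, 2), round(hi, 2))
```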

Risk‐Standardized Readmission Rates

Using hierarchical logistic regression, we modeled the log‐odds of readmission within 30 days of discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation, or clustering, of observed outcomes, and models the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. We then calculated hospital‐specific readmission rates as the ratio of predicted‐to‐expected readmissions (similar to an observed/expected ratio) multiplied by the national unadjusted rate, a form of indirect standardization. The predicted number of readmissions for each hospital is estimated from its own patient mix and its estimated hospital‐specific intercept; the expected number of readmissions is estimated from the same patient mix but the average hospital‐specific intercept. To assess hospital performance in any given year, we re‐estimate the model coefficients using that year's data.
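The indirect standardization step can be stated compactly: RSRR = (predicted readmissions / expected readmissions) × national unadjusted rate. A minimal sketch with hypothetical inputs (the underlying hierarchical model was fitted by the authors in SAS; this only illustrates the final ratio):

```python
import numpy as np

def risk_standardized_rate(p_predicted, p_expected, national_rate):
    """Indirect standardization: (predicted / expected) x national unadjusted rate.

    p_predicted: per-patient probabilities using the hospital's own intercept
    p_expected:  the same patients scored with the average hospital intercept
    """
    return p_predicted.sum() / p_expected.sum() * national_rate

# Hypothetical 3-patient hospital whose own intercept implies more readmissions
# than expected for its case mix, so its RSRR exceeds the national 17.4%.
p_pred = np.array([0.22, 0.18, 0.30])
p_exp = np.array([0.20, 0.15, 0.25])
print(round(risk_standardized_rate(p_pred, p_exp, 0.174), 3))  # 0.203
```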

Model Validation: Administrative Claims

We compared the model performance in the development sample with its performance in the sample from the 2006 data that was not selected for the development set, and separately among pneumonia admissions in 2005. The model was recalibrated in each validation set.

Model Validation: Medical Record Abstraction

We developed a separate medical record‐based model of readmission risk using information from charts that had previously been abstracted as part of CMS's National Pneumonia Project. To select variables for this model, the clinician team: 1) reviewed the list of variables that were included in a medical record model that was previously developed for validating the National Quality Forum‐approved pneumonia mortality measure; 2) reviewed a list of other potential candidate variables available in the National Pneumonia Project dataset; and 3) reviewed variables that emerged as potentially important predictors of readmission, based on a systematic review of the literature that was conducted as part of measure development. This selection process resulted in a final medical record model that included 35 variables.

We linked patients in the National Pneumonia Project cohort to their Medicare claims data, including claims from one year before the index hospitalization, so that we could calculate risk‐standardized readmission rates in this cohort separately using medical record and claims‐based models. This analysis was conducted at the state level, for the 50 states plus the District of Columbia and Puerto Rico, because medical record data were unavailable in sufficient numbers to permit hospital‐level comparisons. To examine the relationship between risk‐standardized rates obtained from medical record and administrative data models, we estimated a linear regression model describing the association between the two rates, weighting each state by number of index hospitalizations, and calculated the correlation coefficient and the intercept and slope of this equation. A slope close to 1 and an intercept close to 0 would provide evidence that risk‐standardized state readmission rates from the medical record and claims models were similar. We also calculated the difference between state risk‐standardized readmission rates from the two models.
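The state-level agreement analysis is a volume-weighted least-squares fit of one set of risk-standardized rates on the other. A sketch with hypothetical state rates and volumes (the function, data, and printed values are illustrative, not the paper's results):

```python
import numpy as np

def weighted_fit(x, y, w):
    """Volume-weighted least squares of y on x; returns (intercept, slope).
    Slope near 1 and intercept near 0 indicate agreement between models."""
    W = w / w.sum()
    xm, ym = W @ x, W @ y
    slope = (W @ ((x - xm) * (y - ym))) / (W @ ((x - xm) ** 2))
    return ym - slope * xm, slope

# Hypothetical state-level risk-standardized rates (fractions) and numbers of
# index hospitalizations used as weights
claims_rate = np.array([0.16, 0.17, 0.18, 0.19])     # administrative model
chart_rate = np.array([0.158, 0.171, 0.179, 0.192])  # medical record model
n_index = np.array([900, 1500, 1200, 800])

intercept, slope = weighted_fit(claims_rate, chart_rate, n_index)
corr = np.corrcoef(claims_rate, chart_rate)[0, 1]
print(round(intercept, 3), round(slope, 2), round(corr, 2))
```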

Analyses were conducted with the use of SAS version 9.1.3 (SAS Institute Inc, Cary, NC). Models were fitted separately for the National Pneumonia Project and 2006 cohort. We estimated the hierarchical models using the GLIMMIX procedure in SAS. The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation and Performance

After exclusions were applied, the 2006 sample included 453,251 pneumonia hospitalizations (Figure 1). The development sample consisted of 226,545 hospitalizations at 4,675 hospitals, with an overall unadjusted 30‐day readmission rate of 17.4%. In 11,694 index cases (5.2%), the patient died within 30 days without being readmitted. The median hospital readmission rate was 16.3%; the 25th and 75th percentile rates were 11.1% and 21.3%, and the 10th and 90th percentile rates were 4.6% and 26.7% (Figure 2).

Figure 1
Pneumonia admissions included in measure calculation.
Figure 2
Distribution of unadjusted readmission rates.

The claims model included 39 variables (age, sex, and 37 clinical variables) (Table 1). The mean age of the cohort was 80.0 years; 55.5% of patients were women and 11.1% were nonwhite. Observed readmission rates in the development sample ranged from 9% in the lowest decile of predicted readmission risk to 32% in the highest decile, an absolute difference of 23 percentage points. The AUC was 0.63. For comparison, a model with only age and sex had an AUC of 0.51, and a model with all candidate variables had an AUC of 0.63 (Table 2).

Readmission Model Performance of Administrative Claims Models

| Sample | Calibration (γ0, γ1)* | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals < -2, % | [-2, 0), % | [0, 2), % | ≥2, % | Model χ2 (No. of Covariates) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Development sample: 2006 (1st half), N = 226,545 | (0, 1) | (0.09, 0.32) | 0.63 | 0 | 82.62 | 7.39 | 9.99 | 6,843 (40) |
| Validation sample: 2006 (2nd half), N = 226,706 | (0.002, 0.997) | (0.09, 0.31) | 0.63 | 0 | 82.55 | 7.45 | 9.99 | 6,870 (40) |
| Validation sample: 2005, N = 536,015 | (0.035, 1.008) | (0.08, 0.31) | 0.63 | 0 | 82.67 | 7.31 | 10.03 | 16,241 (40) |

  • NOTE: Over‐fitting indices (γ0, γ1) provide evidence of over‐fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities are given by p = 1/(1 + exp{-Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample; ie, Logit(P(Y = 1|Z)) = γ0 + γ1Z. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over‐fitting.

  • Abbreviations: AUC, area under the receiver operating characteristic curve.

  • Max‐rescaled R‐square.

  • Observed rates.

  • Wald chi‐square.
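The over-fitting indices in the table note can be reproduced mechanically: form the linear predictor Z from the derivation-sample coefficients, then refit a two-parameter logistic model in the validation data. A self-contained sketch on simulated data (the minimal Newton fitter is a stand-in for standard logistic-regression software, not the authors' SAS code):

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Minimal Newton-Raphson logistic regression (illustrative stand-in)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ b))
        # Newton step: solve (X' W X) delta = X'(y - p), with W = p(1-p)
        b += np.linalg.solve((X.T * (p * (1 - p))) @ X, X.T @ (y - p))
    return b

rng = np.random.default_rng(1)
n = 50000
x = rng.normal(size=n)
z = -1.6 + 0.5 * x                                # linear predictor Z = Xb
y = (rng.random(n) < 1 / (1 + np.exp(-z))).astype(float)

# Refit Logit(P(Y = 1|Z)) = g0 + g1*Z in "validation" data drawn from the same
# process; estimates near (0, 1) indicate little over-fitting.
g0, g1 = fit_logit(np.column_stack([np.ones(n), z]), y)
print(round(g0, 2), round(g1, 2))  # both should land near (0, 1)
```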

Hospital Risk‐Standardized Readmission Rates

Risk‐standardized readmission rates varied across hospitals (Figure 3). The median risk‐standardized readmission rate was 17.3%, and the 25th and 75th percentiles were 16.9% and 17.9%, respectively. The 5th percentile was 16.0% and the 95th percentile was 19.1%. The odds of readmission for a hospital one standard deviation above average were 1.4 times those of a hospital one standard deviation below average.
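The 1.4-fold odds contrast follows directly from the random-intercept scale: two hospitals one standard deviation above and below average differ by twice the between-hospital standard deviation in log-odds. A one-line check (the SD value of 0.17 is inferred from the reported ratio, not quoted in the paper):

```python
import math

# Hospitals at +1 SD and -1 SD on the random-intercept (log-odds) scale differ
# by 2*s in log-odds, so their odds ratio is exp(2*s).
def sd_gap_odds_ratio(s):
    return math.exp(2 * s)

# A between-hospital SD of about 0.17 reproduces the reported ratio of 1.4.
print(round(sd_gap_odds_ratio(0.17), 1))  # 1.4
```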

Figure 3
Distribution of risk‐standardized readmission rates.

Administrative Model Validation

In the remaining 50% of pneumonia index hospitalizations from 2006, and the entire 2005 cohort, regression coefficients and standard errors of model variables were similar to those in the development data set. Model performance using 2005 data was consistent with model performance using the 2006 development and validation half‐samples (Table 2).

Medical Record Validation

After exclusions, the medical record sample taken from the National Pneumonia Project included 47,429 cases, with an unadjusted 30‐day readmission rate of 17.0%. The final medical record risk‐adjustment model included a total of 35 variables, whose prevalence and association with readmission risk varied modestly (Table 3). Performance of the medical record and administrative models was similar (areas under the ROC curve 0.59 and 0.63, respectively) (Table 4). Additionally, in the administrative model, predicted readmission rates ranged from 8% in the lowest predicted decile to 30% in the highest predicted decile, while in the medical record model, the corresponding rates varied from 10% to 26%.

Regression Model Results from Medical Record Sample

| Variable | Percent | Estimate | Standard Error | Odds Ratio | 95% CI |
| --- | --- | --- | --- | --- | --- |
| Age - 65, mean (SD) | 15.24 (7.87) | -0.003 | 0.002 | 0.997 | 0.993-1.000 |
| Male | 46.18 | 0.122 | 0.025 | 1.130 | 1.075-1.188 |
| Nursing home resident | 17.71 | 0.035 | 0.037 | 1.036 | 0.963-1.114 |
| Neoplastic disease | 6.80 | 0.130 | 0.049 | 1.139 | 1.034-1.254 |
| Liver disease | 1.04 | -0.089 | 0.123 | 0.915 | 0.719-1.164 |
| History of heart failure | 28.98 | 0.234 | 0.029 | 1.264 | 1.194-1.339 |
| History of renal disease | 8.51 | 0.188 | 0.047 | 1.206 | 1.100-1.323 |
| Altered mental status | 17.95 | 0.009 | 0.034 | 1.009 | 0.944-1.080 |
| Pleural effusion | 21.20 | 0.165 | 0.030 | 1.179 | 1.111-1.251 |
| BUN ≥30 mg/dl | 23.28 | 0.160 | 0.033 | 1.174 | 1.100-1.252 |
| BUN missing | 14.56 | -0.101 | 0.185 | 0.904 | 0.630-1.298 |
| Systolic BP <90 mmHg | 2.95 | 0.068 | 0.070 | 1.070 | 0.932-1.228 |
| Systolic BP missing | 11.21 | 0.149 | 0.425 | 1.160 | 0.504-2.669 |
| Pulse ≥125/min | 7.73 | 0.036 | 0.047 | 1.036 | 0.945-1.137 |
| Pulse missing | 11.22 | 0.210 | 0.405 | 1.234 | 0.558-2.729 |
| Respiratory rate ≥30/min | 16.38 | 0.079 | 0.034 | 1.082 | 1.012-1.157 |
| Respiratory rate missing | 11.39 | 0.204 | 0.240 | 1.226 | 0.765-1.964 |
| Sodium <130 mmol/L | 4.82 | 0.136 | 0.057 | 1.145 | 1.025-1.280 |
| Sodium missing | 14.39 | 0.049 | 0.143 | 1.050 | 0.793-1.391 |
| Glucose ≥250 mg/dl | 5.19 | -0.005 | 0.057 | 0.995 | 0.889-1.114 |
| Glucose missing | 15.44 | -0.156 | 0.105 | 0.855 | 0.696-1.051 |
| Hematocrit <30% | 7.77 | 0.270 | 0.044 | 1.310 | 1.202-1.428 |
| Hematocrit missing | 13.62 | -0.071 | 0.135 | 0.932 | 0.715-1.215 |
| Creatinine ≥2.5 mg/dL | 4.68 | 0.109 | 0.062 | 1.115 | 0.989-1.258 |
| Creatinine missing | 14.63 | 0.200 | 0.167 | 1.221 | 0.880-1.695 |
| WBC 6-12 b/L | 38.04 | -0.021 | 0.049 | 0.979 | 0.889-1.079 |
| WBC >12 b/L | 41.45 | -0.068 | 0.049 | 0.934 | 0.848-1.029 |
| WBC missing | 12.85 | 0.167 | 0.162 | 1.181 | 0.860-1.623 |
| Immunosuppressive therapy | 15.01 | 0.347 | 0.035 | 1.415 | 1.321-1.516 |
| Chronic lung disease | 42.16 | 0.137 | 0.028 | 1.147 | 1.086-1.211 |
| Coronary artery disease | 39.57 | 0.150 | 0.028 | 1.162 | 1.100-1.227 |
| Diabetes mellitus | 20.90 | 0.137 | 0.033 | 1.147 | 1.076-1.223 |
| Alcohol/drug abuse | 3.40 | -0.099 | 0.071 | 0.906 | 0.788-1.041 |
| Dementia/Alzheimer's disease | 16.38 | 0.125 | 0.038 | 1.133 | 1.052-1.222 |
| Splenectomy | 0.44 | 0.016 | 0.186 | 1.016 | 0.706-1.463 |

  • NOTE: Between‐state variance = 0.024; standard error = 0.00.

  • Abbreviations: BP, blood pressure; BUN, blood urea nitrogen; CI, confidence interval; SD, standard deviation; WBC, white blood cell count.
Model Performance of Medical Record Model

| Model | Calibration (γ0, γ1)* | Predictive Ability (Lowest Decile, Highest Decile) | AUC | Pearson Residuals < -2, % | [-2, 0), % | [0, 2), % | ≥2, % | Model χ2 (No. of Covariates) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Medical record model, development sample (National Pneumonia Project): N = 47,429; No. of 30‐day readmissions = 8,042 | (0, 1) | (0.10, 0.26) | 0.59 | 0 | 83.04 | 5.28 | 11.68 | 710 (35) |
| Linked administrative model, validation sample: N = 47,429; No. of 30‐day readmissions = 8,042 | (0, 1) | (0.08, 0.30) | 0.63 | 0 | 83.04 | 6.94 | 10.01 | 1,414 (40) |

  • Abbreviations: AUC, area under the receiver operating characteristic curve.

  • Max‐rescaled R‐square.

  • Observed rates.

  • Wald chi‐square.

The correlation coefficient of the estimated state‐specific standardized readmission rates from the administrative and medical record models was 0.96, and the proportion of the variance explained by the model was 0.92 (Figure 4).

Figure 4
Comparison of state‐level risk‐standardized readmission rates from medical record and administrative models. Abbreviations: HGLM, hierarchical generalized linear models.

DISCUSSION

We have described the development, validation, and results of a hospital, 30‐day, risk‐standardized readmission model for pneumonia that was created to support current federal transparency initiatives. The model uses administrative claims data from Medicare fee‐for‐service patients and produces results that are comparable to a model based on information obtained through manual abstraction of medical records. We observed an overall 30‐day readmission rate of 17%, and our analyses revealed substantial variation across US hospitals, suggesting that improvement by lower performing institutions is an achievable goal.

Because more than one in six pneumonia patients are rehospitalized shortly after discharge, and because pneumonia hospitalizations represent an enormous expense to the Medicare program, prevention of readmissions is now widely recognized to offer a substantial opportunity to improve patient outcomes while simultaneously lowering health care costs. Accordingly, promotion of strategies to reduce readmission rates has become a key priority for payers and quality‐improvement organizations. These range from policy‐level attempts to stimulate change, such as publicly reporting hospital readmission rates on government websites, to establishing accreditation standards, such as the Joint Commission's requirement to accurately reconcile medications, to the creation of quality‐improvement collaboratives focused on sharing best practices across institutions. Regardless of the approach taken, a valid, risk‐adjusted measure of performance is required to evaluate and track performance over time. The measure we have described meets the National Quality Forum's measure evaluation criteria: it addresses an important clinical topic for which there appear to be significant opportunities for improvement; it is precisely defined and has been subjected to validity and reliability testing; it is risk‐adjusted on the basis of patient clinical factors present at the start of care; it is feasible to produce; and it is understandable by a broad range of potential users.21 Because hospitalists are the physicians primarily responsible for the care of patients with pneumonia at US hospitals, and because they frequently serve as the physician champions for quality‐improvement activities related to pneumonia, it is especially important that they maintain a thorough understanding of the measures and methodologies underlying current efforts to measure hospital performance.

Several features of our approach warrant additional comment. First, we deliberately chose to measure all readmission events rather than attempt to discriminate between potentially preventable and nonpreventable readmissions. From the patient perspective, readmission for any reason is a concern, and limiting the measure to pneumonia‐related readmissions could make it susceptible to gaming by hospitals. Moreover, determining whether a readmission is related to a potential quality problem is not straightforward. For example, a patient with pneumonia whose discharge medications were prescribed incorrectly may be readmitted with a hip fracture following an episode of syncope. It would be inappropriate to treat this readmission as unrelated to the care the patient received for pneumonia. Additionally, while our approach does not presume that every readmission is preventable, the goal is to reduce the risk of readmissions generally (not just in narrowly defined subpopulations), and successful interventions to reduce rehospitalization have typically demonstrated reductions in all‐cause readmission.9, 22 Second, deaths that occurred within 30 days of discharge, yet that were not accompanied by a hospital readmission, were not counted as a readmission outcome. While it may seem inappropriate to treat a postdischarge death as a nonevent (rather than censoring or excluding such cases), alternative analytic approaches, such as using a hierarchical survival model, are not currently computationally feasible with large national data sets. Fortunately, only a relatively small proportion of discharges fell into this category (5.2% of index cases in the 2006 development sample died within 30 days of discharge without being readmitted). An alternative approach to handling the competing outcome of death would have been to use a composite outcome of readmission or death. 
However, we believe that it is important to report the outcomes separately because factors that predict readmission and mortality may differ, and when making comparisons across hospitals it would not be possible to determine whether differences in rate were due to readmission or mortality. Third, while the patient‐level readmission model showed only modest discrimination, we intentionally excluded covariates such as race and socioeconomic status, as well as in‐hospital events and potential complications of care, and whether patients were discharged home or to a skilled nursing facility. While these variables could have improved predictive ability, they may be directly or indirectly related to quality or supply factors that should not be included in a model that seeks to control for patient clinical characteristics. For example, if hospitals with a large share of poor patients have higher readmission rates, then including income in the model will obscure differences that are important to identify. While we believe that the decision to exclude such factors in the model is in the best interest of patients, and supports efforts to reduce health inequality in society more generally, we also recognize that hospitals that care for a disproportionate share of poor patients are likely to require additional resources to overcome these social factors. Fourth, we limited the analysis to patients with a principal diagnosis of pneumonia, and chose not to also include those with a principal diagnosis of sepsis or respiratory failure coupled with a secondary diagnosis of pneumonia. While the broader definition is used by CMS in the National Pneumonia Project, that initiative relied on chart abstraction to differentiate pneumonia present at the time of admission from cases developing as a complication of hospitalization. 
Additionally, we did not attempt to differentiate between community‐acquired and healthcare‐associated pneumonia; however, our approach is consistent with the National Pneumonia Project and the Pneumonia Patient Outcomes Research Team.18 Fifth, while our model estimates readmission rates at the hospital level, we recognize that readmissions are influenced by a complex and extensive range of factors. In this context, greater cooperation between hospitals and other care providers will almost certainly be required to achieve dramatic improvement in readmission rates, which in turn will depend on changes to the way serious illness is paid for. Some options that have recently been described include imposing financial penalties for early readmission, extending the boundaries of case‐based payment beyond hospital discharge, and bundling payments between hospitals and physicians.23-25

Our measure has several limitations. First, our models were developed and validated using Medicare data, and the results may not apply to pneumonia patients less than 65 years of age. However, most patients hospitalized with pneumonia in the US are 65 or older. In addition, we were unable to test the model with a Medicare managed care population, because data are not currently available on such patients. Finally, the medical record‐based validation was conducted by state‐level analysis because the sample size was insufficient to carry this out at the hospital level.

In conclusion, more than 17% of Medicare beneficiaries are readmitted within 30 days following discharge after a hospitalization for pneumonia, and rates vary substantially across institutions. The development of a valid measure of hospital performance and public reporting are important first steps towards focusing attention on this problem. Actual improvement will now depend on whether hospitals and partner organizations are successful at identifying and implementing effective methods to prevent readmission.

References
  1. Jencks SF, Williams MV, Coleman EA. Rehospitalizations among patients in the Medicare fee-for-service program. N Engl J Med. 2009;360(14):1418-1428.
  2. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. 2007.
  3. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital-Based Care in the United States, 2007. 2009. Available at: http://www.hcup-us.ahrq.gov/reports.jsp. Accessed November 7, 2009.
  4. Centers for Medicare 353(3):255-264.
  5. Baker DW, Einstadter D, Husak SS, Cebul RD. Trends in postdischarge mortality and readmissions: has length of stay declined too far? Arch Intern Med. 2004;164(5):538-544.
  6. Vecchiarino P, Bohannon RW, Ferullo J, Maljanian R. Short-term outcomes and their predictors for patients hospitalized with community-acquired pneumonia. Heart Lung. 2004;33(5):301-307.
  7. Dean NC, Bateman KA, Donnelly SM, et al. Improved clinical outcomes with utilization of a community-acquired pneumonia guideline. Chest. 2006;130(3):794-799.
  8. Gleason PP, Meehan TP, Fine JM, Galusha DH, Fine MJ. Associations between initial antimicrobial therapy and medical outcomes for hospitalized elderly patients with pneumonia. Arch Intern Med. 1999;159(21):2562-2572.
  9. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  10. Coleman EA, Parry C, Chalmers S, Min S. The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822-1828.
  11. Corrigan JM, Eden J, Smith BM, eds. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Committee on Enhancing Federal Healthcare Quality Programs. Washington, DC: National Academies Press; 2003.
  12. Medicare.gov—Hospital Compare. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default1(1):2937.
  13. Krumholz HM, Normand ST, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75-85.
  14. NQF-Endorsed® Standards. Available at: http://www.qualityforum.org/Measures_List.aspx. Accessed November 6, 2009.
  15. Houck PM, Bratzler DW, Nsa W, Ma A, Bartlett JG. Timing of antibiotic administration and outcomes for Medicare patients hospitalized with community-acquired pneumonia. Arch Intern Med. 2004;164(6):637-644.
  16. Pope G, Ellis R, Ash A. Diagnostic Cost Group Hierarchical Condition Category Models for Medicare Risk Adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc; 2000. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed November 7, 2009.
  17. Harrell FE Jr. Regression Modeling Strategies: With Applications to Linear Models, Logistic Regression, and Survival Analysis. 1st ed. New York: Springer; 2006.
  18. National Quality Forum—Measure Evaluation Criteria. 2008. Available at: http://www.qualityforum.org/uploadedFiles/Quality_Forum/Measuring_Performance/Consensus_Development_Process%E2%80%99s_Principle/EvalCriteria2008–08‐28Final.pdf?n=4701.
  19. Naylor MD, Brooten D, Campbell R, et al. Comprehensive discharge planning and home follow-up of hospitalized elders: a randomized clinical trial. JAMA. 1999;281(7):613-620.
  20. Davis K. Paying for care episodes and care coordination. N Engl J Med. 2007;356(11):1166-1168.
  21. Luft HS. Health care reform—toward more freedom, and responsibility, for physicians. N Engl J Med. 2009;361(6):623-628.
  22. Rosenthal MB. Beyond pay for performance—emerging models of provider-payment reform. N Engl J Med. 2008;359(12):1197-1200.
Issue
Journal of Hospital Medicine - 6(3)
Page Number
142-150
Display Headline
Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia
Article Source
Copyright © 2010 Society of Hospital Medicine
Correspondence Location
Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut Street, Springfield, MA 01199
Readmission and Mortality [Rates] in Pneumonia

Article Type
Changed
Sun, 05/28/2017 - 20:18
Display Headline
The performance of US hospitals as reflected in risk‐standardized 30‐day mortality and readmission rates for medicare beneficiaries with pneumonia

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002 the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence) can provide insights about quality and help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7-9

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission rates for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12
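The inclusion and exclusion rules above amount to a sequence of filters applied to each index admission. The sketch below illustrates the logic for the mortality cohort; the field names are hypothetical (the actual measure operates on Medicare claims, not this toy schema):

```python
# Sketch of the mortality-measure cohort filters described above.
# Field names are invented for illustration.
PNEUMONIA_PREFIXES = ("480", "481", "482", "483", "485", "486", "487.0")

def eligible_for_mortality_measure(adm):
    """Return True if an admission (a dict) enters the mortality cohort."""
    if adm["age"] < 65:
        return False
    if not adm["principal_dx"].startswith(PNEUMONIA_PREFIXES):
        return False  # pneumonia must be the principal diagnosis
    if adm["hospice_prior_year_or_on_admission"]:
        return False  # exclusion applies to the mortality measure only
    if adm["left_against_medical_advice"]:
        return False
    if adm["discharged_alive"] and adm["los_days"] < 1:
        return False  # very short stays raise doubt about the diagnosis
    if not adm["records_complete"]:
        return False  # 1 year prior + 30 days after needed for risk and outcome
    return True
```

The readmission cohort would drop the hospice and length-of-stay filters, per the rationale in the text.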

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13
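Conceptually, the grouping step is a many-to-one lookup from diagnosis codes to condition categories. A toy illustration follows; the two mappings are invented examples, not entries from the actual Hierarchical Condition Category tables:

```python
# Toy illustration of Condition Category grouping: each of the ~15,000
# ICD-9-CM codes maps to one of 189 clinically coherent categories.
# These mappings and labels are hypothetical.
ICD9_TO_CONDITION_CATEGORY = {
    "428.0": "CHF",
    "250.00": "Diabetes",
}

def comorbidity_profile(prior_year_codes):
    """Collapse a patient's prior-year diagnoses into condition categories."""
    return {ICD9_TO_CONDITION_CATEGORY[c]
            for c in prior_year_codes
            if c in ICD9_TO_CONDITION_CATEGORY}
```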

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).
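The transfer-attribution rule at the end of the paragraph can be stated compactly: for a chain of acute-care stays linked by transfers, mortality belongs to the first hospital and readmission to the last. A minimal sketch:

```python
def attribute_outcomes(transfer_chain):
    """Given an ordered chain of acute-care hospitals linked by transfers,
    assign the death outcome to the first (admitting) hospital and the
    readmission outcome to the last hospital, which discharges the patient
    to a nonacute setting. Sketch of the attribution rule described above."""
    return {
        "mortality": transfer_chain[0],
        "readmission": transfer_chain[-1],
    }
```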

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.
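The predicted-to-expected construction can be sketched numerically. In this simplified illustration (a plain logistic form standing in for the fitted hierarchical model), "predicted" sums each patient's risk including the hospital's estimated random intercept, while "expected" sets that intercept to the average (zero), i.e., the same patients treated at an average hospital:

```python
import math

def patient_probability(linear_predictor, hospital_intercept=0.0):
    """Logistic probability for one patient; the hospital-specific intercept
    is the random effect estimated by the hierarchical model."""
    return 1.0 / (1.0 + math.exp(-(linear_predictor + hospital_intercept)))

def risk_standardized_rate(linear_predictors, hospital_intercept, national_rate):
    """(predicted / expected) * national unadjusted rate: the indirect
    standardization described above."""
    predicted = sum(patient_probability(x, hospital_intercept)
                    for x in linear_predictors)
    expected = sum(patient_probability(x, 0.0) for x in linear_predictors)
    return (predicted / expected) * national_rate
```

A hospital whose estimated intercept is zero (exactly average) recovers the national rate; a positive intercept yields a rate above it.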

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Unlike a hospital‐level analysis, hospital referral regions represent regional markets for tertiary care; they are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes, and they provide a more detailed look at variation in outcomes than results at the state level.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18
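The two estimation ideas in this paragraph, precision weighting and shrinkage toward the mean, can each be written in a line or two. This is an illustrative sketch of the concepts, not the SAS GLIMMIX implementation the authors used:

```python
def inverse_variance_mean(rates, variances):
    """Precision-weighted mean of hospital rates: hospitals with larger
    caseloads have smaller-variance estimates and carry more weight."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

def shrink(observed_rate, population_mean, reliability):
    """Bayesian shrinkage sketch: a low-volume hospital (reliability near 0)
    is pulled toward the population mean; a high-volume hospital
    (reliability near 1) keeps an estimate close to its observed rate."""
    return reliability * observed_rate + (1.0 - reliability) * population_mean
```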

To determine whether a hospital's performance is significantly different from the national rate we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period are excluded from this categorization on Hospital Compare.
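The categorization logic reduces to comparing the interval estimate against the national crude rate, with a minimum-volume guard. A sketch, assuming a two-sided 95% interval:

```python
def categorize(interval_low, interval_high, national_rate, n_cases):
    """Hospital Compare categorization sketch: a hospital is 'better' or
    'worse' only when its 95% interval estimate for the risk-standardized
    rate lies entirely below or above the national crude rate."""
    if n_cases < 25:
        return "not categorized"
    if interval_high < national_rate:
        return "better than the US national rate"
    if interval_low > national_rate:
        return "worse than the US national rate"
    return "no different than the US national rate"
```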

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0% while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital one standard deviation above the national average were 1.68 times the odds for a patient treated at a hospital one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates

                                              Mortality      Readmission
Patients (n)                                  1,118,583      1,161,817
Hospitals (n)                                 4,788          4,813
Patient age, years, median (Q1, Q3)           81 (74, 86)    80 (74, 86)
Nonwhite, %                                   11.1           11.1
Hospital case volume, median (Q1, Q3)         168 (77, 323)  174 (79, 334)
Risk‐standardized hospital rate, mean (SD)    11.2 (1.2)     18.3 (0.9)
Minimum                                       6.7            13.6
1st percentile                                7.5            14.9
5th percentile                                8.5            15.8
10th percentile                               9.0            16.4
25th percentile                               10.0           17.2
Median                                        11.1           18.2
75th percentile                               12.3           19.2
90th percentile                               13.5           20.4
95th percentile                               14.4           21.1
99th percentile                               16.1           22.8
Maximum                                       20.9           26.7
Model fit statistics
c‐Statistic                                   0.72           0.63
Intrahospital correlation                     0.07           0.03

Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4% while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital one standard deviation above the national average were 1.40 times the odds for a patient treated at a hospital one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse and 333 (7.0%) had less than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
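The dispersion comparison above follows directly from the figures in Table 1; a few lines of arithmetic reproduce the reported values:

```python
def coefficient_of_variation(sd, mean):
    """CV (%) = 100 * SD / mean; unlike the SD, it is independent of scale."""
    return 100.0 * sd / mean

# Values from Table 1 and the text above:
cv_mortality = coefficient_of_variation(1.2, 11.2)       # about 10.7
cv_readmission = coefficient_of_variation(0.9, 18.3)     # about 4.9
relative_spread_mortality = (13.5 - 9.0) / 13.5          # about 33%
relative_spread_readmission = (20.4 - 16.4) / 20.4       # about 20%
```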

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. The highest mortality rates were found across the entire country, including parts of Northern New England, the Mid‐ and South Atlantic, the East and West South Central, the East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid‐ and South Atlantic, the East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSRR, risk‐standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more than readmission rates did.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally, our mortality and readmission models were validated against those based on medical record data, and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that this is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with a length of stay of less than 1 day from the mortality analysis to reduce the risk of including patients in the measure who did not actually have pneumonia, we did not exclude them from the readmission analysis because very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions.
However, if the experience in acute myocardial infarction can serve as a guide, then it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically concentrated than mortality rates; however, it is possible that this may be related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital-Based Care in the United States, 2007. 2009. Available at: http://www.hcup-us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP). 2007. Available at: http://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988-2002. JAMA. 2005;294(21):2712-2719.
  4. Heron M. Deaths: leading causes for 2006. Natl Vital Stat Rep. 2010;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia. Available at: http://www.qualitynet.org. Accessed May 13, 2010.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182-189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  8. Medicare.gov - Hospital Compare. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default. Accessed November 6, 2009.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk-Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30-Day Mortality Model for Pneumonia. Yale University; 2006. Available at: http://www.qualitynet.org. Accessed June 2010.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670-1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals — the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371-404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153-157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Issue
Journal of Hospital Medicine - 5(6)
Page Number
E12-E18
Legacy Keywords
community‐acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002 the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence), can provide insights about quality, and can help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.79

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission for acute myocardial infarction and heart failure; a description of the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models, which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.
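
A minimal sketch of the predicted-to-expected calculation, assuming a fitted model supplies each patient's fixed-effect linear predictor (`x_beta`) and each hospital's estimated random intercept (`alpha_j`). These names and values are illustrative, not the measure's actual code:

```python
import math

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

def risk_standardized_rate(patients, alpha_j, national_rate):
    # "Predicted": each patient's risk including the hospital-specific intercept.
    predicted = sum(invlogit(p["x_beta"] + alpha_j) for p in patients)
    # "Expected": the same case mix at an average hospital (intercept = 0).
    expected = sum(invlogit(p["x_beta"]) for p in patients)
    # Indirect standardization: ratio times the national unadjusted rate.
    return (predicted / expected) * national_rate

patients = [{"x_beta": -2.0}, {"x_beta": -1.5}, {"x_beta": -2.5}]
# A hospital with a positive intercept ends up above the national rate.
rate = risk_standardized_rate(patients, alpha_j=0.3, national_rate=0.116)
```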

The model for mortality has a c‐statistic of 0.72, whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63, whereas a model based on medical record review had a c‐statistic of 0.59. The mortality and readmission models produce state‐level mortality and readmission rate estimates similar to those of the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16
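
For reference, the c-statistic reported here is the usual rank-based discrimination measure: the probability that a randomly chosen patient with the event received a higher predicted risk than a randomly chosen patient without it (ties count one half). A self-contained illustration, not the validation code itself:

```python
from itertools import product

def c_statistic(preds, outcomes):
    # Compare every (event, non-event) pair of predicted risks.
    events = [p for p, y in zip(preds, outcomes) if y == 1]
    nonevents = [p for p, y in zip(preds, outcomes) if y == 0]
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e, n in product(events, nonevents))
    return concordant / (len(events) * len(nonevents))

c_statistic([0.9, 0.6, 0.4, 0.2], [1, 1, 0, 0])  # -> 1.0 (perfect discrimination)
c_statistic([0.5, 0.5], [1, 0])                  # -> 0.5 (no discrimination)
```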

Identification of Geographic Regions

To characterize patterns of performance geographically, we identified the hospital referral region for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Unlike hospitals, the 306 hospital referral regions represent regional markets for tertiary care; they are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes, and they provide a more detailed look at variation in outcomes than state‐level results.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18
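
The two estimation ideas in this paragraph, inverse-variance weighting and shrinkage toward the population mean, can be sketched with toy numbers. The shrinkage shown is a simplified normal-normal approximation, not the GLIMMIX machinery, and all values are hypothetical:

```python
def inverse_variance_mean(rates, variances):
    # Hospitals with more precise estimates (smaller variance) get more weight.
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

def shrink(observed_rate, population_mean, n, tau2, sigma2):
    # Simplified normal-normal shrinkage: b -> 0 as n gets small, so
    # low-volume hospitals are pulled toward the population mean.
    b = tau2 / (tau2 + sigma2 / n)
    return population_mean + b * (observed_rate - population_mean)

# A precise (low-variance) hospital dominates the weighted average.
m = inverse_variance_mean([0.10, 0.14], [0.0001, 0.01])

# A 20-case hospital is shrunk toward the mean far more than a 2,000-case one.
small = shrink(0.16, 0.112, n=20, tau2=0.0001, sigma2=0.1)
large = shrink(0.16, 0.112, n=2000, tau2=0.0001, sigma2=0.1)
```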

To determine whether a hospital's performance is significantly different from the national rate, we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period are excluded from this categorization on Hospital Compare.
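
The comparison rule can be expressed as a short sketch (the interval values below are hypothetical):

```python
def categorize(ci_lower, ci_upper, national_rate, n_cases):
    # Hospitals with fewer than 25 cases over the period are not categorized.
    if n_cases < 25:
        return "not categorized"
    # Interval entirely below the national crude rate: better than it.
    if ci_upper < national_rate:
        return "better than the US national rate"
    # Interval entirely above: worse than it.
    if ci_lower > national_rate:
        return "worse than the US national rate"
    # Interval overlaps the national rate: no different.
    return "no different than the US national rate"

categorize(0.09, 0.11, 0.116, 300)  # -> "better than the US national rate"
categorize(0.10, 0.13, 0.116, 300)  # -> "no different than the US national rate"
categorize(0.12, 0.15, 0.116, 10)   # -> "not categorized"
```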

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis, 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and rates ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0%, while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital that was one standard deviation above the national average were 1.68 times higher than those of a patient treated at a hospital that was one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Table 1. Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates

                                                Mortality       Readmission
Patients (n)                                    1,118,583       1,161,817
Hospitals (n)                                   4,788           4,813
Patient age, years, median (Q1, Q3)             81 (74, 86)     80 (74, 86)
Nonwhite, %                                     11.1            11.1
Hospital case volume, median (Q1, Q3)           168 (77, 323)   174 (79, 334)
Risk‐standardized hospital rate, %, mean (SD)   11.2 (1.2)      18.3 (0.9)
  Minimum                                       6.7             13.6
  1st percentile                                7.5             14.9
  5th percentile                                8.5             15.8
  10th percentile                               9.0             16.4
  25th percentile                               10.0            17.2
  Median                                        11.1            18.2
  75th percentile                               12.3            19.2
  90th percentile                               13.5            20.4
  95th percentile                               14.4            21.1
  99th percentile                               16.1            22.8
  Maximum                                       20.9            26.7
Model fit statistics
  c‐Statistic                                   0.72            0.63
  Intrahospital correlation                     0.07            0.03

Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3,968 (83.7%) were no different than the national average, 221 (4.6%) were worse, and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis, 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4%, while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital that was one standard deviation above the national average were 1.40 times higher than the odds of all‐cause readmission if treated at a hospital that was one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4,203 (88.2%) were no different than the national average, 163 (3.4%) were worse, and 333 (7.0%) had fewer than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than that of those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that, unlike the standard deviation, is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
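
The quoted coefficients of variation follow directly from the means and standard deviations in Table 1:

```python
# Coefficient of variation: 100 * SD / mean, using Table 1's values.
def cv(sd, mean):
    return 100.0 * sd / mean

mortality_cv = cv(1.2, 11.2)    # ~10.7
readmission_cv = cv(0.9, 18.3)  # ~4.9
```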

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. The highest mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, the East and West South Central, the East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, the East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSRR, risk‐standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. Compared with the large differences across hospitals reported in these earlier process‐oriented studies, however, the magnitude of the differences we observed in mortality and readmission rates was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative differences in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance were 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients only, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally, our mortality and readmission models were validated against those based on medical record data, and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that this is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with a length of stay of less than 1 day from the mortality analysis to reduce the risk of including patients in the measure who did not actually have pneumonia, we did not exclude them from the readmission analysis because a very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions.
However, if the experience in acute myocardial infarction can serve as a guide, then it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically concentrated than mortality rates; however, it is possible that this may be related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28–30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002, the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence) can provide insights about quality and can help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7–9

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission rates for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.
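
A sketch of this inclusion check, using the ICD-9-CM code families listed above; the record layout and function name are hypothetical:

```python
import re

# Principal-diagnosis codes from the text: 480.x, 481, 482.xx, 483.x,
# 485, 486, and 487.0. The pattern allows one or two trailing digits
# where the text writes X or XX.
PNEUMONIA_CODE = re.compile(r"^(480\.\d|481|482\.\d{1,2}|483\.\d|485|486|487\.0)$")

def eligible(age, principal_dx):
    # Patients must be 65 or older with a qualifying principal diagnosis.
    return age >= 65 and PNEUMONIA_CODE.match(principal_dx) is not None

eligible(70, "486")    # True: pneumonia, organism unspecified
eligible(70, "487.1")  # False: not a qualifying code
eligible(60, "486")    # False: under 65
```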

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Unlike a hospital‐level analysis, the hospital referral regions represent regional markets for tertiary care and are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes and provide a more detailed look at variation in outcomes than results at the state level.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18

To determine whether a hospital's performance is significantly different than the national rate we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period, are excluded from this categorization on Hospital Compare.

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0% while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital that was one standard deviation above the national average was 1.68 times higher than that of a patient treated at a hospital that was one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates
 MortalityReadmission
  • Abbreviation: SD, standard deviation.

Patients (n)11185831161817
Hospitals (n)47884813
Patient age, years, median (Q1, Q3)81 (74,86)80 (74,86)
Nonwhite, %11.111.1
Hospital case volume, median (Q1, Q3)168 (77,323)174 (79,334)
Risk‐standardized hospital rate, mean (SD)11.2 (1.2)18.3 (0.9)
Minimum6.713.6
1st percentile7.514.9
5th percentile8.515.8
10th percentile9.016.4
25th percentile10.017.2
Median11.118.2
75th percentile12.319.2
90th percentile13.520.4
95th percentile14.421.1
99th percentile16.122.8
Maximum20.926.7
Model fit statistics  
c‐Statistic0.720.63
Intrahospital Correlation0.070.03

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1,Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4% while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital that was one standard deviation above the national average was 1.40 times higher than the odds of all‐cause readmission if treated at a hospital that was one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse and 333 (7.0%) had less than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. Highest mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, East and the West South Central, East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSMR, risk‐standardized mortality rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative difference in risk-standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the relative difference across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk-standardized mortality rates varied more than readmission rates did.
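The percentile-spread figures above can be checked with simple arithmetic. This is a minimal sketch: the 10th- and 90th-percentile mortality rates (9.0% and 13.5%) come from the text, but the relative-difference formula is our assumption about how the 50% figure was derived.

```python
def relative_difference(high_rate, low_rate):
    """Relative difference of the 90th-percentile rate over the 10th-percentile rate,
    expressed as a fraction of the lower rate."""
    return (high_rate - low_rate) / low_rate

# 10th-90th percentile risk-standardized pneumonia mortality rates from the text
pneumonia_spread = relative_difference(13.5, 9.0)
print(f"{pneumonia_spread:.0%}")  # prints "50%"
```

The same formula applied to the cited acute myocardial infarction and heart failure spreads would recover the 25% and 39% figures if the underlying percentile rates stand in those ratios.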

Our study has a number of limitations. First, the analysis was restricted to Medicare patients, and our findings may not be generalizable to younger patients. Second, our risk-adjustment methods relied on claims data rather than clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission, and our mortality and readmission models were validated against those based on medical record data, with highly correlated outputs between the 2 approaches.15, 24, 25 Third, our study was restricted to patients with a principal diagnosis of pneumonia and therefore did not include those whose principal diagnosis was sepsis or respiratory failure with a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that it likely limited our study to patients with less severe disease and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. Fourth, while we excluded patients with a 1-day length of stay from the mortality analysis to reduce the risk of including patients who did not actually have pneumonia, we did not exclude them from the readmission analysis because a very short length of stay may itself be a risk factor for readmission. An additional limitation is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions.
However, if the experience in acute myocardial infarction can serve as a guide, it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically concentrated than mortality rates; however, it is possible that this is related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and the region where they receive care. Efforts to improve population-level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital-based Care in the United States, 2007 [Internet]. 2009 [cited 2009 Nov 7]. Available at: http://www.hcup-us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP) [Internet]. 2007 [cited 2010 May 13]. Available at: http://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988-2002. JAMA. 2005;294(21):2712-2719.
  4. Heron M. Deaths: Leading Causes for 2006. NVSS [Internet]. 2010 Mar 31;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia [Internet]. [cited 2010 May 13]. Available at: http://www.qualitynet.org/dcs/ContentServer?cid=108981596702326(1):7585.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182-189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  8. Medicare.gov - Hospital Compare [Internet]. [cited 2009 Nov 6]. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default 2010. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2010. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2000 [cited 2009 Nov 7]. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed June 2010.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk-Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30-Day Mortality Model for Pneumonia [Internet]. Yale University; 2006. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2008. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page1999.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act [Internet]. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670-1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals — the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371-404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153-157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Issue
Journal of Hospital Medicine - 5(6)
Page Number
E12-E18
Display Headline
The performance of US hospitals as reflected in risk-standardized 30-day mortality and readmission rates for Medicare beneficiaries with pneumonia
Legacy Keywords
community‐acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient
Article Source

Copyright © 2010 Society of Hospital Medicine

Correspondence Location
Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut St., Springfield, MA 01199