Association Between Postdischarge Emergency Department Visitation and Readmission Rates

Yongfei Wang, MS, Mathematica Policy Research, Inc., Cambridge, Massachusetts


Hospital readmissions for acute myocardial infarction (AMI), heart failure, and pneumonia have become central to quality-measurement efforts by the Centers for Medicare & Medicaid Services (CMS), which seek to improve hospital care transitions through public reporting and payment programs.1 Most current measures are limited to readmissions that require inpatient hospitalization and do not capture return visits to the emergency department (ED) that do not result in readmission but rather ED discharge. These visits may reflect important needs for acute, unscheduled care during the vulnerable posthospitalization period.2-5 While previous research has suggested that nearly 10% of patients may return to the ED following hospital discharge without readmission, the characteristics of these visits among Medicare beneficiaries and the implications for national care-coordination quality-measurement initiatives have not been explored.6,7

As the locus of acute outpatient care and the primary portal of hospital admissions and readmissions, ED visits following hospital discharge may convey meaningful information about posthospitalization care transitions.8,9 In addition, recent reviews and perspectives have highlighted the role of ED care-coordination services as interventions to reduce inpatient hospitalizations and improve care transitions,10,11 yet no empirical studies have evaluated the relationship between these unique care-coordination opportunities in the ED and care-coordination outcomes, such as hospital readmissions. As policymakers seek to develop accountability measures that capture the totality of acute, unscheduled visits following hospital discharge, describing the relationship between ED visits and readmissions will be essential to providers for benchmarking and to policymakers and payers seeking to reduce the total cost of care.12,13

Accordingly, we sought to characterize the frequency, diagnoses, and hospital-level variation in treat-and-discharge ED visitation following hospital discharge for 3 conditions for which hospital readmission is publicly reported by the CMS: AMI, heart failure, and pneumonia. We also sought to evaluate the relationship between hospital-level ED visitation following hospital discharge and publicly reported, risk-standardized readmission rates (RSRRs).

METHODS

Study Design

This study was a cross-sectional analysis of Medicare beneficiaries discharged alive following hospitalization for AMI, heart failure, and pneumonia between July 2011 and June 2012.

Selection of Participants

We used Medicare Standard Analytic Files to identify inpatient hospitalizations for each disease cohort based on principal discharge diagnoses. Each condition-specific cohort was constructed to be consistent with the CMS’s readmission measures using International Classification of Diseases, 9th Revision-Clinical Modification codes to identify AMI, heart failure, and pneumonia discharges.1 We included only patients who were enrolled in fee-for-service (FFS) Medicare parts A and B for 12 months prior to their index hospitalization to maximize the capture of diagnoses for risk adjustment. Each cohort included only patients who were discharged alive while maintaining FFS coverage for at least 30 days following hospital discharge to minimize bias in outcome ascertainment. We excluded patients who were discharged against medical advice. All contiguous admissions that were identified in a transfer chain were considered to be a single admission. Hospitals with fewer than 25 condition-specific index hospital admissions were excluded from this analysis for consistency with publicly reported measures.1
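The cohort construction described above can be sketched as a simple filter. This is an illustrative sketch only: the record layout and field names are hypothetical and do not reflect the actual Medicare Standard Analytic File schema.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical record layout for an index admission; the field names are
# illustrative assumptions, not the Medicare Standard Analytic File schema.
@dataclass
class Admission:
    hospital_id: str
    discharged_alive: bool
    ffs_months_prior: int  # months of continuous FFS Parts A+B enrollment before admission
    ffs_days_post: int     # days of FFS coverage maintained after discharge
    left_ama: bool         # discharged against medical advice

def eligible(adm: Admission) -> bool:
    """Apply the patient-level inclusion and exclusion criteria described above."""
    return (adm.discharged_alive
            and adm.ffs_months_prior >= 12
            and adm.ffs_days_post >= 30
            and not adm.left_ama)

def eligible_hospitals(admissions, min_volume=25):
    """Keep only hospitals with at least 25 condition-specific index admissions."""
    kept = [a for a in admissions if eligible(a)]
    volume = Counter(a.hospital_id for a in kept)
    return [a for a in kept if volume[a.hospital_id] >= min_volume]
```

Transfer-chain collapsing is omitted here; in the study, contiguous admissions in a transfer chain were first merged into a single index admission before these filters were applied.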

Measurements

We measured postdischarge, treat-and-release ED visits that occurred at any hospital within 30 days of hospital discharge from the index hospitalization. ED visits were identified as a hospital outpatient claim for ED services using hospital outpatient revenue center codes 0450, 0451, 0452, 0456, and 0981. This definition is consistent with those of previous studies.3,14 We defined postdischarge ED visits as treat-and-discharge visits, that is, visits that did not result in inpatient readmission or an observation stay. As with readmission measures, only 1 postdischarge ED visit was counted toward the hospital-level outcome in patients with multiple ED visits within the 30 days following hospital discharge. We defined readmission as the first unplanned, inpatient hospitalization occurring at any hospital within the 30-day period following discharge. Any subsequent inpatient admission following the 30-day period was considered a distinct index admission if it met the inclusion criteria. Consistent with CMS methods, unplanned, inpatient readmissions are from any source and are not limited to patients who were first evaluated in the ED.
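A minimal sketch of this visit-identification logic follows, assuming a simplified claims representation. The tuple layout and the approximation of "treat-and-discharge" by excluding visit dates that coincide with a readmission are illustrative assumptions, not the study's exact implementation.

```python
from datetime import date

# Hospital outpatient revenue center codes identifying ED services, as listed above.
ED_REVENUE_CODES = {"0450", "0451", "0452", "0456", "0981"}

def first_postdischarge_ed_visit(discharge_date, outpatient_claims,
                                 readmission_dates, window_days=30):
    """Return the date of the first treat-and-discharge ED visit within
    `window_days` of discharge, or None.

    `outpatient_claims` is a hypothetical list of (service_date, revenue_code)
    tuples; a visit on a date that led to an inpatient readmission is not
    counted (a crude proxy for excluding visits that ended in admission).
    Only the first qualifying visit counts toward the hospital-level outcome.
    """
    ed_dates = sorted(
        d for d, code in outpatient_claims
        if code in ED_REVENUE_CODES
        and 0 < (d - discharge_date).days <= window_days
        and d not in readmission_dates)
    return ed_dates[0] if ed_dates else None
```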


Outcomes

We describe hospital-level, postdischarge ED visitation as the risk-standardized postdischarge ED visit rate. The general construct of this measure is consistent with those of prior studies that define postdischarge ED visitation as the proportion of index admissions followed by a treat-and-discharge ED visit without hospital readmission2,3; however, this outcome also incorporates a risk-standardization model with covariates that are identical to the risk-standardization approach that is used for readmission measurement.

We describe hospital-level readmission by calculating RSRRs consistent with CMS readmission measures, which are endorsed by the National Quality Forum and used for public reporting.15-17 Detailed technical documentation, including the SAS code used to replicate hospital-level measures of readmission, is available publicly through the CMS QualityNet portal.18

We calculated risk-standardized postdischarge ED visit rates and RSRRs as the ratio of the predicted number of postdischarge ED visits or readmissions for a hospital, given its observed case mix, to the expected number of postdischarge ED visits or readmissions based on the nation’s performance with that hospital’s case mix. This approach estimates a distinct risk-standardized postdischarge ED visit rate and RSRR for each hospital using hierarchical generalized linear models (HGLMs) with a logit link and a first-level adjustment for age, sex, and 29 clinical covariates for AMI, 35 clinical covariates for heart failure, and 38 clinical covariates for pneumonia. Each clinical covariate is identified based on inpatient and outpatient claims during the 12 months prior to the index hospitalization. The second level of the HGLM includes a random hospital-level intercept. This approach accounts for the correlated nature of observed rates within a hospital and reflects the assumption that, after adjustment for patient characteristics and sampling variability, the remaining variation in postdischarge ED visit rates or readmission rates reflects hospital quality.
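The predicted-to-expected calculation can be illustrated as follows, assuming the HGLM has already been fitted. The final multiplication by a national rate mirrors the scaling used in CMS readmission measures but is an assumption here, as are all parameter values.

```python
import math

def sigmoid(x):
    """Inverse of the logit link."""
    return 1.0 / (1.0 + math.exp(-x))

def risk_standardized_rate(hospital_intercept, national_intercept,
                           patient_xb, national_rate):
    """Illustrative predicted-over-expected ratio from a fitted logit HGLM.

    `patient_xb` holds each patient's risk-adjustment linear predictor x'beta
    (age, sex, and the condition-specific clinical covariates).  Predicted
    counts use the hospital's estimated random intercept; expected counts use
    the national (average) intercept applied to the same case mix.  Scaling by
    `national_rate` converts the ratio to a rate, as in CMS methodology.
    """
    predicted = sum(sigmoid(hospital_intercept + xb) for xb in patient_xb)
    expected = sum(sigmoid(national_intercept + xb) for xb in patient_xb)
    return (predicted / expected) * national_rate
```

A hospital whose random intercept equals the national intercept gets exactly the national rate; an intercept above the national average yields a higher risk-standardized rate for the same case mix.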

Analysis

In order to characterize treat-and-discharge postdischarge ED visits, we first described the clinical conditions that were evaluated during the first postdischarge ED visit. Based on the principal discharge diagnosis, ED visits were grouped into clinically meaningful categories using the Agency for Healthcare Research and Quality Clinical Classifications Software (CCS).19 We also report hospital-level variation in risk-standardized postdischarge ED visit rates for AMI, heart failure, and pneumonia.

Next, we examined the relationship between hospital characteristics and risk-standardized postdischarge ED visit rates. We linked hospital characteristics from the American Hospital Association (AHA) Annual Survey to the study dataset, including the following: safety-net status, teaching status, and urban or rural status. Consistent with prior work, hospital safety-net status was defined as a hospital Medicaid caseload greater than 1 standard deviation above the mean Medicaid caseload in the hospital’s state. Approximately 94% of the hospitals included in the 3 condition cohorts in the dataset had complete data in the 2011 AHA Annual Survey to be included in this analysis.

We evaluated the relationship between postdischarge ED visit rates and hospital readmission rates in 2 ways. First, we calculated Spearman rank correlation coefficients between hospital-level, risk-standardized postdischarge ED visit rates and RSRRs. Second, we calculated hospital-level variation in RSRRs based on the strata of risk-standardized postdischarge ED visit rates. Given the normal distribution of postdischarge ED visit rates, we grouped hospitals by quartile of postdischarge ED visit rates and 1 group for hospitals with no postdischarge ED visits.
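For illustration, the Spearman rank correlation used in the first analysis is simply a Pearson correlation computed on ranks. This pure-Python sketch, equivalent in spirit to what SAS's PROC CORR (SPEARMAN option) computes, handles ties by average ranking; it is an illustration, not the study's code.

```python
import math

def ranks(values):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the run of values tied with values[order[i]].
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = math.sqrt(sum((a - mx) ** 2 for a in rx))
    sy = math.sqrt(sum((b - my) ** 2 for b in ry))
    return cov / (sx * sy)
```

A perfectly monotone inverse relationship between two hospital-level rates would yield a coefficient of −1; the moderate inverse correlations reported in the Results fall well short of that.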

Based on preliminary analyses indicating a relationship between hospital size, measured by condition-specific index hospitalization volume, and postdischarge treat-and-discharge ED visit rates, all descriptive statistics and correlations reported are weighted by the volume of condition-specific index hospitalizations. The study was approved by the Yale University Human Research Protection Program. All analyses were conducted using SAS 9.1 (SAS Institute Inc, Cary, NC). The analytic plan and results reported in this work are in compliance with the Strengthening the Reporting of Observational Studies in Epidemiology checklist.20

RESULTS

During the 1-year study period, we included a total of 157,035 patients who were hospitalized at 1656 hospitals for AMI, 391,209 at 3044 hospitals for heart failure, and 342,376 at 3484 hospitals for pneumonia. Details of study cohort creation are available in supplementary Table 1. After hospitalization for AMI, 14,714 patients experienced a postdischarge ED visit (8.4%) and 27,214 an inpatient readmission (17.3%) within 30 days of discharge; 31,621 (7.6%) and 88,106 (22.5%) patients after hospitalization for heart failure and 26,681 (7.4%) and 59,352 (17.3%) patients after hospitalization for pneumonia experienced a postdischarge ED visit and an inpatient readmission within 30 days of discharge, respectively.

Postdischarge ED visits were for a wide variety of conditions, with the top 10 CCS categories comprising 44% of postdischarge ED visits following AMI hospitalizations, 44% following heart failure hospitalizations, and 41% following pneumonia hospitalizations (supplementary Table 2). The first postdischarge ED visit was rarely for the same condition as the index hospitalization in the AMI cohort (224 visits; 1.5%) and the pneumonia cohort (1401 visits; 5.3%). Among patients who were originally admitted for heart failure, 10.6% of the first postdischarge ED visits were also for congestive heart failure. However, the first postdischarge ED visit was commonly for associated conditions, such as coronary artery disease in the case of AMI or chronic obstructive pulmonary disease in the case of pneumonia, although these related conditions did not comprise the majority of postdischarge ED visitation.

We found wide hospital-level variation in postdischarge ED visit rates for each condition: AMI (median: 8.3%; 5th and 95th percentile: 2.8%-14.3%), heart failure (median: 7.3%; 5th and 95th percentile: 3.0%-13.3%), and pneumonia (median: 7.1%; 5th and 95th percentile: 2.4%-13.2%; supplementary Table 3). The variation persisted after accounting for hospital case mix, as evidenced in the supplementary Figure, which describes hospital variation in risk-standardized postdischarge ED visit rates. This variation was statistically significant (P < .001), as demonstrated by the isolated relationship between the random effect and the outcome (AMI: random effect estimate 0.0849 [95% confidence interval (CI), 0.0832 to 0.0866]; heart failure: random effect estimate 0.0796 [95% CI, 0.0784 to 0.0809]; pneumonia: random effect estimate 0.0753 [95% CI, 0.0741 to 0.0764]).

Across all 3 conditions, hospitals located in rural areas had significantly higher risk-standardized postdischarge ED visit rates than hospitals located in urban areas (10.1% vs 8.6% for AMI, 8.4% vs 7.5% for heart failure, and 8.0% vs 7.4% for pneumonia). In comparison to teaching hospitals, nonteaching hospitals had significantly higher risk-standardized postdischarge ED visit rates following hospital discharge for pneumonia (7.6% vs 7.1%). Safety-net hospitals also had higher risk-standardized postdischarge ED visitation rates following discharge for heart failure (8.4% vs 7.7%) and pneumonia (7.7% vs 7.3%). Risk-standardized postdischarge ED visit rates were higher in publicly owned hospitals than in nonprofit or privately owned hospitals for heart failure (8.0% vs 7.5% in nonprofit hospitals or 7.5% in private hospitals) and pneumonia (7.7% vs 7.4% in nonprofit hospitals and 7.3% in private hospitals; Table).



Among hospitals with RSRRs that were publicly reported by CMS, we found a moderate inverse correlation between risk-standardized postdischarge ED visit rates and hospital RSRRs for each condition: AMI (r = −0.23; 95% CI, −0.29 to −0.19), heart failure (r = −0.29; 95% CI, −0.34 to −0.27), and pneumonia (r = −0.18; 95% CI, −0.22 to −0.15; Figure).


DISCUSSION

Across a national cohort of Medicare beneficiaries, we found frequent treat-and-discharge ED utilization following hospital discharge for AMI, heart failure, and pneumonia, suggesting that publicly reported readmission measures are capturing only a portion of postdischarge acute-care use. Our findings confirm prior work describing a 30-day postdischarge ED visit rate of 8% to 9% among Medicare beneficiaries for all hospitalizations in several states.3,6 While many of the first postdischarge ED visits were for conditions related to the index hospitalization, the majority represented acute, unscheduled visits for different diagnoses. These findings are consistent with prior work studying inpatient readmissions and observation readmissions that found similar heterogeneity in the clinical reasons for hospital return.21,22

We also described substantial hospital-level variation in risk-standardized ED postdischarge rates. Prior work by Vashi et al.3 demonstrated substantial variation in observed postdischarge ED visit rates and inpatient readmissions following hospital discharge between clinical conditions in a population-level study. Our work extends these findings by demonstrating hospital-level variation for 3 conditions of high volume and substantial policy importance after accounting for differences in hospital case mix. Interestingly, our work also found similar rates of postdischarge ED treat-and-discharge visitation as recent work by Sabbatini et al.23 analyzing an all-payer, adult population with any clinical condition. Taken together, these studies show the substantial volume of postdischarge acute-care utilization in the ED not captured by existing readmission measures.

We found several hospital characteristics of importance in describing variation in postdischarge ED visitation rates. Notably, hospitals located in rural areas and safety-net hospitals demonstrated higher postdischarge ED visitation rates. This may reflect a higher use of the ED as an acute, unscheduled care access point in rural communities without access to alternative acute diagnostic and treatment services.24 Similarly, safety-net hospitals may be more likely to provide unscheduled care in the ED setting for patients with poor access to primary care. Yet, consistent with prior work, our results also indicate that these differences do not result in different readmission rates.25 Regarding hospital teaching status, unlike prior work suggesting that teaching hospitals care for more safety-net Medicare beneficiaries,26 our work found opposite patterns of postdischarge ED visitation between hospital teaching and safety-net status following pneumonia hospitalization. This may reflect differences in the organization of acute care, as patients with limited access to unscheduled primary and specialty care in safety-net communities utilize the ED, whereas patients in teaching-hospital communities may be able to access hospital-based clinics for care.

Contrary to the expectations of many clinicians and policymakers, we found an inverse relationship between postdischarge ED visit rates and readmission rates. While the cross-sectional design of our study cannot provide a causal explanation, these findings merit policy attention and future exploration of several hypotheses. One possible explanation is that hospitals with high postdischarge ED visit rates provide care in communities in which acute, unscheduled care is consolidated to the ED setting, thereby permitting the ED to serve a gatekeeper function for scarce inpatient resources. This hypothesis may also be supported by recent interventions demonstrating that the use of ED care coordination and geriatric ED services at higher-volume EDs can reduce hospitalizations. Also, hospitals with greater ED capacity may have easier ED access and may be able to see patients earlier in their disease courses post discharge, or more frequently in the ED for follow-up, thereby increasing ED visits but avoiding rehospitalization. Another possible explanation is that hospitals with lower postdischarge ED visit rates may also have a higher propensity to admit patients: because our definition of postdischarge ED visitation did not include ED visits that resulted in hospitalization, hospitals with a lower propensity to admit from the ED may therefore appear to have higher ED visit rates. This explanation may be further supported by our finding that many postdischarge ED visits are for conditions that are associated with discretionary hospitalization in the ED.27 A third explanation may be that poor access to outpatient care outside the hospital setting results in higher postdischarge ED visit rates without increasing the acuity of these revisits or increasing readmission rates28; however, given the validated, risk-standardized approach to readmission measurement, this is unlikely. It is also made less likely by recent work by Sabbatini et al.23 demonstrating substantial acuity among patients who return to the ED following hospital discharge. Future work should seek to evaluate the relationship between the availability of ED care-coordination services and the specific ED, hospital, and community care-coordination activities undertaken in the ED following hospital discharge to reduce readmission rates.

This work should be interpreted within the confines of its design. First, it is possible that some of the variation detected in postdischarge ED visit rates is mediated by hospital-level variation in postdischarge observation visits that are not captured in this outcome. However, in previous work, we have demonstrated that almost one-third of hospitals have no postdischarge observation stays and that most postdischarge observation stays are for more than 24 hours, which is unlikely to reflect the intensity of care of postdischarge ED visits.27 Second, our analyses were limited to Medicare FFS beneficiaries, which may limit the generalizability of this work to other patient populations. However, this dataset did include a national cohort of Medicare beneficiaries that is identical to those included in publicly reported CMS readmission measures; therefore, these results have substantial policy relevance. Third, this work was limited to 3 conditions of high illness severity of policy focus, and future work applying similar analyses to less severe conditions may find different degrees of hospital-level variation in postdischarge outcomes that are amenable to quality improvement. Finally, we assessed the rate of treat-and-discharge ED visits only after hospital discharge; this understates the frequency of ED visits since repeat ED visits and ED visits resulting in rehospitalization are not included. However, our definition was designed to mirror the definition used to assess hospital readmissions for policy purposes and is a conservative approach.

In summary, ED visits following hospital discharge are common, as Medicare beneficiaries have 1 treat-and-discharge ED visit for every 2 readmissions within 30 days of hospital discharge. Postdischarge ED visits occur for a wide variety of conditions, with wide risk-standardized, hospital-level variation. Hospitals with the highest risk-standardized postdischarge ED visitation rates demonstrated lower RSRRs, suggesting that policymakers and researchers should further examine the role of the hospital-based ED in providing access to acute care and supporting care transitions for the vulnerable Medicare population.


Disclosure

 Dr. Venkatesh received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, and grant support from the Emergency Medicine Foundation’s Health Policy Research Scholar Award during the conduct of the study; and Dr. Wang, Mr. Wang, Ms. Altaf, Dr. Bernheim, and Dr. Horwitz received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, during the conduct of the study.

References

1. Dorsey KB, Desai N, Lindenauer P, et al. 2015 Condition-Specific Measures Updates and Specifications Report Hospital-Level 30-Day Risk-Standardized Readmission Measures: AMI-Version 8.0, HF-Version 8.0, Pneumonia-Version 8.0, COPD-Version 4.0, and Stroke-Version 4.0. 2015. https://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228890435217&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DRdmn_AMIHFPNCOPDSTK_Msr_UpdtRpt.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on July 8, 2015.
2. Rising KL, White LF, Fernandez WG, Boutwell AE. Emergency department visits after hospital discharge: a missing part of the equation. Ann Emerg Med. 2013;62(2):145-150. PubMed
3. Vashi AA, Fox JP, Carr BG, et al. Use of hospital-based acute care among patients recently discharged from the hospital. JAMA. 2013;309(4):364-371. PubMed
4. Kocher KE, Nallamothu BK, Birkmeyer JD, Dimick JB. Emergency department visits after surgery are common for Medicare patients, suggesting opportunities to improve care. Health Aff (Millwood). 2013;32(9):1600-1607. PubMed
5. Krumholz HM. Post-hospital syndrome–an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100-102. PubMed
6. Baier RR, Gardner RL, Coleman EA, Jencks SF, Mor V, Gravenstein S. Shifting the dialogue from hospital readmissions to unplanned care. Am J Manag Care. 2013;19(6):450-453. PubMed
7. Schuur JD, Venkatesh AK. The growing role of emergency departments in hospital admissions. N Engl J Med. 2012;367(5):391-393. PubMed
8. Kocher KE, Dimick JB, Nallamothu BK. Changes in the source of unscheduled hospitalizations in the United States. Med Care. 2013;51(8):689-698. PubMed
9. Morganti KG, Bauhoff S, Blanchard JC, Abir M, Iyer N. The evolving role of emergency departments in the United States. Santa Monica, CA: Rand Corporation; 2013. PubMed
10. Katz EB, Carrier ER, Umscheid CA, Pines JM. Comparative effectiveness of care coordination interventions in the emergency department: a systematic review. Ann Emerg Med. 2012;60(1):12.e1-23.e1. PubMed
11. Jaquis WP, Kaplan JA, Carpenter C, et al. Transitions of Care Task Force Report. 2012. http://www.acep.org/workarea/DownloadAsset.aspx?id=91206. Accessed on January 2, 2016. 
12. Horwitz LI, Wang C, Altaf FK, et al. Excess Days in Acute Care after Hospitalization for Heart Failure (Version 1.0) Final Measure Methodology Report. 2015. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed on January 2, 2016.
13. Horwitz LI, Wang C, Altaf FK, et al. Excess Days in Acute Care after Hospitalization for Acute Myocardial Infarction (Version 1.0) Final Measure Methodology Report. 2015. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed on January 2, 2016.
14. Hennessy S, Leonard CE, Freeman CP, et al. Validation of diagnostic codes for outpatient-originating sudden cardiac death and ventricular arrhythmia in Medicaid and Medicare claims data. Pharmacoepidemiol Drug Saf. 2010;19(6):555-562. PubMed
15. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Acute Myocardial Infarction Readmission Measure Methodology. 2008. http://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228873653724&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DAMI_ReadmMeasMethod.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on February 22, 2016.
16. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Heart Failure Readmission Measure Methodology. 2008. http://69.28.93.62/wp-content/uploads/2017/01/2007-Baseline-info-on-Readmissions-krumholz.pdf. Accessed on February 22, 2016.
17. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Pneumonia Readmission Measure Methodology. 2008. http://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228873654295&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DPneumo_ReadmMeasMethod.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on February 22, 2016.
18. QualityNet. Claims-based measures: readmission measures. 2016. http://www.qualitynet.org/dcs/ContentServer?cid=1219069855273&pagename=QnetPublic%2FPage%2FQnetTier3. Accessed on December 14, 2017.
19. Agency for Healthcare Research and Quality. Clinical classifications software (CCS) for ICD-9-CM. Healthcare Cost and Utilization Project 2013; https://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed December 14, 2017.
20. Von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Prev Med. 2007;45(4):247-251. PubMed
21. Dharmarajan K, Hsieh AF, Lin Z, et al. Diagnoses and timing of 30-day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355-363. PubMed
22. Venkatesh AK, Wang C, Ross JS, et al. Hospital Use of Observation Stays: Cross-Sectional Study of the Impact on Readmission Rates. Med Care. 2016;54(12):1070-1077. PubMed
23. Sabbatini AK, Kocher KE, Basu A, Hsia RY. In-hospital outcomes and costs among patients hospitalized during a return visit to the emergency department. JAMA. 2016;315(7):663-671. PubMed
24. Pitts SR, Carrier ER, Rich EC, Kellermann AL. Where Americans get acute care: increasingly, it’s not at their doctor’s office. Health Aff (Millwood). 2010;29(9):1620-1629. PubMed
25. Ross JS, Bernheim SM, Lin Z, et al. Based on key measures, care quality for Medicare enrollees at safety-net and non-safety-net hospitals was almost equal. Health Aff (Millwood). 2012;31(8):1739-1748. PubMed
26. Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681. PubMed
27. Venkatesh A, Wang C, Suter LG, et al. Hospital Use of Observation Stays: Cross-Sectional Study of the Impact on Readmission Rates. In: Academy Health Annual Research Meeting. San Diego, CA; 2014. PubMed
28. Pittsenbarger ZE, Thurm CW, Neuman MI, et al. Hospital-level factors associated with pediatric emergency department return visits. J Hosp Med. 2017;12(7):536-543. PubMed

Journal of Hospital Medicine. 2018;13(9):589-594. Published online first March 15, 2018.

Hospital readmissions for acute myocardial infarction (AMI), heart failure, and pneumonia have become central to quality-measurement efforts by the Centers for Medicare & Medicaid Services (CMS), which seek to improve hospital care transitions through public reporting and payment programs.1 Most current measures are limited to readmissions that require inpatient hospitalization and do not capture return visits to the emergency department (ED) that do not result in readmission but rather ED discharge. These visits may reflect important needs for acute, unscheduled care during the vulnerable posthospitalization period.2-5 While previous research has suggested that nearly 10% of patients may return to the ED following hospital discharge without readmission, the characteristics of these visits among Medicare beneficiaries and the implications for national care-coordination quality-measurement initiatives have not been explored.6,7

As the locus of acute outpatient care and the primary portal of hospital admissions and readmissions, ED visits following hospital discharge may convey meaningful information about posthospitalization care transitions.8,9 In addition, recent reviews and perspectives have highlighted the role of ED care-coordination services as interventions to reduce inpatient hospitalizations and improve care transitions,10,11 yet no empirical studies have evaluated the relationship between these unique care-coordination opportunities in the ED and care-coordination outcomes, such as hospital readmissions. As policymakers seek to develop accountability measures that capture the totality of acute, unscheduled visits following hospital discharge, describing the relationship between ED visits and readmissions will be essential to providers for benchmarking and to policymakers and payers seeking to reduce the total cost of care.12,13

Accordingly, we sought to characterize the frequency, diagnoses, and hospital-level variation in treat-and-discharge ED visitation following hospital discharge for 3 conditions for which hospital readmission is publicly reported by the CMS: AMI, heart failure, and pneumonia. We also sought to evaluate the relationship between hospital-level ED visitation following hospital discharge and publicly reported, risk-standardized readmission rates (RSRRs).

METHODS

Study Design

This study was a cross-sectional analysis of Medicare beneficiaries discharged alive following hospitalization for AMI, heart failure, and pneumonia between July 2011 and June 2012.

Selection of Participants

We used Medicare Standard Analytic Files to identify inpatient hospitalizations for each disease cohort based on principal discharge diagnoses. Each condition-specific cohort was constructed to be consistent with the CMS’s readmission measures using International Classification of Diseases, 9th Revision-Clinical Modification codes to identify AMI, heart failure, and pneumonia discharges.1 We included only patients who were enrolled in fee-for-service (FFS) Medicare parts A and B for 12 months prior to their index hospitalization to maximize the capture of diagnoses for risk adjustment. Each cohort included only patients who were discharged alive while maintaining FFS coverage for at least 30 days following hospital discharge to minimize bias in outcome ascertainment. We excluded patients who were discharged against medical advice. All contiguous admissions that were identified in a transfer chain were considered to be a single admission. Hospitals with fewer than 25 condition-specific index hospital admissions were excluded from this analysis for consistency with publicly reported measures.1

Measurements

We measured postdischarge, treat-and release ED visits that occurred at any hospital within 30 days of hospital discharge from the index hospitalization. ED visits were identified as a hospital outpatient claim for ED services using hospital outpatient revenue center codes 0450, 0451, 0452, 0456, and 0981. This definition is consistent with those of previous studies.3,14 We defined postdischarge ED visits as treat-and-discharge visits or visits that did not result in inpatient readmission or observation stays. Similar to readmission measures, only 1 postdischarge ED visit was counted toward the hospital-level outcome in patients with multiple ED visits within the 30 days following hospital discharge. We defined readmission as the first unplanned, inpatient hospitalization occurring at any hospital within the 30-day period following discharge. Any subsequent inpatient admission following the 30-day period was considered a distinct index admission if it met the inclusion criteria. Consistent with CMS methods, unplanned, inpatient readmissions are from any source and are not limited to patients who were first evaluated in the ED.

Outcomes

We describe hospital-level, postdischarge ED visitation as the risk-standardized postdischarge ED visit rate. The general construct of this measure is consistent with those of prior studies that define postdischarge ED visitation as the proportion of index admissions followed by a treat-and-discharge ED visit without hospital readmission2,3; however, this outcome also incorporates a risk-standardization model with covariates that are identical to the risk-standardization approach that is used for readmission measurement.

We describe hospital-level readmission by calculating RSRRs consistent with CMS readmission measures, which are endorsed by the National Quality Forum and used for public reporting.15-17 Detailed technical documentation, including the SAS code used to replicate hospital-level measures of readmission, is available publicly through the CMS QualityNet portal.18

We calculated risk-standardized postdischarge ED visit rates and RSRRs as the ratio of the predicted number of postdischarge ED visits or readmissions for a hospital given its observed case mix to the expected number of postdischarge ED visits or readmissions based on the nation’s performance with that hospital’s case mix. This approach estimates a distinct risk-standardized postdischarge ED visit rate and RSRR for each hospital using hierarchical generalized linear models (HGLMs) with a logit link and a first-level adjustment for age, sex, and 29 clinical covariates for AMI, 35 clinical covariates for heart failure, and 38 clinical covariates for pneumonia. Each clinical covariate is identified based on inpatient and outpatient claims during the 12 months prior to the index hospitalization. The second level of the HGLM includes a random hospital-level intercept. This approach to measuring hospital readmissions accounts for the correlated nature of observed readmission rates within a hospital and reflects the assumption that, after adjustment for patient characteristics and sampling variability, the remaining variation in postdischarge ED visit rates or readmission rates reflects hospital quality.
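
A minimal numeric sketch of the predicted-to-expected ratio, assuming a fitted first-level logistic model: `xb` is each patient's covariate contribution to the linear predictor and `alpha` is the hospital's estimated random intercept (0 for an average hospital). This illustrates the arithmetic only; it is not the CMS SAS implementation.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def risk_standardized_rate(xb, alpha, national_rate):
    """Ratio of predicted events (with the hospital's random intercept)
    to expected events (average hospital, intercept 0), scaled by the
    national observed rate."""
    predicted = sum(logistic(x + alpha) for x in xb)
    expected = sum(logistic(x) for x in xb)
    return national_rate * predicted / expected
```

A hospital whose random intercept is 0 therefore gets exactly the national rate, and a positive intercept (more events than its case mix predicts) yields a higher risk-standardized rate.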

Analysis

In order to characterize treat-and-discharge postdischarge ED visits, we first described the clinical conditions that were evaluated during the first postdischarge ED visit. Based on the principal discharge diagnosis, ED visits were grouped into clinically meaningful categories using the Agency for Healthcare Research and Quality Clinical Classifications Software (CCS).19 We also report hospital-level variation in risk-standardized postdischarge ED visit rates for AMI, heart failure, and pneumonia.

Next, we examined the relationship between hospital characteristics and risk-standardized postdischarge ED visit rates. We linked hospital characteristics from the American Hospital Association (AHA) Annual Survey to the study dataset, including safety-net status, teaching status, and urban or rural status. Consistent with prior work, hospital safety-net status was defined as a hospital Medicaid caseload greater than 1 standard deviation above the mean Medicaid caseload in the hospital’s state. Approximately 94% of the hospitals in the 3 condition cohorts had complete data in the 2011 AHA Annual Survey and were included in this analysis.
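
The safety-net rule above (Medicaid caseload more than 1 standard deviation above the state mean) can be sketched as follows; the input layout is an assumption for illustration, not the AHA survey file format.

```python
from statistics import mean, pstdev

def safety_net_flags(caseloads_by_state):
    """caseloads_by_state: {state: {hospital_id: medicaid_share}}.
    Returns {hospital_id: True if Medicaid share exceeds the state mean
    by more than 1 (population) standard deviation}."""
    flags = {}
    for state, hospitals in caseloads_by_state.items():
        shares = list(hospitals.values())
        threshold = mean(shares) + pstdev(shares)
        for hid, share in hospitals.items():
            flags[hid] = share > threshold
    return flags
```

Because the threshold is computed within each state, the same Medicaid share can make a hospital safety-net in one state but not in another.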

We evaluated the relationship between postdischarge ED visit rates and hospital readmission rates in 2 ways. First, we calculated Spearman rank correlation coefficients between hospital-level, risk-standardized postdischarge ED visit rates and RSRRs. Second, we examined hospital-level variation in RSRRs across strata of risk-standardized postdischarge ED visit rates. Given the normal distribution of postdischarge ED visit rates, we grouped hospitals into quartiles of postdischarge ED visit rates, with an additional group for hospitals with no postdischarge ED visits.
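
The two comparisons above can be sketched in miniature. The Spearman implementation (rank both variables, then take the Pearson correlation of the ranks; no tie handling) and the quartile assignment below are illustrative stand-ins for a statistics package, not the study's actual SAS code.

```python
def ranks(xs):
    """1-based ranks; assumes no ties (sufficient for this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def quartile_groups(rates):
    """Quartile (0-3) of each hospital's nonzero rate; -1 flags the
    separate group of hospitals with no postdischarge ED visits."""
    nonzero = sorted(r for r in rates if r > 0)
    def group(r):
        if r == 0:
            return -1
        k = sum(1 for x in nonzero if x <= r)  # rank among nonzero rates
        return min(3, (k - 1) * 4 // len(nonzero))
    return [group(r) for r in rates]
```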

Based on preliminary analyses indicating a relationship between hospital size, measured by condition-specific index hospitalization volume, and postdischarge treat-and-discharge ED visit rates, all descriptive statistics and correlations reported are weighted by the volume of condition-specific index hospitalizations. The study was approved by the Yale University Human Research Protection Program. All analyses were conducted using SAS 9.1 (SAS Institute Inc, Cary, NC). The analytic plan and results reported in this work are in compliance with the Strengthening the Reporting of Observational Studies in Epidemiology checklist.20
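
The volume weighting described above amounts to weighting each hospital's rate by its condition-specific index-hospitalization count; a one-line sketch:

```python
def weighted_mean(rates, volumes):
    """Mean of hospital rates weighted by condition-specific index volume,
    so larger hospitals contribute proportionally more."""
    return sum(r * v for r, v in zip(rates, volumes)) / sum(volumes)
```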

RESULTS

During the 1-year study period, we included a total of 157,035 patients who were hospitalized at 1656 hospitals for AMI, 391,209 at 3044 hospitals for heart failure, and 342,376 at 3484 hospitals for pneumonia. Details of study cohort creation are available in supplementary Table 1. After hospitalization for AMI, 14,714 patients (8.4%) experienced a postdischarge ED visit and 27,214 (17.3%) an inpatient readmission within 30 days of discharge; the corresponding figures were 31,621 (7.6%) and 88,106 (22.5%) after hospitalization for heart failure and 26,681 (7.4%) and 59,352 (17.3%) after hospitalization for pneumonia.

Postdischarge ED visits were for a wide variety of conditions, with the top 10 CCS categories comprising 44% of postdischarge ED visits following AMI hospitalizations, 44% following heart failure hospitalizations, and 41% following pneumonia hospitalizations (supplementary Table 2). The first postdischarge ED visit was rarely for the same condition as the index hospitalization in the AMI cohort (224 visits; 1.5%) and the pneumonia cohort (1401 visits; 5.3%). Among patients who were originally admitted for heart failure, 10.6% of the first postdischarge ED visits were also for congestive heart failure. However, the first postdischarge ED visit was commonly for associated conditions, such as coronary artery disease in the case of AMI or chronic obstructive pulmonary disease in the case of pneumonia, although these related conditions did not comprise the majority of postdischarge ED visitation.

We found wide hospital-level variation in postdischarge ED visit rates for each condition: AMI (median: 8.3%; 5th and 95th percentile: 2.8%-14.3%), heart failure (median: 7.3%; 5th and 95th percentile: 3.0%-13.3%), and pneumonia (median: 7.1%; 5th and 95th percentile: 2.4%-13.2%; supplementary Table 3). The variation persisted after accounting for hospital case mix, as evidenced in the supplementary Figure, which describes hospital variation in risk-standardized postdischarge ED visit rates. This variation was statistically significant (P < .001), as demonstrated by the isolated relationship between the random effect and the outcome (AMI: random effect estimate 0.0849 [95% confidence interval (CI), 0.0832 to 0.0866]; heart failure: random effect estimate 0.0796 [95% CI, 0.0784 to 0.0809]; pneumonia: random effect estimate 0.0753 [95% CI, 0.0741 to 0.0764]).

Across all 3 conditions, hospitals located in rural areas had significantly higher risk-standardized postdischarge ED visit rates than hospitals located in urban areas (10.1% vs 8.6% for AMI, 8.4% vs 7.5% for heart failure, and 8.0% vs 7.4% for pneumonia). Compared with teaching hospitals, nonteaching hospitals had significantly higher risk-standardized postdischarge ED visit rates following hospital discharge for pneumonia (7.6% vs 7.1%). Safety-net hospitals also had higher risk-standardized postdischarge ED visitation rates following discharge for heart failure (8.4% vs 7.7%) and pneumonia (7.7% vs 7.3%). Risk-standardized postdischarge ED visit rates were higher in publicly owned hospitals than in nonprofit or privately owned hospitals for heart failure (8.0% vs 7.5% and 7.5%, respectively) and pneumonia (7.7% vs 7.4% and 7.3%, respectively; Table).

Among hospitals with RSRRs that were publicly reported by CMS, we found a moderate inverse correlation between risk-standardized postdischarge ED visit rates and hospital RSRRs for each condition: AMI (r = −0.23; 95% CI, −0.29 to −0.19), heart failure (r = −0.29; 95% CI, −0.34 to −0.27), and pneumonia (r = −0.18; 95% CI, −0.22 to −0.15; Figure).

DISCUSSION

Across a national cohort of Medicare beneficiaries, we found frequent treat-and-discharge ED utilization following hospital discharge for AMI, heart failure, and pneumonia, suggesting that publicly reported readmission measures capture only a portion of postdischarge acute-care use. Our findings confirm prior work describing a 30-day postdischarge ED visit rate of 8% to 9% among Medicare beneficiaries for all hospitalizations in several states.3,6 While many of the first postdischarge ED visits were for conditions related to the index hospitalization, the majority represented acute, unscheduled visits for different diagnoses. These findings are consistent with prior work studying inpatient readmissions and observation readmissions, which found similar heterogeneity in the clinical reasons for hospital return.21,22

We also described substantial hospital-level variation in risk-standardized postdischarge ED visit rates. Prior work by Vashi et al.3 demonstrated substantial variation in observed postdischarge ED visit rates and inpatient readmissions across clinical conditions in a population-level study. Our work extends this by demonstrating hospital-level variation for 3 conditions of high volume and substantial policy importance after accounting for differences in hospital case mix. Interestingly, we also found rates of postdischarge treat-and-discharge ED visitation similar to those in recent work by Sabbatini et al.23 analyzing an all-payer, adult population with any clinical condition. Taken together, these studies show the substantial volume of postdischarge acute-care utilization in the ED that is not captured by existing readmission measures.

We found several hospital characteristics to be important in describing variation in postdischarge ED visitation rates. Notably, hospitals located in rural areas and safety-net hospitals demonstrated higher postdischarge ED visitation rates. This may reflect greater use of the ED as an acute, unscheduled care access point in rural communities without access to alternative acute diagnostic and treatment services.24 Similarly, safety-net hospitals may be more likely to provide unscheduled care in the ED setting for patients with poor access to primary care. Yet, consistent with prior work, our results also indicate that these differences do not translate into different readmission rates.25 Regarding hospital teaching status, unlike prior work suggesting that teaching hospitals care for more safety-net Medicare beneficiaries,26 we found opposite patterns of postdischarge ED visitation by hospital teaching status and safety-net status following pneumonia hospitalization. This may reflect differences in the organization of acute care: patients with limited access to unscheduled primary and specialty care in safety-net communities utilize the ED, whereas patients in teaching-hospital communities may be able to access hospital-based clinics for care.

Contrary to the expectations of many clinicians and policymakers, we found an inverse relationship between postdischarge ED visit rates and readmission rates. While the cross-sectional design of our study cannot provide a causal explanation, these findings merit policy attention and future exploration of several hypotheses. One possible explanation is that hospitals with high postdischarge ED visit rates provide care in communities in which acute, unscheduled care is consolidated in the ED setting, permitting the ED to serve a gatekeeper function for scarce inpatient resources. This hypothesis may also be supported by recent interventions demonstrating that the use of ED care coordination and geriatric ED services at higher-volume EDs can reduce hospitalizations. In addition, hospitals with greater ED capacity may offer easier ED access and may be able to see patients earlier in their postdischarge disease courses, or more frequently for ED follow-up, thereby increasing ED visits while avoiding rehospitalization. Another possible explanation is that hospitals with high postdischarge ED visit rates may simply have a lower propensity to admit patients: because our definition of postdischarge ED visitation excluded ED visits that resulted in hospitalization, hospitals with a lower propensity to admit from the ED would appear to have higher treat-and-discharge ED visit rates and lower readmission rates. This explanation may be further supported by our finding that many postdischarge ED visits were for conditions that are associated with discretionary hospitalization from the ED.27 A third explanation may be that poor access to outpatient care outside the hospital setting results in higher postdischarge ED visit rates without increasing the acuity of these revisits or increasing readmission rates28; however, given the validated, risk-standardized approach to readmission measurement, this is unlikely. It is also unlikely given recent work by Sabbatini et al.23 demonstrating substantial acuity among patients who return to the ED following hospital discharge. Future work should evaluate the relationship between the availability of ED care-coordination services and the specific ED, hospital, and community care-coordination activities undertaken following hospital discharge to reduce readmission rates.

This work should be interpreted within the confines of its design. First, it is possible that some of the variation detected in postdischarge ED visit rates is mediated by hospital-level variation in postdischarge observation visits that are not captured in this outcome. However, in previous work, we have demonstrated that almost one-third of hospitals have no postdischarge observation stays and that most postdischarge observation stays are for more than 24 hours, which is unlikely to reflect the intensity of care of postdischarge ED visits.27 Second, our analyses were limited to Medicare FFS beneficiaries, which may limit the generalizability of this work to other patient populations. However, this dataset did include a national cohort of Medicare beneficiaries that is identical to those included in publicly reported CMS readmission measures; therefore, these results have substantial policy relevance. Third, this work was limited to 3 conditions of high illness severity of policy focus, and future work applying similar analyses to less severe conditions may find different degrees of hospital-level variation in postdischarge outcomes that are amenable to quality improvement. Finally, we assessed the rate of treat-and-discharge ED visits only after hospital discharge; this understates the frequency of ED visits since repeat ED visits and ED visits resulting in rehospitalization are not included. However, our definition was designed to mirror the definition used to assess hospital readmissions for policy purposes and is a conservative approach.

In summary, ED visits following hospital discharge are common, as Medicare beneficiaries have 1 treat-and-discharge ED visit for every 2 readmissions within 30 days of hospital discharge. Postdischarge ED visits occur for a wide variety of conditions, with wide risk-standardized, hospital-level variation. Hospitals with the highest risk-standardized postdischarge ED visitation rates demonstrated lower RSRRs, suggesting that policymakers and researchers should further examine the role of the hospital-based ED in providing access to acute care and supporting care transitions for the vulnerable Medicare population.

Disclosure

Dr. Venkatesh received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, and grant support from the Emergency Medicine Foundation’s Health Policy Research Scholar Award during the conduct of the study; and Dr. Wang, Mr. Wang, Ms. Altaf, Dr. Bernheim, and Dr. Horwitz received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, during the conduct of the study.

Hospital readmissions for acute myocardial infarction (AMI), heart failure, and pneumonia have become central to quality-measurement efforts by the Centers for Medicare & Medicaid Services (CMS), which seek to improve hospital care transitions through public reporting and payment programs.1 Most current measures are limited to readmissions that require inpatient hospitalization and do not capture return visits to the emergency department (ED) that do not result in readmission but rather ED discharge. These visits may reflect important needs for acute, unscheduled care during the vulnerable posthospitalization period.2-5 While previous research has suggested that nearly 10% of patients may return to the ED following hospital discharge without readmission, the characteristics of these visits among Medicare beneficiaries and the implications for national care-coordination quality-measurement initiatives have not been explored.6,7

As the locus of acute outpatient care and the primary portal of hospital admissions and readmissions, ED visits following hospital discharge may convey meaningful information about posthospitalization care transitions.8,9 In addition, recent reviews and perspectives have highlighted the role of ED care-coordination services as interventions to reduce inpatient hospitalizations and improve care transitions,10,11 yet no empirical studies have evaluated the relationship between these unique care-coordination opportunities in the ED and care-coordination outcomes, such as hospital readmissions. As policymakers seek to develop accountability measures that capture the totality of acute, unscheduled visits following hospital discharge, describing the relationship between ED visits and readmissions will be essential to providers for benchmarking and to policymakers and payers seeking to reduce the total cost of care.12,13

Accordingly, we sought to characterize the frequency, diagnoses, and hospital-level variation in treat-and-discharge ED visitation following hospital discharge for 3 conditions for which hospital readmission is publicly reported by the CMS: AMI, heart failure, and pneumonia. We also sought to evaluate the relationship between hospital-level ED visitation following hospital discharge and publicly reported, risk-standardized readmission rates (RSRRs).

METHODS

Study Design

This study was a cross-sectional analysis of Medicare beneficiaries discharged alive following hospitalization for AMI, heart failure, and pneumonia between July 2011 and June 2012.

Selection of Participants

We used Medicare Standard Analytic Files to identify inpatient hospitalizations for each disease cohort based on principal discharge diagnoses. Each condition-specific cohort was constructed to be consistent with the CMS’s readmission measures using International Classification of Diseases, 9th Revision-Clinical Modification codes to identify AMI, heart failure, and pneumonia discharges.1 We included only patients who were enrolled in fee-for-service (FFS) Medicare parts A and B for 12 months prior to their index hospitalization to maximize the capture of diagnoses for risk adjustment. Each cohort included only patients who were discharged alive while maintaining FFS coverage for at least 30 days following hospital discharge to minimize bias in outcome ascertainment. We excluded patients who were discharged against medical advice. All contiguous admissions that were identified in a transfer chain were considered to be a single admission. Hospitals with fewer than 25 condition-specific index hospital admissions were excluded from this analysis for consistency with publicly reported measures.1

Measurements

We measured postdischarge, treat-and release ED visits that occurred at any hospital within 30 days of hospital discharge from the index hospitalization. ED visits were identified as a hospital outpatient claim for ED services using hospital outpatient revenue center codes 0450, 0451, 0452, 0456, and 0981. This definition is consistent with those of previous studies.3,14 We defined postdischarge ED visits as treat-and-discharge visits or visits that did not result in inpatient readmission or observation stays. Similar to readmission measures, only 1 postdischarge ED visit was counted toward the hospital-level outcome in patients with multiple ED visits within the 30 days following hospital discharge. We defined readmission as the first unplanned, inpatient hospitalization occurring at any hospital within the 30-day period following discharge. Any subsequent inpatient admission following the 30-day period was considered a distinct index admission if it met the inclusion criteria. Consistent with CMS methods, unplanned, inpatient readmissions are from any source and are not limited to patients who were first evaluated in the ED.

 

 

Outcomes

We describe hospital-level, postdischarge ED visitation as the risk-standardized postdischarge ED visit rate. The general construct of this measure is consistent with those of prior studies that define postdischarge ED visitation as the proportion of index admissions followed by a treat-and-discharge ED visit without hospital readmission2,3; however, this outcome also incorporates a risk-standardization model with covariates that are identical to the risk-standardization approach that is used for readmission measurement.

We describe hospital-level readmission by calculating RSRRs consistent with CMS readmission measures, which are endorsed by the National Quality Forum and used for public reporting.15-17 Detailed technical documentation, including the SAS code used to replicate hospital-level measures of readmission, are available publicly through the CMS QualityNet portal.18

We calculated risk-standardized postdischarge ED visit rates and RSRRs as the ratio of the predicted number of postdischarge ED visits or readmissions for a hospital given its observed case mix to the expected number of postdischarge ED visits or readmissions based on the nation’s performance with that hospital’s case mix, respectively. This approach estimates a distinct risk-standardized postdischarge ED visit rate and RSRR for each hospital using hierarchical generalized linear models (HGLMs) and using a logit link with a first-level adjustment for age, sex, 29 clinical covariates for AMI, 35 clinical covariates for heart failure, and 38 clinical covariates for pneumonia. Each clinical covariate is identified based on inpatient and outpatient claims during the 12 months prior to the index hospitalization. The second level of the HGLM includes a random hospital-level intercept. This approach to measuring hospital readmissions accounts for the correlated nature of observed readmission rates within a hospital and reflects the assumption that after adjustment for patient characteristics and sampling variability, the remaining variation in postdischarge ED visit rates or readmission rates reflects hospital quality.

Analysis

In order to characterize treat-and-discharge postdischarge ED visits, we first described the clinical conditions that were evaluated during the first postdischarge ED visit. Based on the principal discharge diagnosis, ED visits were grouped into clinically meaningful categories using the Agency for Healthcare Research and Quality Clinical Classifications Software (CCS).19 We also report hospital-level variation in risk-standardized postdischarge ED visit rates for AMI, heart failure, and pneumonia.

Next, we examined the relationship between hospital characteristics and risk-standardized postdischarge ED visit rates. We linked hospital characteristics from the American Hospital Association (AHA) Annual Survey to the study dataset, including the following: safety-net status, teaching status, and urban or rural status. Consistent with prior work, hospital safety-net status was defined as a hospital Medicaid caseload greater than 1 standard deviation above the mean Medicaid caseload in the hospital’s state. Approximately 94% of the hospitals included in the 3 condition cohorts in the dataset had complete data in the 2011 AHA Annual Survey to be included in this analysis.

We evaluated the relationship between postdischarge ED visit rates and hospital readmission rates in 2 ways. First, we calculated Spearman rank correlation coefficients between hospital-level, risk-standardized postdischarge ED visit rates and RSRRs. Second, we calculated hospital-level variation in RSRRs based on the strata of risk-standardized postdischarge ED visit rates. Given the normal distribution of postdischarge ED visit rates, we grouped hospitals by quartile of postdischarge ED visit rates and 1 group for hospitals with no postdischarge ED visits.

Based on preliminary analyses indicating a relationship between hospital size, measured by condition-specific index hospitalization volume, and postdischarge treat-and-discharge ED visit rates, all descriptive statistics and correlations reported are weighted by the volume of condition-specific index hospitalizations. The study was approved by the Yale University Human Research Protection Program. All analyses were conducted using SAS 9.1 (SAS Institute Inc, Cary, NC). The analytic plan and results reported in this work are in compliance with the Strengthening the Reporting of Observational Studies in Epidemiology checklist.20

RESULTS

During the 1-year study period, we included a total of 157,035 patients who were hospitalized at 1656 hospitals for AMI, 391,209 at 3044 hospitals for heart failure, and 342,376 at 3484 hospitals for pneumonia. Details of study cohort creation are available in supplementary Table 1. After hospitalization for AMI, 14,714 patients experienced a postdischarge ED visit (8.4%) and 27,214 an inpatient readmissions (17.3%) within 30 days of discharge; 31,621 (7.6%) and 88,106 (22.5%) patients after hospitalization for heart failure and 26,681 (7.4%) and 59,352 (17.3%) patients after hospitalization for pneumonia experienced a postdischarge ED visit and an inpatient readmission within 30 days of discharge, respectively.

Postdischarge ED visits were for a wide variety of conditions, with the top 10 CCS categories comprising 44% of postdischarge ED visits following AMI hospitalizations, 44% of following heart failure hospitalizations, and 41% following pneumonia hospitalizations (supplementary Table 2). The first postdischarge ED visit was rarely for the same condition as the index hospitalization in the AMI cohort (224 visits; 1.5%) as well as the pneumonia cohort (1401 visits; 5.3%). Among patients who were originally admitted for heart failure, 10.6% of the first postdischarge ED visits were also for congestive heart failure. However, the first postdischarge ED visit was commonly for associated conditions, such as coronary artery disease in the case of AMI or chronic obstructive pulmonary disease in the case of pneumonia, albeit these related conditions did not comprise the majority of postdischarge ED visitation.

We found wide hospital-level variation in postdischarge ED visit rates for each condition: AMI (median: 8.3%; 5th and 95th percentile: 2.8%-14.3%), heart failure (median: 7.3%; 5th and 95th percentile: 3.0%-13.3%), and pneumonia (median: 7.1%; 5th and 95th percentile: 2.4%-13.2%; supplementary Table 3). The variation persisted after accounting for hospital case mix, as evidenced in the supplementary Figure, which describes hospital variation in risk-standardized postdischarge ED visit rates. This variation was statistically significant (P < .001), as demonstrated by the isolated relationship between the random effect and the outcome (AMI: random effect estimate 0.0849 [95% confidence interval (CI), 0.0832 to 0.0866]; heart failure: random effect estimate 0.0796 [95% CI, 0.0784 to 0.0809]; pneumonia: random effect estimate 0.0753 [95% CI, 0.0741 to 0.0764]).

Across all 3 conditions, hospitals located in rural areas had significantly higher risk-standardized postdischarge ED visit rates than hospitals located in urban areas (10.1% vs 8.6% for AMI, 8.4% vs 7.5% for heart failure, and 8.0% vs 7.4% for pneumonia). In comparison to teaching hospitals, nonteaching hospitals had significantly higher risk-standardized postdischarge ED visit rates following hospital discharge for pneumonia (7.6% vs 7.1%). Safety-net hospitals also had higher risk-standardized postdischarge ED visitation rates following discharge for heart failure (8.4% vs 7.7%) and pneumonia (7.7% vs 7.3%). Risk-standardized postdischarge ED visit rates were higher in publicly owned hospitals than in nonprofit or privately owned hospitals for heart failure (8.0% vs 7.5% in nonprofit hospitals or 7.5% in private hospitals) and pneumonia (7.7% vs 7.4% in nonprofit hospitals and 7.3% in private hospitals; Table).



Among hospitals with RSRRs that were publicly reported by CMS, we found a moderate inverse correlation between risk-standardized postdischarge ED visit rates and hospital RSRRs for each condition: AMI (r = −0.23; 95% CI, −0.29 to −0.19), heart failure (r = −0.29; 95% CI, −0.34 to −0.27), and pneumonia (r = −0.18; 95% CI, −0.22 to −0.15; Figure).

 

 

DISCUSSION

Across a national cohort of Medicare beneficiaries, we found frequent treat-and-discharge ED utilization following hospital discharge for AMI, heart failure, and pneumonia, suggesting that publicly reported readmission measures are capturing only a portion of postdischarge acute-care use. Our findings confirm prior work describing a 30-day postdischarge ED visit rate of 8% to 9% among Medicare beneficiaries for all hospitalizations in several states.3,6While many of the first postdischarge ED visits were for conditions related to the index hospitalization, the majority represent acute, unscheduled visits for different diagnoses. These findings are consistent with prior work studying inpatient readmissions and observation readmissions that find similar heterogeneity in the clinical reasons for hospital return.21,22

We also described substantial hospital-level variation in risk-standardized ED postdischarge rates. Prior work by Vashi et al.3 demonstrated substantial variation in observed postdischarge ED visit rates and inpatient readmissions following hospital discharge between clinical conditions in a population-level study. Our work extends upon this by demonstrating hospital-level variation for 3 conditions of high volume and substantial policy importance after accounting for differences in hospital case mix. Interestingly, our work also found similar rates of postdischarge ED treat-and-discharge visitation as recent work by Sabbatini et al.23 analyzing an all-payer, adult population with any clinical condition. Taken together, these studies show the substantial volume of postdischarge acute-care utilization in the ED not captured by existing readmission measures.

We found several hospital characteristics of importance in describing variation in postdischarge ED visitation rates. Notably, hospitals located in rural areas and safety-net hospitals demonstrated higher postdischarge ED visitation rates. This may reflect a higher use of the ED as an acute, unscheduled care access point in rural communities without access to alternative acute diagnostic and treatment services.24 Similarly, safety-net hospitals may be more likely to provide unscheduled care for patients with poor access to primary care in the ED setting. Yet, consistent with prior work, our results also indicate that these differences do not result in different readmission rates.25 Regarding hospital teaching status, unlike prior work suggesting that teaching hospitals care for more safety-net Medicare beneficiaries,26 our work found opposite patterns of postdischarge ED visitation between hospital teaching and safety-net status following pneumonia hospitalization. This may reflect differences in the organization of acute care as patients with limited access to unscheduled primary and specialty care in safety-net communities utilize the ED, whereas patients in teaching-hospital communities may be able to access hospital-based clinics for care.

Contrary to the expectations of many clinicians and policymakers, we found an inverse relationship between postdischarge ED visit rates and readmission rates. While the cross-sectional design of our study cannot provide a causal explanation, these findings merit policy attention and future exploration of several hypotheses. One possible explanation for this finding is that hospitals with high postdischarge ED visit rates provide care in communities in which acute, unscheduled care is consolidated to the ED setting and thereby permits the ED to serve a gatekeeper function for scarce inpatient resources. This hypothesis may also be supported by recent interventions demonstrating that the use of ED care coordination and geriatric ED services at higher-volume EDs can reduce hospitalizations. Also, hospitals with greater ED capacity may have easier ED access and may be able to see patients earlier in their disease courses post discharge or more frequently in the ED for follow-up, therefore increasing ED visits but avoiding rehospitalization. Another possible explanation is that hospitals with lower postdischarge ED visit rates may also have a lower propensity to admit patients. Because our definition of postdischarge ED visitation did not include ED visits that result in hospitalization, hospitals with a lower propensity to admit from the ED may therefore appear to have higher ED visit rates. This explanation may be further supported by our finding that many postdischarge ED visits are for conditions that are associated with discretionary hospitalization in the ED.27 A third explanation for this finding may be that poor access to outpatient care outside the hospital setting results in higher postdischarge ED visit rates without increasing the acuity of these revisits or increasing readmission rates28; however, given the validated, risk-standardized approach to readmission measurement, this is unlikely. 
Recent work by Sabbatini et al.,23 demonstrating substantial acuity among patients who return to the ED following hospital discharge, further argues against this explanation. Future work should seek to evaluate the relationship between the availability of ED care-coordination services and the specific ED, hospital, and community care-coordination activities undertaken in the ED following hospital discharge to reduce readmission rates.

This work should be interpreted within the confines of its design. First, it is possible that some of the variation detected in postdischarge ED visit rates is mediated by hospital-level variation in postdischarge observation visits that are not captured in this outcome. However, in previous work, we have demonstrated that almost one-third of hospitals have no postdischarge observation stays and that most postdischarge observation stays are for more than 24 hours, which is unlikely to reflect the intensity of care of postdischarge ED visits.27 Second, our analyses were limited to Medicare FFS beneficiaries, which may limit the generalizability of this work to other patient populations. However, this dataset did include a national cohort of Medicare beneficiaries that is identical to those included in publicly reported CMS readmission measures; therefore, these results have substantial policy relevance. Third, this work was limited to 3 conditions of high illness severity of policy focus, and future work applying similar analyses to less severe conditions may find different degrees of hospital-level variation in postdischarge outcomes that are amenable to quality improvement. Finally, we assessed the rate of treat-and-discharge ED visits only after hospital discharge; this understates the frequency of ED visits since repeat ED visits and ED visits resulting in rehospitalization are not included. However, our definition was designed to mirror the definition used to assess hospital readmissions for policy purposes and is a conservative approach.

In summary, ED visits following hospital discharge are common: Medicare beneficiaries have 1 treat-and-discharge ED visit for every 2 readmissions within 30 days of hospital discharge. Postdischarge ED visits occur for a wide variety of conditions, with wide hospital-level variation in risk-standardized rates. Hospitals with the highest risk-standardized postdischarge ED visitation rates demonstrated lower RSRRs, suggesting that policymakers and researchers should further examine the role of the hospital-based ED in providing access to acute care and supporting care transitions for the vulnerable Medicare population.

Disclosure

 Dr. Venkatesh received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, and grant support from the Emergency Medicine Foundation’s Health Policy Research Scholar Award during the conduct of the study; and Dr. Wang, Mr. Wang, Ms. Altaf, Dr. Bernheim, and Dr. Horwitz received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, during the conduct of the study.

References

1. Dorsey KB GJ, Desai N, Lindenauer P, et al. 2015 Condition-Specific Measures Updates and Specifications Report Hospital-Level 30-Day Risk-Standardized Readmission Measures: AMI-Version 8.0, HF-Version 8.0, Pneumonia-Version 8.0, COPD-Version 4.0, and Stroke-Version 4.0. 2015. https://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228890435217&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DRdmn_AMIHFPNCOPDSTK_Msr_UpdtRpt.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on July 8, 2015.
2. Rising KL, White LF, Fernandez WG, Boutwell AE. Emergency department visits after hospital discharge: a missing part of the equation. Ann Emerg Med. 2013;62(2):145-150. PubMed
3. Vashi AA, Fox JP, Carr BG, et al. Use of hospital-based acute care among patients recently discharged from the hospital. JAMA. 2013;309(4):364-371. PubMed
4. Kocher KE, Nallamothu BK, Birkmeyer JD, Dimick JB. Emergency department visits after surgery are common for Medicare patients, suggesting opportunities to improve care. Health Aff (Millwood). 2013;32(9):1600-1607. PubMed
5. Krumholz HM. Post-hospital syndrome–an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100-102. PubMed
6. Baier RR, Gardner RL, Coleman EA, Jencks SF, Mor V, Gravenstein S. Shifting the dialogue from hospital readmissions to unplanned care. Am J Manag Care. 2013;19(6):450-453. PubMed
7. Schuur JD, Venkatesh AK. The growing role of emergency departments in hospital admissions. N Engl J Med. 2012;367(5):391-393. PubMed
8. Kocher KE, Dimick JB, Nallamothu BK. Changes in the source of unscheduled hospitalizations in the United States. Med Care. 2013;51(8):689-698. PubMed
9. Morganti KG, Bauhoff S, Blanchard JC, Abir M, Iyer N. The evolving role of emergency departments in the United States. Santa Monica, CA: Rand Corporation; 2013. PubMed
10. Katz EB, Carrier ER, Umscheid CA, Pines JM. Comparative effectiveness of care coordination interventions in the emergency department: a systematic review. Ann Emerg Med. 2012;60(1):12.e1-23.e1. PubMed
11. Jaquis WP, Kaplan JA, Carpenter C, et al. Transitions of Care Task Force Report. 2012. http://www.acep.org/workarea/DownloadAsset.aspx?id=91206. Accessed on January 2, 2016. 
12. Horwitz LI, Wang C, Altaf FK, et al. Excess Days in Acute Care after Hospitalization for Heart Failure (Version 1.0) Final Measure Methodology Report. 2015. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed on January 2, 2016.
13. Horwitz LI, Wang C, Altaf FK, et al. Excess Days in Acute Care after Hospitalization for Acute Myocardial Infarction (Version 1.0) Final Measure Methodology Report. 2015. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed on January 2, 2016.
14. Hennessy S, Leonard CE, Freeman CP, et al. Validation of diagnostic codes for outpatient-originating sudden cardiac death and ventricular arrhythmia in Medicaid and Medicare claims data. Pharmacoepidemiol Drug Saf. 2010;19(6):555-562. PubMed
15. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Acute Myocardial Infarction Readmission Measure Methodology. 2008. http://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228873653724&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DAMI_ReadmMeasMethod.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on February 22, 2016.
16. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Heart Failure Readmission Measure Methodology. 2008. http://69.28.93.62/wp-content/uploads/2017/01/2007-Baseline-info-on-Readmissions-krumholz.pdf. Accessed on February 22, 2016.
17. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Pneumonia Readmission Measure Methodology. 2008. http://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228873654295&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DPneumo_ReadmMeasMethod.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on February 22, 2016.
18. QualityNet. Claims-based measures: readmission measures. 2016. http://www.qualitynet.org/dcs/ContentServer?cid=1219069855273&pagename=QnetPublic%2FPage%2FQnetTier3. Accessed on December 14, 2017.
19. Agency for Healthcare Research and Quality. Clinical classifications software (CCS) for ICD-9-CM. Healthcare Cost and Utilization Project 2013; https://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed December 14, 2017.
20. Von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Prev Med. 2007;45(4):247-251. PubMed
21. Dharmarajan K, Hsieh AF, Lin Z, et al. Diagnoses and timing of 30-day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355-363. PubMed
22. Venkatesh AK, Wang C, Ross JS, et al. Hospital Use of Observation Stays: Cross-Sectional Study of the Impact on Readmission Rates. Med Care. 2016;54(12):1070-1077. PubMed
23. Sabbatini AK, Kocher KE, Basu A, Hsia RY. In-hospital outcomes and costs among patients hospitalized during a return visit to the emergency department. JAMA. 2016;315(7):663-671. PubMed
24. Pitts SR, Carrier ER, Rich EC, Kellermann AL. Where Americans get acute care: increasingly, it’s not at their doctor’s office. Health Aff (Millwood). 2010;29(9):1620-1629. PubMed
25. Ross JS, Bernheim SM, Lin Z, et al. Based on key measures, care quality for Medicare enrollees at safety-net and non-safety-net hospitals was almost equal. Health Aff (Millwood). 2012;31(8):1739-1748. PubMed
26. Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681. PubMed
27. Venkatesh A, Wang C, Suter LG, et al. Hospital Use of Observation Stays: Cross-Sectional Study of the Impact on Readmission Rates. In: Academy Health Annual Research Meeting. San Diego, CA; 2014. PubMed
28. Pittsenbarger ZE, Thurm CW, Neuman MI, et al. Hospital-level factors associated with pediatric emergency department return visits. J Hosp Med. 2017;12(7):536-543. PubMed


Issue
Journal of Hospital Medicine 13(9)
Page Number
589-594. Published online first March 15, 2018
© 2018 Society of Hospital Medicine

Correspondence Location
Arjun K. Venkatesh, MD, MBA, MHS, 1 Church St., 2nd Floor, New Haven, CT 06510; Telephone: 203-764-5700; Fax: 203-764-5653; E-mail: arjun.venkatesh@yale.edu

Mortality and Readmission Correlations

Correlations among risk‐standardized mortality rates and among risk‐standardized readmission rates within hospitals

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality-of-care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition-specific process measures and either mortality or readmission rates for those conditions.4-6 Mortality and readmission rates may instead reflect broader hospital-wide or specialty-wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,12-14 academic status,12 and other institution-wide factors.12 There is now a strong policy push towards developing hospital-wide (all-condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent among different conditions cared for in the same hospital. If mortality (or readmission) rates across different conditions are highly correlated, it would suggest that hospital‐wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, smaller or nonteaching hospitals might be more homogenous in their care than larger, less homogeneous institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals, and separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk-standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with 1 or more eligible cases of AMI, HF, or pneumonia annually based on fee-for-service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for these conditions over the 3-year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not-for-profit, for-profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core-based statistical area (division [subarea of an area with an urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than 1 standard deviation above their respective state's mean private hospital Medicaid caseload using the 2007 AHA Annual Survey data.

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are 65 years or older, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributable to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributable to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For the mortality measures, only 1 hospitalization per patient in a specific year is randomly selected if the patient has multiple hospitalizations in that year. For the readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.
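The transfer-attribution rules above can be sketched as follows (an illustrative sketch only, not the CMS measure specification; the function name and episode representation are hypothetical):

```python
def attribute_episode(hospital_sequence, outcome):
    """Attribute one episode of care (a chain of acute care hospitals,
    in admission order) to a single hospital, mirroring the rules
    described above: deaths are attributed to the initial admitting
    hospital; readmissions to the final discharging hospital."""
    if outcome == "mortality":
        return hospital_sequence[0]   # transfers out still count here
    if outcome == "readmission":
        return hospital_sequence[-1]  # transfers in count here
    raise ValueError(f"unknown outcome: {outcome!r}")

# A patient admitted to hospital "A" and transferred to hospital "B":
# a death within 30 days counts against "A", while a 30-day
# readmission counts against "B".
```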

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23

The derivation and validation of the risk-standardized outcome measures have been previously reported.20, 21, 23-27 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital-specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital-specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30-day rate.
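This predicted-to-expected ratio reduces to a one-line computation (a minimal sketch with made-up inputs; the actual measures derive the patient-level probabilities from the hierarchical models described above):

```python
def risk_standardized_rate(predicted, expected, unadjusted_rate):
    """Ratio of predicted outcomes (model applying the hospital-specific
    effect) to expected outcomes (model applying the average effect
    among hospitals), multiplied by the unadjusted overall 30-day
    rate, expressed as a percentage."""
    return (sum(predicted) / sum(expected)) * unadjusted_rate

# A hospital predicted to have 20% more events than expected for its
# case mix lands above a hypothetical 11.5% national rate:
rsmr = risk_standardized_rate([0.6, 0.6], [0.5, 0.5], 11.5)  # 13.8
```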

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.
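The subgroup comparison of Pearson correlations can be illustrated with the standard Fisher z-transformation (a sketch of the general approach rather than the authors' exact code; the inputs below are the published point estimates from Table 3 for the HF-pneumonia readmission correlation in hospitals with >600 beds versus <300 beds):

```python
import math

def compare_independent_correlations(r1, n1, r2, n2):
    """Two-tailed Fisher z-test of whether two Pearson correlations
    estimated in independent hospital subgroups differ."""
    z1, z2 = math.atanh(r1), math.atanh(r2)        # Fisher z-transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2.0))         # 2 * normal tail area
    return z, p

# HF-PN readmission correlation: r = 0.66 (n = 156) in hospitals with
# >600 beds vs r = 0.44 (n = 3505) in hospitals with <300 beds.
z, p = compare_independent_correlations(0.66, 156, 0.44, 3505)
```

Under these inputs the difference is significant at conventional thresholds, consistent with the subgroup P values reported in Table 3.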

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. The majority of hospitals were small and nonteaching and did not have advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Hospital Characteristics for Each Cohort

Description                     Mortality Measures     Readmission Measures
                                (N = 4559), N (%)*     (N = 4468), N (%)*
No. of beds
  >600                          157 (3.4)              156 (3.5)
  300-600                       628 (13.8)             626 (14.0)
  <300                          3588 (78.7)            3505 (78.5)
  Unknown                       186 (4.08)             181 (4.1)
  Mean (SD)                     173.24 (189.52)        175.23 (190.00)
Ownership
  Not-for-profit                2650 (58.1)            2619 (58.6)
  For-profit                    672 (14.7)             663 (14.8)
  Government                    1051 (23.1)            1005 (22.5)
  Unknown                       186 (4.1)              181 (4.1)
Teaching status
  COTH                          277 (6.1)              276 (6.2)
  Teaching                      505 (11.1)             503 (11.3)
  Nonteaching                   3591 (78.8)            3508 (78.5)
  Unknown                       186 (4.1)              181 (4.1)
Cardiac facility type
  CABG                          1471 (32.3)            1467 (32.8)
  Cath lab                      578 (12.7)             578 (12.9)
  Neither                       2324 (51.0)            2242 (50.2)
  Unknown                       186 (4.1)              181 (4.1)
Core-based statistical area
  Division                      621 (13.6)             618 (13.8)
  Metro                         1850 (40.6)            1835 (41.1)
  Micro                         801 (17.6)             788 (17.6)
  Rural                         1101 (24.2)            1046 (23.4)
  Unknown                       186 (4.1)              181 (4.1)
Safety net status
  No                            2995 (65.7)            2967 (66.4)
  Yes                           1377 (30.2)            1319 (29.5)
  Unknown                       187 (4.1)              182 (4.1)

Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

For the mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13-171), and the greatest was for pneumonia (178; IQR, 87-336). The same pattern held for the readmission measures (AMI: median 33; IQR, 9-150; pneumonia: median 191; IQR, 95-352.5). With respect to mortality measures, AMI had the highest rate and HF the lowest rate; however, for readmission measures, HF had the highest rate and pneumonia the lowest rate (Table 2).

Hospital Volume and Risk-Standardized Rates for Each Condition in the Mortality and Readmission Cohorts

                     Mortality Measures (N = 4559)                        Readmission Measures (N = 4468)
Description          AMI               HF                PN               AMI               HF                PN
Total discharges     558,653           1,094,960         1,114,706        546,514           1,314,394         1,152,708
Hospital volume
  Mean (SD)          122.54 (172.52)   240.18 (271.35)   244.51 (220.74)  122.32 (201.78)   294.18 (333.2)    257.99 (228.5)
  Median (IQR)       48 (13, 171)      142 (56, 337)     178 (87, 336)    33 (9, 150)       172.5 (68, 407)   191 (95, 352.5)
  Range (min, max)   1, 1379           1, 2814           1, 2241          1, 1611           1, 3410           2, 2359
30-day risk-standardized rate*
  Mean (SD)          15.7 (1.8)        10.9 (1.6)        11.5 (1.9)       19.9 (1.5)        24.8 (2.1)        18.5 (1.7)
  Median (IQR)       15.7 (14.5, 16.8) 10.8 (9.9, 11.9)  11.3 (10.2, 12.6) 19.9 (18.9, 20.8) 24.7 (23.4, 26.1) 18.4 (17.3, 19.5)
  Range (min, max)   10.3, 24.6        6.6, 18.2         6.7, 20.9        15.2, 26.3        17.3, 32.4        13.6, 26.7

Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27–0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32–0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).

Correlations Between Risk‐Standardized Mortality Rates and Between Risk‐Standardized Readmission Rates for Subgroups of Hospitals

Description | N | AMI and HF | AMI and PN | HF and PN | N | AMI and HF | AMI and PN | HF and PN
(First four columns: mortality measures; last four: readmission measures. Subgroup heading rows marked "(P)" show the P value for comparing correlations across that subgroup's categories.)
  • NOTE: P value is the minimum P value of pairwise comparisons within each subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia; r, Pearson correlation coefficient.

All | 4559 | 0.30 | 0.27 | 0.41 | 4468 | 0.38 | 0.32 | 0.47
Hospitals with ≥25 patients | 2872 | 0.33 | 0.30 | 0.44 | 2467 | 0.44 | 0.38 | 0.51
No. of beds (P) |  | 0.15 | 0.005 | 0.0009 |  | <0.0001 | <0.0001 | <0.0001
>600 | 157 | 0.38 | 0.43 | 0.51 | 156 | 0.67 | 0.50 | 0.66
300–600 | 628 | 0.29 | 0.30 | 0.49 | 626 | 0.54 | 0.45 | 0.58
<300 | 3588 | 0.27 | 0.23 | 0.37 | 3505 | 0.30 | 0.26 | 0.44
Ownership (P) |  | 0.021 | 0.05 | 0.39 |  | 0.0004 | 0.0004 | 0.003
Not‐for‐profit | 2650 | 0.32 | 0.28 | 0.42 | 2619 | 0.43 | 0.36 | 0.50
For‐profit | 672 | 0.30 | 0.23 | 0.40 | 663 | 0.29 | 0.22 | 0.40
Government | 1051 | 0.24 | 0.22 | 0.39 | 1005 | 0.32 | 0.29 | 0.45
Teaching status (P) |  | 0.11 | 0.08 | 0.0012 |  | <0.0001 | 0.0002 | 0.0003
COTH | 277 | 0.31 | 0.34 | 0.54 | 276 | 0.54 | 0.47 | 0.59
Teaching | 505 | 0.22 | 0.28 | 0.43 | 503 | 0.52 | 0.42 | 0.56
Nonteaching | 3591 | 0.29 | 0.24 | 0.39 | 3508 | 0.32 | 0.26 | 0.44
Cardiac facility type (P) |  | 0.022 | 0.006 | <0.0001 |  | <0.0001 | 0.0006 | 0.004
CABG | 1471 | 0.33 | 0.29 | 0.47 | 1467 | 0.48 | 0.37 | 0.52
Cath lab | 578 | 0.25 | 0.26 | 0.36 | 578 | 0.32 | 0.37 | 0.47
Neither | 2324 | 0.26 | 0.21 | 0.36 | 2242 | 0.28 | 0.27 | 0.44
Core‐based statistical area (P) |  | 0.0001 | <0.0001 | 0.002 |  | <0.0001 | <0.0001 | <0.0001
Division | 621 | 0.38 | 0.34 | 0.41 | 618 | 0.46 | 0.40 | 0.56
Metro | 1850 | 0.26 | 0.26 | 0.42 | 1835 | 0.38 | 0.30 | 0.40
Micro | 801 | 0.23 | 0.22 | 0.34 | 788 | 0.32 | 0.30 | 0.47
Rural | 1101 | 0.21 | 0.13 | 0.32 | 1046 | 0.22 | 0.21 | 0.44
Safety net status (P) |  | 0.001 | 0.027 | 0.68 |  | 0.029 | 0.037 | 0.28
No | 2995 | 0.33 | 0.28 | 0.41 | 2967 | 0.40 | 0.33 | 0.48
Yes | 1377 | 0.23 | 0.21 | 0.40 | 1319 | 0.34 | 0.30 | 0.45

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each analysis, this single common factor accounted for more than half of the total variance based on the cumulative eigenvalue (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the factor loadings of the RSMR for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) were all high (0.68 for MI, 0.78 for HF, and 0.76 for PN); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
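The eigenvalue step of this analysis can be illustrated from the published pairwise correlations alone. The sketch below applies Kaiser's eigenvalue >1 criterion to a 3×3 correlation matrix built from the readmission coefficients reported in Table 3; it is only an illustration of the criterion, not the study's factor analysis, which was run on the hospital-level data.

```python
import numpy as np

# Pairwise readmission correlations from Table 3 (AMI-HF, AMI-PN, HF-PN).
R = np.array([
    [1.00, 0.38, 0.32],   # AMI
    [0.38, 1.00, 0.47],   # HF
    [0.32, 0.47, 1.00],   # PN
])

eigenvalues = np.linalg.eigvalsh(R)[::-1]       # sorted, largest first
n_factors = int((eigenvalues > 1).sum())        # Kaiser criterion: retain eigenvalues > 1
explained = eigenvalues[0] / eigenvalues.sum()  # share of total variance in the first factor
```

Running this yields a single retained factor explaining roughly 60% of the variance, matching the readmission result reported above.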

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Less than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.

Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs

Condition Pair | Same Quartile (Any), % | Same Quartile (Q1 or Q4), % | Q1 in One and Q4 in the Other, % | Weighted Kappa | Spearman Correlation
  • Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.

Mortality
MI and HF | 34.8 | 20.2 | 7.9 | 0.19 | 0.25
MI and PN | 32.7 | 18.8 | 8.2 | 0.16 | 0.22
HF and PN | 35.9 | 21.8 | 5.0 | 0.26 | 0.36
Readmission
MI and HF | 36.6 | 21.0 | 7.5 | 0.22 | 0.28
MI and PN | 34.0 | 19.6 | 8.1 | 0.19 | 0.24
HF and PN | 37.1 | 22.6 | 5.4 | 0.27 | 0.37
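A weighted kappa of the kind reported in Table 4 can be computed as in the sketch below, which assumes linear disagreement weights and quartiles coded 0 to 3; this is a minimal illustration, not the study code.

```python
import numpy as np

def weighted_kappa(q1, q2, k=4):
    """Linearly weighted kappa for agreement between two ordinal ratings,
    here quartiles of performance coded 0..k-1."""
    observed = np.zeros((k, k))
    for a, b in zip(q1, q2):
        observed[a, b] += 1
    observed /= observed.sum()                # observed joint proportions
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))  # chance agreement
    i, j = np.indices((k, k))
    weights = np.abs(i - j) / (k - 1)         # linear disagreement weights
    return 1 - (weights * observed).sum() / (weights * expected).sum()
```

Perfect agreement between the two quartile assignments yields a kappa of 1, while systematic reversal yields negative values.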

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).

Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs

Description | N | MC | RC | P | N | MC | RC | P | N | MC | RC | P
(Column groups, left to right: AMI and HF; AMI and PN; HF and PN.)
  • Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation; PN, pneumonia; r, Pearson correlation coefficient; RC, readmission correlation.

All | 4457 | 0.31 | 0.38 | <0.0001 | 4459 | 0.27 | 0.32 | 0.007 | 4731 | 0.41 | 0.46 | 0.0004
Hospitals with ≥25 patients | 2472 | 0.33 | 0.44 | <0.001 | 2463 | 0.31 | 0.38 | 0.01 | 4104 | 0.42 | 0.47 | 0.001
No. of beds
>600 | 156 | 0.38 | 0.67 | 0.0002 | 156 | 0.43 | 0.50 | 0.48 | 160 | 0.51 | 0.66 | 0.042
300–600 | 626 | 0.29 | 0.54 | <0.0001 | 626 | 0.31 | 0.45 | 0.003 | 630 | 0.49 | 0.58 | 0.033
<300 | 3494 | 0.28 | 0.30 | 0.21 | 3496 | 0.23 | 0.26 | 0.17 | 3733 | 0.37 | 0.43 | 0.003
Ownership
Not‐for‐profit | 2614 | 0.32 | 0.43 | <0.0001 | 2617 | 0.28 | 0.36 | 0.003 | 2697 | 0.42 | 0.50 | 0.0003
For‐profit | 662 | 0.30 | 0.29 | 0.90 | 661 | 0.23 | 0.22 | 0.75 | 699 | 0.40 | 0.40 | 0.99
Government | 1000 | 0.25 | 0.32 | 0.09 | 1000 | 0.22 | 0.29 | 0.09 | 1127 | 0.39 | 0.43 | 0.21
Teaching status
COTH | 276 | 0.31 | 0.54 | 0.001 | 277 | 0.35 | 0.46 | 0.10 | 278 | 0.54 | 0.59 | 0.41
Teaching | 504 | 0.22 | 0.52 | <0.0001 | 504 | 0.28 | 0.42 | 0.012 | 508 | 0.43 | 0.56 | 0.005
Nonteaching | 3496 | 0.29 | 0.32 | 0.18 | 3497 | 0.24 | 0.26 | 0.46 | 3737 | 0.39 | 0.43 | 0.016
Cardiac facility type
CABG | 1465 | 0.33 | 0.48 | <0.0001 | 1467 | 0.30 | 0.37 | 0.018 | 1483 | 0.47 | 0.51 | 0.103
Cath lab | 577 | 0.25 | 0.32 | 0.18 | 577 | 0.26 | 0.37 | 0.046 | 579 | 0.36 | 0.47 | 0.022
Neither | 2234 | 0.26 | 0.28 | 0.48 | 2234 | 0.21 | 0.27 | 0.037 | 2461 | 0.36 | 0.44 | 0.002
Core‐based statistical area
Division | 618 | 0.38 | 0.46 | 0.09 | 620 | 0.34 | 0.40 | 0.18 | 630 | 0.41 | 0.56 | 0.001
Metro | 1833 | 0.26 | 0.38 | <0.0001 | 1832 | 0.26 | 0.30 | 0.21 | 1896 | 0.42 | 0.40 | 0.63
Micro | 787 | 0.24 | 0.32 | 0.08 | 787 | 0.22 | 0.30 | 0.11 | 820 | 0.34 | 0.46 | 0.003
Rural | 1038 | 0.21 | 0.22 | 0.83 | 1039 | 0.13 | 0.21 | 0.056 | 1177 | 0.32 | 0.43 | 0.002
Safety net status
No | 2961 | 0.33 | 0.40 | 0.001 | 2963 | 0.28 | 0.33 | 0.036 | 3062 | 0.41 | 0.48 | 0.001
Yes | 1314 | 0.23 | 0.34 | 0.003 | 1314 | 0.22 | 0.30 | 0.015 | 1460 | 0.40 | 0.45 | 0.14

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked closest together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates, in part, capture broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital‐wide environment, structure, and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that the correlations we found were higher.18, 19 There are several possible explanations for this difference. First, during the intervening 15–25 years since those studies were performed, care for these conditions has evolved substantially, such that there are now more standardized protocols available for all 3 of these diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing correlation of performance among conditions. In addition, there are now more technologies and systems available that span care for multiple conditions, such as electronic medical records and quality committees, than were available in previous generations. Second, one of these studies utilized less robust risk‐adjustment,18 and neither used the same methodology of risk standardization. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes as we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found in these medical conditions.16, 17

Accountable care organizations will be assessed using an all‐condition readmission measure,31 several states track all‐condition readmission rates,32–34 and several countries measure all‐condition mortality.35 An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals might be more homogenous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger volumes, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, as is used for public reporting. It is also possible that the finding is real given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14, 36

The kappa scores comparing quartile of national performance for pairs of conditions were only in the fair range. There are several possible explanations for this fact: 1) outcomes for these 3 conditions are not measuring the same constructs; 2) they are all measuring the same construct, but they are unreliable in doing so; and/or 3) hospitals have similar latent quality for all 3 conditions, but the national quality of performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations may be true.31

Our study has several limitations. First, all 3 conditions currently publicly reported by CMS are medical diagnoses, although AMI patients may be cared for in distinct cardiology units and often undergo procedures; therefore, we cannot determine the degree to which correlations reflect hospital‐wide quality versus medicine‐wide quality. An institution may have a weak medicine department but a strong surgical department or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk‐adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter‐hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted disease‐specific focus on these conditions. We do not have data from non‐publicly reported conditions to test this hypothesis. Fourth, there are many small‐volume hospitals in this study; their estimates for readmission and mortality are less reliable than for large‐volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.

This study lends credence to the hypothesis that 30‐day risk‐standardized mortality and readmission rates for individual conditions may reflect aspects of hospital‐wide quality or at least medicine‐wide quality, although the correlations are not large enough to conclude that hospital‐wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.

Acknowledgements

Disclosures: Dr Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr Krumholz is supported by grant U01 HL105270‐01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz, and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. The analyses upon which this publication is based were performed under Contract Number HHSM‐500‐2008‐0025I Task Order T0001, entitled Measure & Instrument Development and Support (MIDS): Development and Re‐evaluation of the CMS Hospital Outcomes and Efficiency Measures, funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work, and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

References
  1. US Department of Health and Human Services. Hospital Compare. 2011. Available at: http://www.hospitalcompare.hhs.gov. Accessed March 5, 2011.
  2. Balla U, Malnick S, Schattner A. Early readmissions to the department of medicine as a screening tool for monitoring quality of care problems. Medicine (Baltimore). 2008;87(5):294-300.
  3. Dubois RW, Rogers WH, Moxley JH, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317(26):1674-1680.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  5. Jha AK, Orav EJ, Epstein AM. Public reporting of discharge planning and rates of readmissions. N Engl J Med. 2009;361(27):2637-2645.
  6. Patterson ME, Hernandez AF, Hammill BG, et al. Process of care performance measures and long‐term outcomes in patients hospitalized with heart failure. Med Care. 2010;48(3):210-216.
  7. Chukmaitov AS, Bazzoli GJ, Harless DW, Hurley RE, Devers KJ, Zhao M. Variations in inpatient mortality among hospitals in different system types, 1995 to 2000. Med Care. 2009;47(4):466-473.
  8. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta‐analysis of studies comparing mortality rates of private for‐profit and private not‐for‐profit hospitals. Can Med Assoc J. 2002;166(11):1399-1406.
  9. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top‐performing hospitals in acute myocardial infarction mortality rates? A qualitative study. Ann Intern Med. 2011;154(6):384-390.
  10. Hansen LO, Williams MV, Singer SJ. Perceptions of hospital safety climate and incidence of readmission. Health Serv Res. 2011;46(2):596-616.
  11. Longhurst CA, Parast L, Sandborg CI, et al. Decrease in hospital‐wide mortality rate after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2010;126(1):14-21.
  12. Fink A, Yano EM, Brook RH. The condition of the literature on differences in hospital mortality. Med Care. 1989;27(4):315-336.
  13. Gandjour A, Bannenberg A, Lauterbach KW. Threshold volumes associated with higher survival in health care: a systematic review. Med Care. 2003;41(10):1129-1141.
  14. Ross JS, Normand SL, Wang Y, et al. Hospital volume and 30‐day mortality for three common medical conditions. N Engl J Med. 2010;362(12):1110-1118.
  15. Patient Protection and Affordable Care Act. Pub L No. 111-148, 124 Stat, §3025. 2010. Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/content‐detail.html. Accessed July 26, 2012.
  16. Dimick JB, Staiger DO, Birkmeyer JD. Are mortality rates for different operations related? Implications for measuring the quality of noncardiac surgery. Med Care. 2006;44(8):774-778.
  17. Goodney PP, O'Connor GT, Wennberg DE, Birkmeyer JD. Do hospitals with low mortality rates in coronary artery bypass also perform well in valve replacement? Ann Thorac Surg. 2003;76(4):1131-1137.
  18. Chassin MR, Park RE, Lohr KN, Keesey J, Brook RH. Differences among hospitals in Medicare patient mortality. Health Serv Res. 1989;24(1):1-31.
  19. Rosenthal GE, Shah A, Way LE, Harper DL. Variations in standardized hospital mortality rates for six common medical diagnoses: implications for profiling hospital quality. Med Care. 1998;36(7):955-964.
  20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30‐day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  21. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30‐day all‐cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  22. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety‐net hospitals. Health Aff (Millwood). 2007;26(1):238-248.
  23. National Quality Measures Clearinghouse. 2011. Available at: http://www.qualitymeasures.ahrq.gov/. Accessed February 21, 2011.
  24. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  25. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  26. Bratzler DW, Normand SL, Wang Y, et al. An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients. PLoS One. 2011;6(4):e17401.
  27. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30‐day all‐cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252.
  28. Kaiser HF. The application of electronic computers to factor analysis. Educ Psychol Meas. 1960;20:141-151.
  29. Fisher RA. On the 'probable error' of a coefficient of correlation deduced from a small sample. Metron. 1921;1:3-32.
  30. Raghunathan TE, Rosenthal R, Rubin DB. Comparing correlated but nonoverlapping correlations. Psychol Methods. 1996;1(2):178-183.
  31. Centers for Medicare and Medicaid Services. Medicare Shared Savings Program: Accountable Care Organizations, Final Rule. Fed Reg. 2011;76:67802-67990.
  32. Massachusetts Healthcare Quality and Cost Council. Potentially Preventable Readmissions. 2011. Available at: http://www.mass.gov/hqcc/the‐hcqcc‐council/data‐submission‐information/potentially‐preventable‐readmissions‐ppr.html. Accessed February 29, 2012.
  33. Texas Medicaid. Potentially Preventable Readmission (PPR). 2012. Available at: http://www.tmhp.com/Pages/Medicaid/Hospital_PPR.aspx. Accessed February 29, 2012.
  34. New York State. Potentially Preventable Readmissions. 2011. Available at: http://www.health.ny.gov/regulations/recently_adopted/docs/2011-02‐23_potentially_preventable_readmissions.pdf. Accessed February 29, 2012.
  35. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital‐wide mortality rates. N Engl J Med. 2010;363(26):2530-2539.
  36. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628-1638.
Journal of Hospital Medicine - 7(9):690-696

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality‐of‐care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition‐specific process measures and either mortality or readmission rates for those conditions.4-6 Mortality and readmission rates may instead reflect broader hospital‐wide or specialty‐wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,12-14 academic status,12 and other institution‐wide factors.12 There is now a strong policy push towards developing hospital‐wide (all‐condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent among different conditions cared for in the same hospital. If mortality (or readmission) rates across different conditions are highly correlated, it would suggest that hospital‐wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, smaller or nonteaching hospitals might provide more homogeneous care than larger institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals, and separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk‐standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with one or more eligible cases of AMI, HF, and pneumonia annually based on fee‐for‐service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for the conditions over the 3‐year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not‐for‐profit, for‐profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core‐based statistical area (division [subarea of area with urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above their respective state's mean private hospital Medicaid caseload using the 2007 AHA Annual Survey data.
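The safety net definition above combines an ownership rule with a state-level threshold. The sketch below illustrates that logic; all field names ("ownership", "medicaid_caseload", etc.) are illustrative placeholders, not AHA survey variables.

```python
import statistics

def safety_net_hospitals(hospitals):
    """Flag safety net hospitals per the definition above: public hospitals,
    plus private hospitals whose Medicaid caseload exceeds the state's mean
    private-hospital Medicaid caseload by more than one standard deviation."""
    flagged = set()
    private_by_state = {}
    for h in hospitals:
        if h["ownership"] == "public":
            flagged.add(h["id"])
        else:
            private_by_state.setdefault(h["state"], []).append(h)
    for state_privates in private_by_state.values():
        loads = [h["medicaid_caseload"] for h in state_privates]
        mean, sd = statistics.mean(loads), statistics.pstdev(loads)
        for h in state_privates:
            if h["medicaid_caseload"] > mean + sd:
                flagged.add(h["id"])
    return flagged
```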

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are at least 65 years old, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributable to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributable to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For mortality measures, only 1 hospitalization for a patient in a specific year is randomly selected if the patient has multiple hospitalizations in the year. For readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23
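The mortality-cohort rule above (one randomly selected hospitalization per patient per year) can be sketched as follows; the field names ("patient_id", "year") are illustrative, not claims-file variables.

```python
import random
from collections import defaultdict

def select_mortality_index_stays(stays, seed=42):
    """For the mortality measures described above, randomly keep one
    hospitalization per patient per calendar year."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    groups = defaultdict(list)
    for stay in stays:
        groups[(stay["patient_id"], stay["year"])].append(stay)
    return [rng.choice(group) for group in groups.values()]
```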

The derivation and validation of the risk‐standardized outcome measures have been previously reported.20, 21, 23-27 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital‐specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital‐specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30‐day rate.
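The ratio calculation described above reduces to a one-line formula; the numbers in the example are purely illustrative, not taken from the measure data.

```python
def risk_standardized_rate(predicted, expected, overall_rate):
    """Ratio of predicted to expected outcomes, scaled by the
    unadjusted overall 30-day rate, as described above."""
    return (predicted / expected) * overall_rate

# Illustrative numbers only: a hospital whose model-predicted count (30)
# exceeds its expected count under the average hospital effect (25),
# with an unadjusted overall 30-day rate of 15.7%.
rate = risk_standardized_rate(30, 25, 15.7)  # 18.84, i.e., worse than average
```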

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.
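The subgroup comparison of Pearson correlations via Fisher's method can be sketched as below. This is a minimal illustration using the standard normal approximation for two independent samples, not the study code.

```python
import math

def fisher_z(r):
    """Fisher's variance-stabilizing transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_independent_correlations(r1, n1, r2, n2):
    """Two-sided test that Pearson correlations from two independent
    hospital subgroups are equal, based on the difference of
    Fisher z-transformed correlations."""
    z = (fisher_z(r1) - fisher_z(r2)) / math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))  # 2 * standard normal tail
    return z, p

# AMI-HF readmission correlations reported in Table 3: hospitals with
# >600 beds (r = 0.67, n = 156) vs hospitals with <300 beds (r = 0.30, n = 3505).
z, p = compare_independent_correlations(0.67, 156, 0.30, 3505)
```

With the Table 3 inputs shown, the resulting P value is far below 0.0001, consistent with the bed-size comparison reported there.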

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.
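The cited Raghunathan et al. procedure is a closed-form test for comparing correlations measured on the same units. As an illustration of the underlying idea only, and explicitly not the authors' method, the same question (is the readmission correlation for a pair higher than the mortality correlation, within the same hospitals?) can be approached by bootstrap:

```python
import numpy as np

def bootstrap_corr_difference(x1, y1, x2, y2, n_boot=1000, seed=0):
    """Bootstrap confidence interval for the difference between two
    correlations computed on the same hospitals, e.g.
    corr(AMI RSRR, HF RSRR) minus corr(AMI RSMR, HF RSMR).
    Illustrative stand-in for the closed-form Raghunathan et al. test
    cited in the text, not the authors' method."""
    rng = np.random.default_rng(seed)
    n = len(x1)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        i = rng.integers(0, n, n)  # resample hospitals with replacement
        r1 = np.corrcoef(x1[i], y1[i])[0, 1]
        r2 = np.corrcoef(x2[i], y2[i])[0, 1]
        diffs[b] = r1 - r2
    # 95% percentile CI; an interval excluding 0 suggests the two
    # correlations differ.
    return np.percentile(diffs, [2.5, 97.5])
```

Resampling whole hospitals preserves the dependence between the two correlations, which is the feature that makes an ordinary independent-samples comparison inappropriate here.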

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. The majority of hospitals were small and nonteaching, and did not have advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Table 1. Hospital Characteristics for Each Cohort

| Description | Mortality Measures (N = 4559), N (%)* | Readmission Measures (N = 4468), N (%)* |
|---|---|---|
| No. of beds | | |
| >600 | 157 (3.4) | 156 (3.5) |
| 300–600 | 628 (13.8) | 626 (14.0) |
| <300 | 3588 (78.7) | 3505 (78.5) |
| Unknown | 186 (4.08) | 181 (4.1) |
| Mean (SD) | 173.24 (189.52) | 175.23 (190.00) |
| Ownership | | |
| Not-for-profit | 2650 (58.1) | 2619 (58.6) |
| For-profit | 672 (14.7) | 663 (14.8) |
| Government | 1051 (23.1) | 1005 (22.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Teaching status | | |
| COTH | 277 (6.1) | 276 (6.2) |
| Teaching | 505 (11.1) | 503 (11.3) |
| Nonteaching | 3591 (78.8) | 3508 (78.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Cardiac facility type | | |
| CABG | 1471 (32.3) | 1467 (32.8) |
| Cath lab | 578 (12.7) | 578 (12.9) |
| Neither | 2324 (51.0) | 2242 (50.2) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Core-based statistical area | | |
| Division | 621 (13.6) | 618 (13.8) |
| Metro | 1850 (40.6) | 1835 (41.1) |
| Micro | 801 (17.6) | 788 (17.6) |
| Rural | 1101 (24.2) | 1046 (23.4) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Safety net status | | |
| No | 2995 (65.7) | 2967 (66.4) |
| Yes | 1377 (30.2) | 1319 (29.5) |
| Unknown | 187 (4.1) | 182 (4.1) |

Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

For mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13–171), and the greatest was for pneumonia (178; IQR, 87–336). The same pattern held for readmission measures (AMI median, 33; IQR, 9–150; pneumonia median, 191; IQR, 95–352.5). Among mortality measures, AMI had the highest rate and HF the lowest; among readmission measures, HF had the highest rate and pneumonia the lowest (Table 2).

Table 2. Hospital Volume and Risk-Standardized Rates for Each Condition in the Mortality and Readmission Cohorts

| Description | Mortality: AMI | Mortality: HF | Mortality: PN | Readmission: AMI | Readmission: HF | Readmission: PN |
|---|---|---|---|---|---|---|
| Total discharges | 558,653 | 1,094,960 | 1,114,706 | 546,514 | 1,314,394 | 1,152,708 |
| Hospital volume, mean (SD) | 122.54 (172.52) | 240.18 (271.35) | 244.51 (220.74) | 122.32 (201.78) | 294.18 (333.2) | 257.99 (228.5) |
| Hospital volume, median (IQR) | 48 (13–171) | 142 (56–337) | 178 (87–336) | 33 (9–150) | 172.5 (68–407) | 191 (95–352.5) |
| Hospital volume, range (min, max) | 1, 1379 | 1, 2814 | 1, 2241 | 1, 1611 | 1, 3410 | 2, 2359 |
| 30-day risk-standardized rate,* mean (SD) | 15.7 (1.8) | 10.9 (1.6) | 11.5 (1.9) | 19.9 (1.5) | 24.8 (2.1) | 18.5 (1.7) |
| 30-day risk-standardized rate,* median (IQR) | 15.7 (14.5–16.8) | 10.8 (9.9–11.9) | 11.3 (10.2–12.6) | 19.9 (18.9–20.8) | 24.7 (23.4–26.1) | 18.4 (17.3–19.5) |
| 30-day risk-standardized rate,* range (min, max) | 10.3, 24.6 | 6.6, 18.2 | 6.7, 20.9 | 15.2, 26.3 | 17.3, 32.4 | 13.6, 26.7 |

NOTE: Mortality measures, N = 4559 hospitals; readmission measures, N = 4468 hospitals. Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27–0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32–0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).

Table 3. Correlations Between Risk-Standardized Mortality Rates and Between Risk-Standardized Readmission Rates for Subgroups of Hospitals

| Description | Mortality: N | AMI & HF | AMI & PN | HF & PN | Readmission: N | AMI & HF | AMI & PN | HF & PN |
|---|---|---|---|---|---|---|---|---|
| All | 4559 | 0.30 | 0.27 | 0.41 | 4468 | 0.38 | 0.32 | 0.47 |
| Hospitals with ≥25 patients | 2872 | 0.33 | 0.30 | 0.44 | 2467 | 0.44 | 0.38 | 0.51 |
| No. of beds (P) | | 0.15 | 0.005 | 0.0009 | | <0.0001 | <0.0001 | <0.0001 |
| >600 | 157 | 0.38 | 0.43 | 0.51 | 156 | 0.67 | 0.50 | 0.66 |
| 300–600 | 628 | 0.29 | 0.30 | 0.49 | 626 | 0.54 | 0.45 | 0.58 |
| <300 | 3588 | 0.27 | 0.23 | 0.37 | 3505 | 0.30 | 0.26 | 0.44 |
| Ownership (P) | | 0.021 | 0.05 | 0.39 | | 0.0004 | 0.0004 | 0.003 |
| Not-for-profit | 2650 | 0.32 | 0.28 | 0.42 | 2619 | 0.43 | 0.36 | 0.50 |
| For-profit | 672 | 0.30 | 0.23 | 0.40 | 663 | 0.29 | 0.22 | 0.40 |
| Government | 1051 | 0.24 | 0.22 | 0.39 | 1005 | 0.32 | 0.29 | 0.45 |
| Teaching status (P) | | 0.11 | 0.08 | 0.0012 | | <0.0001 | 0.0002 | 0.0003 |
| COTH | 277 | 0.31 | 0.34 | 0.54 | 276 | 0.54 | 0.47 | 0.59 |
| Teaching | 505 | 0.22 | 0.28 | 0.43 | 503 | 0.52 | 0.42 | 0.56 |
| Nonteaching | 3591 | 0.29 | 0.24 | 0.39 | 3508 | 0.32 | 0.26 | 0.44 |
| Cardiac facility type (P) | | 0.022 | 0.006 | <0.0001 | | <0.0001 | 0.0006 | 0.004 |
| CABG | 1471 | 0.33 | 0.29 | 0.47 | 1467 | 0.48 | 0.37 | 0.52 |
| Cath lab | 578 | 0.25 | 0.26 | 0.36 | 578 | 0.32 | 0.37 | 0.47 |
| Neither | 2324 | 0.26 | 0.21 | 0.36 | 2242 | 0.28 | 0.27 | 0.44 |
| Core-based statistical area (P) | | 0.0001 | <0.0001 | 0.002 | | <0.0001 | <0.0001 | <0.0001 |
| Division | 621 | 0.38 | 0.34 | 0.41 | 618 | 0.46 | 0.40 | 0.56 |
| Metro | 1850 | 0.26 | 0.26 | 0.42 | 1835 | 0.38 | 0.30 | 0.40 |
| Micro | 801 | 0.23 | 0.22 | 0.34 | 788 | 0.32 | 0.30 | 0.47 |
| Rural | 1101 | 0.21 | 0.13 | 0.32 | 1046 | 0.22 | 0.21 | 0.44 |
| Safety net status (P) | | 0.001 | 0.027 | 0.68 | | 0.029 | 0.037 | 0.28 |
| No | 2995 | 0.33 | 0.28 | 0.41 | 2967 | 0.40 | 0.33 | 0.48 |
| Yes | 1377 | 0.23 | 0.21 | 0.40 | 1319 | 0.34 | 0.30 | 0.45 |

NOTE: Entries in subgroup rows are Pearson correlation coefficients (r); entries in the category heading rows marked (P) are the minimum P value of pairwise comparisons of correlations within that subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia.

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each analysis, this single common factor accounted for more than half of the total variance based on the cumulative eigenvalue (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the loadings of the RSMR for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) on this factor were high (0.68 for MI, 0.78 for HF, and 0.76 for PN); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
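The eigenvalue-greater-than-1 retention rule and factor loadings can be illustrated with the pairwise mortality correlations reported in the text (AMI–HF 0.30, AMI–PN 0.27, HF–PN 0.41). This is a sketch of the principal component method on the reported correlation matrix, not the authors' factor analysis output; it recovers a first-factor variance share close to the 55% reported.

```python
import numpy as np

# Correlation matrix for the 3 mortality measures, using the pairwise
# correlations reported in the text.
R = np.array([
    [1.00, 0.30, 0.27],
    [0.30, 1.00, 0.41],
    [0.27, 0.41, 1.00],
])

# Principal component extraction: eigendecomposition of the correlation
# matrix, sorted with the largest eigenvalue first.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_retained = int((eigvals > 1).sum())     # minimum-eigenvalue-1 rule
share = eigvals[0] / eigvals.sum()        # variance explained by factor 1

# Loadings of each measure on the first factor; the sign of an
# eigenvector is arbitrary, so normalize loadings to be positive.
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
loadings *= np.sign(loadings.sum())
```

With these inputs only the first eigenvalue exceeds 1, and all three loadings come out in roughly the 0.65 to 0.80 range, consistent with the single-common-factor interpretation in the text.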

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Fewer than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.

Table 4. Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs

| Condition Pair | Same Quartile (Any), % | Same Quartile (Q1 or Q4), % | Q1 in One and Q4 in the Other, % | Weighted Kappa | Spearman Correlation |
|---|---|---|---|---|---|
| Mortality | | | | | |
| MI and HF | 34.8 | 20.2 | 7.9 | 0.19 | 0.25 |
| MI and PN | 32.7 | 18.8 | 8.2 | 0.16 | 0.22 |
| HF and PN | 35.9 | 21.8 | 5.0 | 0.26 | 0.36 |
| Readmission | | | | | |
| MI and HF | 36.6 | 21.0 | 7.5 | 0.22 | 0.28 |
| MI and PN | 34.0 | 19.6 | 8.1 | 0.19 | 0.24 |
| HF and PN | 37.1 | 22.6 | 5.4 | 0.27 | 0.37 |

Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.
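The quartile-agreement statistics in Table 4 can be sketched as follows: assign each hospital a quartile per condition, then compute a weighted kappa on the two quartile assignments. The paper does not state its weighting scheme, so linear weights are assumed here purely for illustration.

```python
import numpy as np
from scipy import stats

def quartiles(x):
    """Quartile of performance (0 = lowest rates ... 3 = highest) by rank."""
    ranks = stats.rankdata(x, method="ordinal")
    return np.minimum((4 * (ranks - 1) // len(x)).astype(int), 3)

def weighted_kappa(a, b, k=4):
    """Weighted kappa for two ratings taking values 0..k-1, using linear
    disagreement weights (an assumption; the paper does not state its
    weighting scheme)."""
    obs = np.zeros((k, k))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= obs.sum()
    # Chance-expected cell proportions from the marginals.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # Linear disagreement weights: 0 on the diagonal, 1 for Q1 vs Q4.
    w = np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    return 1 - (w * obs).sum() / (w * exp).sum()
```

Given two quartile assignments `qa` and `qb`, `stats.spearmanr(qa, qb)` yields the quartile-level Spearman correlation reported alongside kappa in Table 4.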

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).

Table 5. Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs

| Description | AMI & HF: N | MC | RC | P | AMI & PN: N | MC | RC | P | HF & PN: N | MC | RC | P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| All | 4457 | 0.31 | 0.38 | <0.0001 | 4459 | 0.27 | 0.32 | 0.007 | 4731 | 0.41 | 0.46 | 0.0004 |
| Hospitals with ≥25 patients | 2472 | 0.33 | 0.44 | <0.001 | 2463 | 0.31 | 0.38 | 0.01 | 4104 | 0.42 | 0.47 | 0.001 |
| No. of beds | | | | | | | | | | | | |
| >600 | 156 | 0.38 | 0.67 | 0.0002 | 156 | 0.43 | 0.50 | 0.48 | 160 | 0.51 | 0.66 | 0.042 |
| 300–600 | 626 | 0.29 | 0.54 | <0.0001 | 626 | 0.31 | 0.45 | 0.003 | 630 | 0.49 | 0.58 | 0.033 |
| <300 | 3494 | 0.28 | 0.30 | 0.21 | 3496 | 0.23 | 0.26 | 0.17 | 3733 | 0.37 | 0.43 | 0.003 |
| Ownership | | | | | | | | | | | | |
| Not-for-profit | 2614 | 0.32 | 0.43 | <0.0001 | 2617 | 0.28 | 0.36 | 0.003 | 2697 | 0.42 | 0.50 | 0.0003 |
| For-profit | 662 | 0.30 | 0.29 | 0.90 | 661 | 0.23 | 0.22 | 0.75 | 699 | 0.40 | 0.40 | 0.99 |
| Government | 1000 | 0.25 | 0.32 | 0.09 | 1000 | 0.22 | 0.29 | 0.09 | 1127 | 0.39 | 0.43 | 0.21 |
| Teaching status | | | | | | | | | | | | |
| COTH | 276 | 0.31 | 0.54 | 0.001 | 277 | 0.35 | 0.46 | 0.10 | 278 | 0.54 | 0.59 | 0.41 |
| Teaching | 504 | 0.22 | 0.52 | <0.0001 | 504 | 0.28 | 0.42 | 0.012 | 508 | 0.43 | 0.56 | 0.005 |
| Nonteaching | 3496 | 0.29 | 0.32 | 0.18 | 3497 | 0.24 | 0.26 | 0.46 | 3737 | 0.39 | 0.43 | 0.016 |
| Cardiac facility type | | | | | | | | | | | | |
| CABG | 1465 | 0.33 | 0.48 | <0.0001 | 1467 | 0.30 | 0.37 | 0.018 | 1483 | 0.47 | 0.51 | 0.103 |
| Cath lab | 577 | 0.25 | 0.32 | 0.18 | 577 | 0.26 | 0.37 | 0.046 | 579 | 0.36 | 0.47 | 0.022 |
| Neither | 2234 | 0.26 | 0.28 | 0.48 | 2234 | 0.21 | 0.27 | 0.037 | 2461 | 0.36 | 0.44 | 0.002 |
| Core-based statistical area | | | | | | | | | | | | |
| Division | 618 | 0.38 | 0.46 | 0.09 | 620 | 0.34 | 0.40 | 0.18 | 630 | 0.41 | 0.56 | 0.001 |
| Metro | 1833 | 0.26 | 0.38 | <0.0001 | 1832 | 0.26 | 0.30 | 0.21 | 1896 | 0.42 | 0.40 | 0.63 |
| Micro | 787 | 0.24 | 0.32 | 0.08 | 787 | 0.22 | 0.30 | 0.11 | 820 | 0.34 | 0.46 | 0.003 |
| Rural | 1038 | 0.21 | 0.22 | 0.83 | 1039 | 0.13 | 0.21 | 0.056 | 1177 | 0.32 | 0.43 | 0.002 |
| Safety net status | | | | | | | | | | | | |
| No | 2961 | 0.33 | 0.40 | 0.001 | 2963 | 0.28 | 0.33 | 0.036 | 3062 | 0.41 | 0.48 | 0.001 |
| Yes | 1314 | 0.23 | 0.34 | 0.003 | 1314 | 0.22 | 0.30 | 0.015 | 1460 | 0.40 | 0.45 | 0.14 |

Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation; N, number of hospitals; PN, pneumonia; RC, readmission correlation.

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked closest together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates, in part, capture broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital‐wide environment, structure, and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that the correlations we found were higher.18, 19 There are several possible explanations for this difference. First, during the intervening 15–25 years since those studies were performed, care for these conditions has evolved substantially, such that there are now more standardized protocols available for all 3 of these diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing correlation of performance among conditions. In addition, there are now more technologies and systems available that span care for multiple conditions, such as electronic medical records and quality committees, than were available in previous generations. Second, one of these studies utilized less robust risk‐adjustment,18 and neither used the same methodology of risk standardization. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes that we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found in these medical conditions.16, 17

Accountable care organizations will be assessed using an all‐condition readmission measure,31 several states track all‐condition readmission rates,32–34 and several countries measure all‐condition mortality.35 An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals might be more homogenous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger volumes, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, as is used for public reporting. It is also possible that the finding is real given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14, 36

The kappa scores comparing quartile of national performance for pairs of conditions were only in the fair range. There are several possible explanations for this fact: 1) outcomes for these 3 conditions are not measuring the same constructs; 2) they are all measuring the same construct, but they are unreliable in doing so; and/or 3) hospitals have similar latent quality for all 3 conditions, but the national quality of performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations may be true.31

Our study has several limitations. First, all 3 conditions currently publicly reported by CMS are medical diagnoses, although AMI patients may be cared for in distinct cardiology units and often undergo procedures; therefore, we cannot determine the degree to which correlations reflect hospital‐wide quality versus medicine‐wide quality. An institution may have a weak medicine department but a strong surgical department or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk‐adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter‐hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted disease‐specific focus on these conditions. We do not have data from non‐publicly reported conditions to test this hypothesis. Fourth, there are many small‐volume hospitals in this study; their estimates for readmission and mortality are less reliable than for large‐volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.

This study lends credence to the hypothesis that 30‐day risk‐standardized mortality and readmission rates for individual conditions may reflect aspects of hospital‐wide quality or at least medicine‐wide quality, although the correlations are not large enough to conclude that hospital‐wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.

Acknowledgements

Disclosures: Dr Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr Krumholz is supported by grant U01 HL105270‐01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz, and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. The analyses upon which this publication is based were performed under Contract Number HHSM‐500‐2008‐0025I Task Order T0001, entitled Measure & Instrument Development and Support (MIDS): Development and Re‐evaluation of the CMS Hospital Outcomes and Efficiency Measures, funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work, and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality‐of‐care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition‐specific process measures and either mortality or readmission rates for those conditions.4–6 Mortality and readmission rates may instead reflect broader hospital‐wide or specialty‐wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,12–14 academic status,12 and other institution‐wide factors.12 There is now a strong policy push towards developing hospital‐wide (all‐condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent among different conditions cared for in the same hospital. If mortality (or readmission) rates across different conditions are highly correlated, it would suggest that hospital‐wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, smaller or nonteaching hospitals might provide more homogeneous care than larger institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals, and separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk‐standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with one or more eligible cases of AMI, HF, or pneumonia annually based on fee‐for‐service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for the conditions over the 3‐year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not‐for‐profit, for‐profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core‐based statistical area (division [subarea of area with urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above their respective state's mean private hospital Medicaid caseload using the 2007 AHA Annual Survey data.
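The safety net definition above (public hospital, or private hospital with Medicaid caseload more than one standard deviation above the state's mean private-hospital caseload) can be sketched in a few lines. The rows and column names below are hypothetical illustrations, not actual AHA survey fields.

```python
import pandas as pd

# Hypothetical AHA-style survey rows; values and field names are
# illustrative only.
hospitals = pd.DataFrame({
    "hospital_id": [1, 2, 3, 4, 5, 6],
    "state": ["CT", "CT", "CT", "NY", "NY", "NY"],
    "ownership": ["public", "private", "private",
                  "private", "private", "public"],
    "medicaid_share": [0.40, 0.15, 0.10, 0.35, 0.12, 0.20],
})

# Threshold per state: mean + 1 SD of *private* hospitals' Medicaid caseload.
private = hospitals[hospitals["ownership"] == "private"]
cutoff = (private.groupby("state")["medicaid_share"]
          .agg(lambda s: s.mean() + s.std()))

# Safety net if public, or private with Medicaid share above the state cutoff.
hospitals["safety_net"] = (
    (hospitals["ownership"] == "public")
    | (hospitals["medicaid_share"] > hospitals["state"].map(cutoff))
)
```

Computing the cutoff only over private hospitals, then applying it by state, mirrors the wording of the definition in the text.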

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are 65 years of age or older, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributable to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributable to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For mortality measures, only 1 hospitalization for a patient in a specific year is randomly selected if the patient has multiple hospitalizations in the year. For readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.
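The index-admission rules for the readmission measures can be sketched as a walk over a patient's admissions in date order. This is a deliberately simplified illustration: it uses admission dates rather than discharge dates for the 30-day window and ignores the other eligibility criteria (age, FFS enrollment, qualifying diagnosis, leaving against medical advice), none of which are specified in code form in the source.

```python
from datetime import date

def select_index_admissions(admissions):
    """Given a patient's admissions as (admit_date, died_in_hospital)
    tuples, return the admissions that count as index admissions for the
    readmission measures: in-hospital deaths are excluded, and an
    admission within 30 days of the previous index admission is a
    potential readmission, not a new index admission.
    Simplified sketch; the real measure dates the window from discharge."""
    indexes = []
    last_index = None
    for admit, died in sorted(admissions):
        within_30 = last_index is not None and (admit - last_index).days <= 30
        if not died and not within_30:
            indexes.append(admit)
            last_index = admit
    return indexes
```

For example, an admission 19 days after an index admission is a candidate readmission rather than a new index admission, while one 91 days later starts a new index episode.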

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23

The derivation and validation of the risk‐standardized outcome measures have been previously reported.20, 21, 2327 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital‐specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital‐specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30‐day rate.

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. The majority of hospitals was small, nonteaching, and did not have advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Hospital Characteristics for Each Cohort
DescriptionMortality MeasuresReadmission Measures
 Hospital N = 4559Hospital N = 4468
 N (%)*N (%)*
  • Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

No. of beds  
>600157 (3.4)156 (3.5)
300600628 (13.8)626 (14.0)
<3003588 (78.7)3505 (78.5)
Unknown186 (4.08)181 (4.1)
Mean (SD)173.24 (189.52)175.23 (190.00)
Ownership  
Not‐for‐profit2650 (58.1)2619 (58.6)
For‐profit672 (14.7)663 (14.8)
Government1051 (23.1)1005 (22.5)
Unknown186 (4.1)181 (4.1)
Teaching status  
COTH277 (6.1)276 (6.2)
Teaching505 (11.1)503 (11.3)
Nonteaching3591 (78.8)3508 (78.5)
Unknown186 (4.1)181 (4.1)
Cardiac facility type  
CABG1471 (32.3)1467 (32.8)
Cath lab578 (12.7)578 (12.9)
Neither2324 (51.0)2242 (50.2)
Unknown186 (4.1)181 (4.1)
Core‐based statistical area  
Division621 (13.6)618 (13.8)
Metro1850 (40.6)1835 (41.1)
Micro801 (17.6)788 (17.6)
Rural1101 (24.2)1046 (23.4)
Unknown186 (4.1)181 (4.1)
Safety net status  
No2995 (65.7)2967 (66.4)
Yes1377 (30.2)1319 (29.5)
Unknown187 (4.1)182 (4.1)

For mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13,171), and the greatest number was for pneumonia (178; IQR, 87, 336). The same pattern held for readmission measures (AMI median 33; IQR; 9, 150; pneumonia median 191; IQR, 95, 352.5). With respect to mortality measures, AMI had the highest rate and HF the lowest rate; however, for readmission measures, HF had the highest rate and pneumonia the lowest rate (Table 2).

Table 2. Hospital Volume and Risk-Standardized Rates for Each Condition in the Mortality and Readmission Cohorts

Mortality Measures (N = 4559)
                                  AMI                HF                 PN
Total discharges                  558,653            1,094,960          1,114,706
Hospital volume
  Mean (SD)                       122.54 (172.52)    240.18 (271.35)    244.51 (220.74)
  Median (IQR)                    48 (13, 171)       142 (56, 337)      178 (87, 336)
  Range (min, max)                1, 1379            1, 2814            1, 2241
30-day risk-standardized rate*
  Mean (SD)                       15.7 (1.8)         10.9 (1.6)         11.5 (1.9)
  Median (IQR)                    15.7 (14.5, 16.8)  10.8 (9.9, 11.9)   11.3 (10.2, 12.6)
  Range (min, max)                10.3, 24.6         6.6, 18.2          6.7, 20.9

Readmission Measures (N = 4468)
                                  AMI                HF                 PN
Total discharges                  546,514            1,314,394          1,152,708
Hospital volume
  Mean (SD)                       122.32 (201.78)    294.18 (333.2)     257.99 (228.5)
  Median (IQR)                    33 (9, 150)        172.5 (68, 407)    191 (95, 352.5)
  Range (min, max)                1, 1611            1, 3410            2, 2359
30-day risk-standardized rate*
  Mean (SD)                       19.9 (1.5)         24.8 (2.1)         18.5 (1.7)
  Median (IQR)                    19.9 (18.9, 20.8)  24.7 (23.4, 26.1)  18.4 (17.3, 19.5)
  Range (min, max)                15.2, 26.3         17.3, 32.4         13.6, 26.7

Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27–0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32–0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).
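These pairwise comparisons are ordinary Pearson correlations computed across hospitals. A minimal sketch of the computation (the rates below are hypothetical, not study data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical hospital-level risk-standardized rates (%), one value per hospital
hf_rsmr = [10.2, 11.5, 9.8, 12.0, 10.9, 11.1]
pn_rsmr = [11.0, 12.8, 10.5, 12.2, 11.4, 13.0]
r = pearson_r(hf_rsmr, pn_rsmr)
```

In the study itself each input is a hospital's risk-standardized rate from the hierarchical models, so the correlation summarizes how hospitals' performance on one condition tracks performance on another.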

Table 3. Correlations Between Risk-Standardized Mortality Rates and Between Risk-Standardized Readmission Rates for Subgroups of Hospitals

                                 Mortality Measures (r)               Readmission Measures (r)
Description                      N     AMI&HF  AMI&PN  HF&PN          N     AMI&HF   AMI&PN   HF&PN
All                              4559  0.30    0.27    0.41           4468  0.38     0.32     0.47
Hospitals with ≥25 patients      2872  0.33    0.30    0.44           2467  0.44     0.38     0.51
No. of beds (P)                        0.15    0.005   0.0009               <0.0001  <0.0001  <0.0001
  >600                           157   0.38    0.43    0.51           156   0.67     0.50     0.66
  300–600                        628   0.29    0.30    0.49           626   0.54     0.45     0.58
  <300                           3588  0.27    0.23    0.37           3505  0.30     0.26     0.44
Ownership (P)                          0.021   0.05    0.39                 0.0004   0.0004   0.003
  Not-for-profit                 2650  0.32    0.28    0.42           2619  0.43     0.36     0.50
  For-profit                     672   0.30    0.23    0.40           663   0.29     0.22     0.40
  Government                     1051  0.24    0.22    0.39           1005  0.32     0.29     0.45
Teaching status (P)                    0.11    0.08    0.0012               <0.0001  0.0002   0.0003
  COTH                           277   0.31    0.34    0.54           276   0.54     0.47     0.59
  Teaching                       505   0.22    0.28    0.43           503   0.52     0.42     0.56
  Nonteaching                    3591  0.29    0.24    0.39           3508  0.32     0.26     0.44
Cardiac facility type (P)              0.022   0.006   <0.0001              <0.0001  0.0006   0.004
  CABG                           1471  0.33    0.29    0.47           1467  0.48     0.37     0.52
  Cath lab                       578   0.25    0.26    0.36           578   0.32     0.37     0.47
  Neither                        2324  0.26    0.21    0.36           2242  0.28     0.27     0.44
Core-based statistical area (P)        0.0001  <0.0001 0.002                <0.0001  <0.0001  <0.0001
  Division                       621   0.38    0.34    0.41           618   0.46     0.40     0.56
  Metro                          1850  0.26    0.26    0.42           1835  0.38     0.30     0.40
  Micro                          801   0.23    0.22    0.34           788   0.32     0.30     0.47
  Rural                          1101  0.21    0.13    0.32           1046  0.22     0.21     0.44
Safety net status (P)                  0.001   0.027   0.68                 0.029    0.037    0.28
  No                             2995  0.33    0.28    0.41           2967  0.40     0.33     0.48
  Yes                            1377  0.23    0.21    0.40           1319  0.34     0.30     0.45

NOTE: P values shown on subgroup header rows are the minimum P value of pairwise comparisons within each subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia; r, Pearson correlation coefficient.

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each analysis, this single common factor accounted for more than half of the total variance (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the factor loadings of the RSMRs for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) were all high (0.68 for MI, 0.78 for HF, and 0.76 for PN); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
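The retention rule here is the Kaiser criterion: keep factors whose eigenvalue exceeds 1 in the eigendecomposition of the measures' correlation matrix. The sketch below is not the authors' SAS factor analysis; it simply populates a 3x3 correlation matrix with the pairwise mortality correlations reported above (0.30, 0.27, 0.41) and extracts the dominant eigenvalue by power iteration, which recovers a first-factor variance share close to the reported 55%:

```python
import math

# Correlation matrix of the three RSMRs (AMI, HF, PN), built from the
# pairwise correlations reported in the text: 0.30, 0.27, 0.41
R = [[1.00, 0.30, 0.27],
     [0.30, 1.00, 0.41],
     [0.27, 0.41, 1.00]]

def largest_eigenvalue(m, iters=200):
    """Power iteration for the dominant eigenvalue of a symmetric matrix."""
    v = [1.0] * len(m)
    lam = 0.0
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m))]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
        # Rayleigh quotient with the unit vector v
        lam = sum(v[i] * sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m)))
    return lam

lam1 = largest_eigenvalue(R)
variance_share = lam1 / 3.0        # trace of a 3x3 correlation matrix is 3
kaiser_retains_first = lam1 > 1.0  # Kaiser criterion: eigenvalue > 1
```

Because the remaining two eigenvalues of this matrix fall below 1, only the single common factor is retained, mirroring the result reported in the text.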

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Less than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.
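The quartile-agreement statistics amount to ranking hospitals on each measure, cutting the rankings into quartiles, and tabulating concordance. A simplified sketch (percent agreement only; the paper additionally reports weighted kappa and Spearman correlations, which this helper does not compute):

```python
def quartiles(values):
    """Assign each value a quartile index 0-3 by its rank within the list."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(values)
    q = [0] * n
    for rank, i in enumerate(order):
        q[i] = min(3, rank * 4 // n)  # clamp so the top rank stays in quartile 3
    return q

def agreement(rates_a, rates_b):
    """Fraction of hospitals falling in the same quartile on both measures."""
    qa, qb = quartiles(rates_a), quartiles(rates_b)
    return sum(x == y for x, y in zip(qa, qb)) / len(qa)
```

Applied to two lists of hospital rates for a condition pair, `agreement` corresponds to the "Same Quartile (Any)" column of Table 4.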

Table 4. Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs

Condition Pair   Same Quartile,   Same Quartile,   Q1 in One and     Weighted   Spearman
                 Any (%)          Q1 or Q4 (%)     Q4 in Another (%) Kappa      Correlation
Mortality
  MI and HF      34.8             20.2             7.9               0.19       0.25
  MI and PN      32.7             18.8             8.2               0.16       0.22
  HF and PN      35.9             21.8             5.0               0.26       0.36
Readmission
  MI and HF      36.6             21.0             7.5               0.22       0.28
  MI and PN      34.0             19.6             8.1               0.19       0.24
  HF and PN      37.1             22.6             5.4               0.27       0.37

Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).
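Testing whether a readmission correlation exceeds a mortality correlation measured on largely the same hospitals requires the Raghunathan et al. method for correlated but nonoverlapping correlations, which the authors cite. As a rough approximation only, the sketch below applies a Fisher z-test that treats the two correlations as if they came from independent samples; the actual method additionally accounts for the covariance between the two correlations, so this is an illustration, not the paper's test:

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def compare_correlations(r1, n1, r2, n2):
    """Two-sided z-test for r1 != r2, assuming independent samples
    (a simplification of the Raghunathan et al. dependent-correlation test)."""
    z = (fisher_z(r1) - fisher_z(r2)) / math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    p = math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))
    return z, p
```

For example, feeding in the large-hospital AMI-HF readmission and mortality correlations (0.67 vs 0.38, n = 156) yields a clearly significant difference even under this cruder independent-samples assumption.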

Table 5. Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs

                              AMI and HF                  AMI and PN                  HF and PN
Description                   N     MC    RC    P         N     MC    RC    P         N     MC    RC    P
All                           4457  0.31  0.38  <0.0001   4459  0.27  0.32  0.007     4731  0.41  0.46  0.0004
Hospitals with ≥25 patients   2472  0.33  0.44  <0.001    2463  0.31  0.38  0.01      4104  0.42  0.47  0.001
No. of beds
  >600                        156   0.38  0.67  0.0002    156   0.43  0.50  0.48      160   0.51  0.66  0.042
  300–600                     626   0.29  0.54  <0.0001   626   0.31  0.45  0.003     630   0.49  0.58  0.033
  <300                        3494  0.28  0.30  0.21      3496  0.23  0.26  0.17      3733  0.37  0.43  0.003
Ownership
  Not-for-profit              2614  0.32  0.43  <0.0001   2617  0.28  0.36  0.003     2697  0.42  0.50  0.0003
  For-profit                  662   0.30  0.29  0.90      661   0.23  0.22  0.75      699   0.40  0.40  0.99
  Government                  1000  0.25  0.32  0.09      1000  0.22  0.29  0.09      1127  0.39  0.43  0.21
Teaching status
  COTH                        276   0.31  0.54  0.001     277   0.35  0.46  0.10      278   0.54  0.59  0.41
  Teaching                    504   0.22  0.52  <0.0001   504   0.28  0.42  0.012     508   0.43  0.56  0.005
  Nonteaching                 3496  0.29  0.32  0.18      3497  0.24  0.26  0.46      3737  0.39  0.43  0.016
Cardiac facility type
  CABG                        1465  0.33  0.48  <0.0001   1467  0.30  0.37  0.018     1483  0.47  0.51  0.103
  Cath lab                    577   0.25  0.32  0.18      577   0.26  0.37  0.046     579   0.36  0.47  0.022
  Neither                     2234  0.26  0.28  0.48      2234  0.21  0.27  0.037     2461  0.36  0.44  0.002
Core-based statistical area
  Division                    618   0.38  0.46  0.09      620   0.34  0.40  0.18      630   0.41  0.56  0.001
  Metro                       1833  0.26  0.38  <0.0001   1832  0.26  0.30  0.21      1896  0.42  0.40  0.63
  Micro                       787   0.24  0.32  0.08      787   0.22  0.30  0.11      820   0.34  0.46  0.003
  Rural                       1038  0.21  0.22  0.83      1039  0.13  0.21  0.056     1177  0.32  0.43  0.002
Safety net status
  No                          2961  0.33  0.40  0.001     2963  0.28  0.33  0.036     3062  0.41  0.48  0.001
  Yes                         1314  0.23  0.34  0.003     1314  0.22  0.30  0.015     1460  0.40  0.45  0.14

Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation; N, number of hospitals; PN, pneumonia; RC, readmission correlation.

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked closest together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates, in part, capture broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital‐wide environment, structure, and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that the correlations we found were higher.18, 19 There are several possible explanations for this difference. First, in the 15 to 25 years since those studies were performed, care for these conditions has evolved substantially, such that there are now more standardized protocols available for all 3 of these diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing the correlation of performance among conditions. In addition, more technologies and systems that span care for multiple conditions, such as electronic medical records and quality committees, are available now than in previous generations. Second, one of these studies utilized less robust risk adjustment,18 and neither used the same methodology of risk standardization. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes as we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found in these medical conditions.16, 17

Accountable care organizations will be assessed using an all‐condition readmission measure,31 several states track all‐condition readmission rates,32–34 and several countries measure all‐condition mortality.35 An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals might be more homogenous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger volumes, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, the threshold used for public reporting. It is also possible that the finding is real given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14, 36

The kappa scores comparing quartile of national performance for pairs of conditions were only in the fair range. There are several possible explanations for this fact: 1) outcomes for these 3 conditions are not measuring the same constructs; 2) they are all measuring the same construct, but they are unreliable in doing so; and/or 3) hospitals have similar latent quality for all 3 conditions, but the national quality of performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations may be true.31

Our study has several limitations. First, all 3 conditions currently publicly reported by CMS are medical diagnoses, although AMI patients may be cared for in distinct cardiology units and often undergo procedures; therefore, we cannot determine the degree to which correlations reflect hospital‐wide quality versus medicine‐wide quality. An institution may have a weak medicine department but a strong surgical department or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk‐adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter‐hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted disease‐specific focus on these conditions. We do not have data from non‐publicly reported conditions to test this hypothesis. Fourth, there are many small‐volume hospitals in this study; their estimates for readmission and mortality are less reliable than for large‐volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.

This study lends credence to the hypothesis that 30‐day risk‐standardized mortality and readmission rates for individual conditions may reflect aspects of hospital‐wide quality or at least medicine‐wide quality, although the correlations are not large enough to conclude that hospital‐wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.

Acknowledgements

Disclosures: Dr Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr Krumholz is supported by grant U01 HL105270‐01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz, and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. The analyses upon which this publication is based were performed under Contract Number HHSM‐500‐2008‐0025I, Task Order T0001, entitled Measure & Instrument Development and Support (MIDS): Development and Re‐evaluation of the CMS Hospital Outcomes and Efficiency Measures, funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work, and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

References
  1. US Department of Health and Human Services. Hospital Compare. 2011. Available at: http://www.hospitalcompare.hhs.gov. Accessed March 5, 2011.
  2. Balla U, Malnick S, Schattner A. Early readmissions to the department of medicine as a screening tool for monitoring quality of care problems. Medicine (Baltimore). 2008;87(5):294–300.
  3. Dubois RW, Rogers WH, Moxley JH, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317(26):1674–1680.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694–2702.
  5. Jha AK, Orav EJ, Epstein AM. Public reporting of discharge planning and rates of readmissions. N Engl J Med. 2009;361(27):2637–2645.
  6. Patterson ME, Hernandez AF, Hammill BG, et al. Process of care performance measures and long-term outcomes in patients hospitalized with heart failure. Med Care. 2010;48(3):210–216.
  7. Chukmaitov AS, Bazzoli GJ, Harless DW, Hurley RE, Devers KJ, Zhao M. Variations in inpatient mortality among hospitals in different system types, 1995 to 2000. Med Care. 2009;47(4):466–473.
  8. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. Can Med Assoc J. 2002;166(11):1399–1406.
  9. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top-performing hospitals in acute myocardial infarction mortality rates? A qualitative study. Ann Intern Med. 2011;154(6):384–390.
  10. Hansen LO, Williams MV, Singer SJ. Perceptions of hospital safety climate and incidence of readmission. Health Serv Res. 2011;46(2):596–616.
  11. Longhurst CA, Parast L, Sandborg CI, et al. Decrease in hospital-wide mortality rate after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2010;126(1):14–21.
  12. Fink A, Yano EM, Brook RH. The condition of the literature on differences in hospital mortality. Med Care. 1989;27(4):315–336.
  13. Gandjour A, Bannenberg A, Lauterbach KW. Threshold volumes associated with higher survival in health care: a systematic review. Med Care. 2003;41(10):1129–1141.
  14. Ross JS, Normand SL, Wang Y, et al. Hospital volume and 30-day mortality for three common medical conditions. N Engl J Med. 2010;362(12):1110–1118.
  15. Patient Protection and Affordable Care Act. Pub. L. No. 111-148, 124 Stat, §3025. 2010. Available at: http://www.gpo.gov/fdsys/pkg/PLAW-111publ148/content-detail.html. Accessed July 26, 2012.
  16. Dimick JB, Staiger DO, Birkmeyer JD. Are mortality rates for different operations related? Implications for measuring the quality of noncardiac surgery. Med Care. 2006;44(8):774–778.
  17. Goodney PP, O'Connor GT, Wennberg DE, Birkmeyer JD. Do hospitals with low mortality rates in coronary artery bypass also perform well in valve replacement? Ann Thorac Surg. 2003;76(4):1131–1137.
  18. Chassin MR, Park RE, Lohr KN, Keesey J, Brook RH. Differences among hospitals in Medicare patient mortality. Health Serv Res. 1989;24(1):1–31.
  19. Rosenthal GE, Shah A, Way LE, Harper DL. Variations in standardized hospital mortality rates for six common medical diagnoses: implications for profiling hospital quality. Med Care. 1998;36(7):955–964.
  20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142–150.
  21. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29–37.
  22. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety-net hospitals. Health Aff (Millwood). 2007;26(1):238–248.
  23. National Quality Measures Clearinghouse. 2011. Available at: http://www.qualitymeasures.ahrq.gov/. Accessed February 21, 2011.
  24. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683–1692.
  25. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693–1701.
  26. Bratzler DW, Normand SL, Wang Y, et al. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients. PLoS One. 2011;6(4):e17401.
  27. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243–252.
  28. Kaiser HF. The application of electronic computers to factor analysis. Educ Psychol Meas. 1960;20:141–151.
  29. Fisher RA. On the 'probable error' of a coefficient of correlation deduced from a small sample. Metron. 1921;1:3–32.
  30. Raghunathan TE, Rosenthal R, Rubin DB. Comparing correlated but nonoverlapping correlations. Psychol Methods. 1996;1(2):178–183.
  31. Centers for Medicare and Medicaid Services. Medicare Shared Savings Program: Accountable Care Organizations, Final Rule. Fed Reg. 2011;76:67802–67990.
  32. Massachusetts Healthcare Quality and Cost Council. Potentially Preventable Readmissions. 2011. Available at: http://www.mass.gov/hqcc/the-hcqcc-council/data-submission-information/potentially-preventable-readmissions-ppr.html. Accessed February 29, 2012.
  33. Texas Medicaid. Potentially Preventable Readmission (PPR). 2012. Available at: http://www.tmhp.com/Pages/Medicaid/Hospital_PPR.aspx. Accessed February 29, 2012.
  34. New York State. Potentially Preventable Readmissions. 2011. Available at: http://www.health.ny.gov/regulations/recently_adopted/docs/2011-02-23_potentially_preventable_readmissions.pdf. Accessed February 29, 2012.
  35. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital-wide mortality rates. N Engl J Med. 2010;363(26):2530–2539.
  36. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628–1638.
Issue
Journal of Hospital Medicine - 7(9)
Page Number
690-696
Display Headline
Correlations among risk‐standardized mortality rates and among risk‐standardized readmission rates within hospitals

Copyright © 2012 Society of Hospital Medicine

Correspondence Location
Section of General Internal Medicine, Department of Medicine, Yale University School of Medicine, PO Box 208093, New Haven, CT 06520‐8093

Readmission and Mortality Rates in Pneumonia

Display Headline
The performance of US hospitals as reflected in risk‐standardized 30‐day mortality and readmission rates for medicare beneficiaries with pneumonia

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, the number of patients hospitalized for pneumonia grew by 20% between the periods 1988 to 1990 and 2000 to 2002, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence) can provide insights about quality and help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7–9

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission rates for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.
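The cohort definition above amounts to an age check plus a pattern match on the principal‐diagnosis code. The sketch below is purely illustrative (the helper names are ours, not part of the CMS measure specification), but it captures the stated inclusion rule:

```python
import re

# ICD-9-CM principal-diagnosis codes defining the pneumonia cohort,
# per the measure definition: 480.X, 481, 482.XX, 483.X, 485, 486, 487.0
PNEUMONIA_PATTERN = re.compile(
    r"^(480\.\d|481|482\.\d\d|483\.\d|485|486|487\.0)$"
)

def is_pneumonia_principal_dx(code: str) -> bool:
    """True if an ICD-9-CM code matches the pneumonia cohort definition."""
    return bool(PNEUMONIA_PATTERN.match(code.strip()))

def eligible(age: int, principal_dx: str) -> bool:
    """Basic eligibility: age 65 or older with a qualifying principal diagnosis."""
    return age >= 65 and is_pneumonia_principal_dx(principal_dx)
```

The exclusions described below (hospice enrollment, secondary‐diagnosis pneumonia, discharge against medical advice, very short stays) would be layered on top of this basic filter.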

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13
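Conceptually, the grouper is a many‐to‐one lookup from diagnosis codes to condition categories. A toy sketch follows; the code‐to‐category pairings shown are illustrative placeholders of our own, not the actual HCC mapping:

```python
# Toy fragment of an HCC-style grouper: the real grouper collapses more
# than 15,000 ICD-9 codes into 189 clinically coherent condition categories.
CONDITION_CATEGORY = {
    "428.0": "Congestive heart failure",               # illustrative pairing
    "250.00": "Diabetes without complication",         # illustrative pairing
    "496": "Chronic obstructive pulmonary disease",    # illustrative pairing
}

def comorbidity_profile(prior_year_dx_codes):
    """Collapse a patient's prior-year diagnosis codes into the set of
    condition categories used for risk adjustment; unmapped codes are
    simply ignored."""
    return {CONDITION_CATEGORY[c] for c in prior_year_dx_codes
            if c in CONDITION_CATEGORY}
```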

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).
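The rule that a readmission is never itself treated as an index stay can be sketched as a single pass over a patient's admissions in date order. This is a hypothetical simplification (it ignores transfers and the random selection of one index stay for the mortality measure):

```python
from datetime import date
from typing import List, Tuple

def classify_admissions(stays: List[Tuple[date, date]]) -> List[str]:
    """Label each (admit, discharge) stay, sorted by admit date, as an
    'index' hospitalization or a 'readmission'.  An admission within
    30 days of the most recent index discharge counts as a readmission
    and does not itself become a new index stay."""
    labels = []
    last_index_discharge = None
    for admit, discharge in stays:
        if (last_index_discharge is not None
                and (admit - last_index_discharge).days <= 30):
            labels.append("readmission")
        else:
            labels.append("index")
            last_index_discharge = discharge
    return labels
```

For example, admissions discharged January 5, readmitted January 20, and admitted again April 1 would be labeled index, readmission, index.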

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.
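Concretely, the indirect‐standardization step divides the model's predicted number of events (using the hospital's own estimated intercept) by the expected number (using the average intercept), then scales by the national unadjusted rate. A minimal sketch, with illustrative numbers of our own choosing:

```python
def risk_standardized_rate(predicted: float, expected: float,
                           national_rate: float) -> float:
    """Indirect standardization: the predicted-to-expected ratio scaled
    by the national unadjusted rate (in percent)."""
    return (predicted / expected) * national_rate

# Illustration: a hospital predicted to have 22 deaths where an average
# hospital would be expected to have 20, with a national rate of 11.6%,
# gets a risk-standardized rate of (22 / 20) * 11.6 = 12.76%.
```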

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically, we identified the hospital referral region of each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. The 306 hospital referral regions represent regional markets for tertiary care and are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes; they provide a more detailed look at variation in outcomes than state‐level results.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18
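The inverse‐variance weighting described above is a precision‐weighted mean: each hospital contributes in proportion to how precisely its rate is estimated. A minimal sketch (function name ours):

```python
def inverse_variance_mean(rates, variances):
    """Weighted mean of hospital rates, weighting each hospital by the
    inverse of the variance of its estimated rate, so hospitals with
    larger caseloads (more precise estimates) count more."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)
```

A large hospital with variance 1.0 therefore pulls the regional average four times as hard as a small hospital with variance 4.0.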

To determine whether a hospital's performance is significantly different from the national rate, we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period are excluded from this categorization on Hospital Compare.
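The interval‐overlap rule maps directly to a simple decision function; this is a sketch of the logic as described, not the Hospital Compare production code:

```python
def categorize(interval_low: float, interval_high: float,
               national_rate: float, n_cases: int) -> str:
    """Hospital Compare-style categorization: compare the 95% interval
    estimate for a hospital's risk-standardized rate with the national
    crude rate; hospitals with fewer than 25 cases are not categorized."""
    if n_cases < 25:
        return "not categorized"
    if interval_high < national_rate:       # whole interval below national rate
        return "better than the US national rate"
    if interval_low > national_rate:        # whole interval above national rate
        return "worse than the US national rate"
    return "no different than the US national rate"
```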

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis, 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and rates ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0%, while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital one standard deviation above the national average were 1.68 times higher than those of a patient treated at a hospital one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Table 1. Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates

                                             Mortality       Readmission
Patients (n)                                 1,118,583       1,161,817
Hospitals (n)                                4,788           4,813
Patient age, years, median (Q1, Q3)          81 (74, 86)     80 (74, 86)
Nonwhite, %                                  11.1            11.1
Hospital case volume, median (Q1, Q3)        168 (77, 323)   174 (79, 334)
Risk‐standardized hospital rate, mean (SD)   11.2 (1.2)      18.3 (0.9)
  Minimum                                    6.7             13.6
  1st percentile                             7.5             14.9
  5th percentile                             8.5             15.8
  10th percentile                            9.0             16.4
  25th percentile                            10.0            17.2
  Median                                     11.1            18.2
  75th percentile                            12.3            19.2
  90th percentile                            13.5            20.4
  95th percentile                            14.4            21.1
  99th percentile                            16.1            22.8
  Maximum                                    20.9            26.7
Model fit statistics
  c‐Statistic                                0.72            0.63
  Intrahospital correlation                  0.07            0.03

Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis, 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%), and rates ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4%, while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital one standard deviation above the national average were 1.40 times higher than the odds for a patient treated at a hospital one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse and 333 (7.0%) had less than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
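The coefficient of variation here is simply 100 × SD / mean, which is why the mortality figures (SD 1.2 on a mean of 11.2) give roughly 10.7 and the readmission figures (SD 0.9 on a mean of 18.3) give roughly 4.9. A one‐function sketch:

```python
from statistics import mean, stdev

def coefficient_of_variation(values) -> float:
    """CV as a percentage: 100 * sample SD / mean, a scale-free measure
    of dispersion that allows comparison across different means."""
    return 100.0 * stdev(values) / mean(values)
```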

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. The highest mortality regions were found across the entire country, including parts of Northern New England, the Mid‐ and South Atlantic, the East and West South Central, the East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid‐ and South Atlantic, the East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSRR, risk‐standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients only, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally our mortality and readmission models were validated against those based on medical record data and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that this is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with 1 day length of stay from the mortality analysis to reduce the risk of including patients in the measure who did not actually have pneumonia, we did not exclude them from the readmission analysis because very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions. 
However, if the experience in acute myocardial infarction can serve as a guide, then it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically concentrated than mortality rates; it is possible that this is related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.28‐30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital-based Care in the United States, 2007. Available at: http://www.hcup-us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP). 2007. Available at: http://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988-2002. JAMA. 2005;294(21):2712-2719.
  4. Heron M. Deaths: leading causes for 2006. Natl Vital Stat Rep. 2010;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia. Available at: http://www.qualitynet.org. Accessed June 2010.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182-189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  8. Medicare.gov - Hospital Compare. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default. Accessed June 2010.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk-Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30-Day Mortality Model for Pneumonia. New Haven, CT: Yale University; 2006. Available at: http://www.qualitynet.org. Accessed June 2010.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670-1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals - the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371-404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153-157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Issue
Journal of Hospital Medicine - 5(6)
Page Number
E12-E18
Legacy Keywords
community‐acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient


Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0% while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital that was one standard deviation above the national average was 1.68 times higher than that of a patient treated at a hospital that was one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates
 MortalityReadmission
  • Abbreviation: SD, standard deviation.

Patients (n)11185831161817
Hospitals (n)47884813
Patient age, years, median (Q1, Q3)81 (74,86)80 (74,86)
Nonwhite, %11.111.1
Hospital case volume, median (Q1, Q3)168 (77,323)174 (79,334)
Risk‐standardized hospital rate, mean (SD)11.2 (1.2)18.3 (0.9)
Minimum6.713.6
1st percentile7.514.9
5th percentile8.515.8
10th percentile9.016.4
25th percentile10.017.2
Median11.118.2
75th percentile12.319.2
90th percentile13.520.4
95th percentile14.421.1
99th percentile16.122.8
Maximum20.926.7
Model fit statistics  
c‐Statistic0.720.63
Intrahospital Correlation0.070.03

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1,Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4% while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital that was one standard deviation above the national average was 1.40 times higher than the odds of all‐cause readmission if treated at a hospital that was one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse and 333 (7.0%) had less than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. Highest mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, East and the West South Central, East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSMR, risk‐standardized mortality rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative differences in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction, and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more so than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients only, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally our mortality and readmission models were validated against those based on medical record data and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that this is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with 1 day length of stay from the mortality analysis to reduce the risk of including patients in the measure who did not actually have pneumonia, we did not exclude them from the readmission analysis because very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions. 
However, if the experience in acute myocardial infarction can serve as a guide, then it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically distributed than mortality rates, however it is possible that this may be related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.2830

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematicia Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002 the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence) can provide insights about quality and can help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7-9

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission rates for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.
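The cohort definition above can be expressed as a simple eligibility check. The sketch below (in Python, purely for illustration; the measures themselves were implemented in SAS) encodes the listed ICD-9-CM codes; the dotted-string code format and the function name are assumptions, not part of the measure specification.

```python
# Pneumonia cohort codes quoted in the text: 480.X, 481, 482.XX, 483.X,
# 485, 486, and 487.0 (principal diagnosis, ICD-9-CM).
PNEUMONIA_PREFIXES = ("480.", "482.", "483.")
PNEUMONIA_EXACT = {"481", "485", "486", "487.0"}

def is_pneumonia_principal_dx(icd9: str) -> bool:
    """Return True if a principal diagnosis code meets the cohort definition."""
    code = icd9.strip()
    return code in PNEUMONIA_EXACT or code.startswith(PNEUMONIA_PREFIXES)

# 482.41 falls under 482.XX and is included; 487.1 (influenza with other
# respiratory manifestations) is outside the listed codes and is excluded.
print(is_pneumonia_principal_dx("482.41"))
print(is_pneumonia_principal_dx("487.1"))
```

Age (65 or older) and the nonfederal acute care hospital requirement would be applied as separate filters on the enrollment and provider files.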

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.
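The final step of the indirect standardization described above reduces to a single ratio. The sketch below illustrates it with made-up numbers (the hospital and event counts are hypothetical, not study data); in the actual measure, "predicted" events use the hospital-specific random intercept and "expected" events use the average intercept over all hospitals.

```python
def risk_standardized_rate(predicted_events: float,
                           expected_events: float,
                           national_rate: float) -> float:
    """Indirect standardization: (predicted / expected) x national unadjusted rate."""
    return (predicted_events / expected_events) * national_rate

# A hypothetical hospital predicted to have 24 deaths where 20 would be
# expected given its case mix, against an 11.6% national unadjusted rate:
rsmr = risk_standardized_rate(24.0, 20.0, 0.116)
print(round(rsmr, 4))  # 0.1392, i.e., 13.9%, worse than the national rate
```

A ratio above 1 (predicted exceeds expected) scales the hospital's standardized rate above the national rate, and a ratio below 1 scales it below.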

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically, we identified the hospital referral region for each hospital in our analysis using the 306 regions defined by the Dartmouth Atlas of Health Care project. Hospital referral regions represent regional markets for tertiary care and are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes; they provide a more detailed look at variation in outcomes than state‐level results.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18
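The inverse-variance weighting used for the regional summaries can be sketched in a few lines. The numbers below are toy values, not study data; the point is that the low-volume hospital (largest variance) contributes least to the regional mean.

```python
def inverse_variance_mean(rates, variances):
    """Weight each hospital's rate by 1/variance, as in the regional summaries."""
    weights = [1.0 / v for v in variances]
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

# Three hypothetical hospitals in one referral region; the third has a small
# caseload, hence a large variance and little influence on the regional mean.
rates = [0.110, 0.125, 0.180]
variances = [0.0001, 0.0002, 0.0020]
print(round(inverse_variance_mean(rates, variances), 4))  # 0.1171
```

The Bayesian shrinkage applied to individual hospital estimates operates on the same principle: the less precise a hospital's observed rate, the more its estimate is pulled toward the population mean.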

To determine whether a hospital's performance is significantly different than the national rate, we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period are excluded from this categorization on Hospital Compare.
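The categorization rule can be stated compactly. In the sketch below, the interval bounds are taken as inputs; in the measure itself they come from the hierarchical model's interval estimate, and the function name and labels are illustrative.

```python
def categorize(lower: float, upper: float, national_rate: float, n_cases: int) -> str:
    """Hospital Compare categorization: compare the 95% interval with the national crude rate."""
    if n_cases < 25:
        return "not categorized (fewer than 25 cases)"
    if upper < national_rate:
        return "better than the US national rate"
    if lower > national_rate:
        return "worse than the US national rate"
    return "no different than the US national rate"

# A 400-case hospital whose interval (8.5%-10.5%) lies entirely below the
# 11.6% national crude mortality rate:
print(categorize(0.085, 0.105, 0.116, 400))
```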

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0% while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital that was one standard deviation above the national average was 1.68 times higher than that of a patient treated at a hospital that was one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Table 1. Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates

|                                            | Mortality   | Readmission |
|--------------------------------------------|-------------|-------------|
| Patients (n)                               | 1,118,583   | 1,161,817   |
| Hospitals (n)                              | 4,788       | 4,813       |
| Patient age, years, median (Q1, Q3)        | 81 (74, 86) | 80 (74, 86) |
| Nonwhite, %                                | 11.1        | 11.1        |
| Hospital case volume, median (Q1, Q3)      | 168 (77, 323) | 174 (79, 334) |
| Risk‐standardized hospital rate, mean (SD) | 11.2 (1.2)  | 18.3 (0.9)  |
| Minimum                                    | 6.7         | 13.6        |
| 1st percentile                             | 7.5         | 14.9        |
| 5th percentile                             | 8.5         | 15.8        |
| 10th percentile                            | 9.0         | 16.4        |
| 25th percentile                            | 10.0        | 17.2        |
| Median                                     | 11.1        | 18.2        |
| 75th percentile                            | 12.3        | 19.2        |
| 90th percentile                            | 13.5        | 20.4        |
| 95th percentile                            | 14.4        | 21.1        |
| 99th percentile                            | 16.1        | 22.8        |
| Maximum                                    | 20.9        | 26.7        |
| Model fit statistics                       |             |             |
| c‐Statistic                                | 0.72        | 0.63        |
| Intrahospital correlation                  | 0.07        | 0.03        |

Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1,Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4% while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital that was one standard deviation above the national average was 1.40 times higher than the odds of all‐cause readmission if treated at a hospital that was one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse, and 333 (7.0%) had fewer than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
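The coefficients of variation quoted above can be reproduced directly from the means and standard deviations in Table 1:

```python
def coefficient_of_variation(sd: float, mean: float) -> float:
    """CV expressed as a percentage of the mean: 100 x SD / mean."""
    return 100.0 * sd / mean

print(round(coefficient_of_variation(1.2, 11.2), 1))  # mortality: 10.7
print(round(coefficient_of_variation(0.9, 18.3), 1))  # readmission: 4.9
```

Because the CV normalizes dispersion by the mean, it shows that mortality rates vary more than twice as much as readmission rates relative to their respective averages, even though readmission rates are higher in absolute terms.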

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. The highest‐mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, East and West South Central, East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSRR, risk‐standardized readmission rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative differences in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance were 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally, our mortality and readmission models were validated against those based on medical record data, and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that this is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with a 1‐day length of stay from the mortality analysis to reduce the risk of including patients in the measure who did not actually have pneumonia, we did not exclude them from the readmission analysis because very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions.
However, if the experience in acute myocardial infarction can serve as a guide, it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates showed greater geographic variation than mortality rates; however, it is possible that this is related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they are intended to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and the region where they receive care. Efforts to improve population‐level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital-based Care in the United States, 2007. 2009. Available at: http://www.hcup-us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP). 2007. Available at: http://www.hcup-us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988-2002. JAMA. 2005;294(21):2712-2719.
  4. Heron M. Deaths: Leading Causes for 2006. Natl Vital Stat Rep. 2010;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia. Available at: http://www.qualitynet.org/dcs/ContentServer?cid=1089815967023. Accessed May 13, 2010.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182-189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  8. Medicare.gov - Hospital Compare. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default. Accessed November 6, 2009.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk-Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30-Day Mortality Model for Pneumonia. Yale University; 2006. Available at: http://www.qualitynet.org/dcs/ContentServer. Accessed June 2010.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670-1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals — the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk-adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371-404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153-157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Issue
Journal of Hospital Medicine - 5(6)
Page Number
E12-E18
Display Headline
The performance of US hospitals as reflected in risk‐standardized 30‐day mortality and readmission rates for medicare beneficiaries with pneumonia
Legacy Keywords
community‐acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient
Article Source

Copyright © 2010 Society of Hospital Medicine

Correspondence Location
Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut St., Springfield, MA 01199