Association Between Postdischarge Emergency Department Visitation and Readmission Rates

Changqin Wang, MD, MS
Center for Outcomes Research and Evaluation, Yale–New Haven Hospital; Section of Cardiovascular Medicine, Yale School of Medicine

Hospital readmissions for acute myocardial infarction (AMI), heart failure, and pneumonia have become central to quality-measurement efforts by the Centers for Medicare & Medicaid Services (CMS), which seek to improve hospital care transitions through public reporting and payment programs.1 Most current measures are limited to readmissions that require inpatient hospitalization and do not capture return visits to the emergency department (ED) that end in discharge from the ED rather than readmission. These visits may reflect important needs for acute, unscheduled care during the vulnerable posthospitalization period.2-5 While previous research has suggested that nearly 10% of patients may return to the ED following hospital discharge without readmission, the characteristics of these visits among Medicare beneficiaries and the implications for national care-coordination quality-measurement initiatives have not been explored.6,7

As the locus of acute outpatient care and the primary portal of hospital admissions and readmissions, ED visits following hospital discharge may convey meaningful information about posthospitalization care transitions.8,9 In addition, recent reviews and perspectives have highlighted the role of ED care-coordination services as interventions to reduce inpatient hospitalizations and improve care transitions,10,11 yet no empirical studies have evaluated the relationship between these unique care-coordination opportunities in the ED and care-coordination outcomes, such as hospital readmissions. As policymakers seek to develop accountability measures that capture the totality of acute, unscheduled visits following hospital discharge, describing the relationship between ED visits and readmissions will be essential to providers for benchmarking and to policymakers and payers seeking to reduce the total cost of care.12,13

Accordingly, we sought to characterize the frequency, diagnoses, and hospital-level variation in treat-and-discharge ED visitation following hospital discharge for 3 conditions for which hospital readmission is publicly reported by the CMS: AMI, heart failure, and pneumonia. We also sought to evaluate the relationship between hospital-level ED visitation following hospital discharge and publicly reported, risk-standardized readmission rates (RSRRs).

METHODS

Study Design

This study was a cross-sectional analysis of Medicare beneficiaries discharged alive following hospitalization for AMI, heart failure, and pneumonia between July 2011 and June 2012.

Selection of Participants

We used Medicare Standard Analytic Files to identify inpatient hospitalizations for each disease cohort based on principal discharge diagnoses. Each condition-specific cohort was constructed to be consistent with the CMS’s readmission measures using International Classification of Diseases, 9th Revision-Clinical Modification codes to identify AMI, heart failure, and pneumonia discharges.1 We included only patients who were enrolled in fee-for-service (FFS) Medicare parts A and B for 12 months prior to their index hospitalization to maximize the capture of diagnoses for risk adjustment. Each cohort included only patients who were discharged alive while maintaining FFS coverage for at least 30 days following hospital discharge to minimize bias in outcome ascertainment. We excluded patients who were discharged against medical advice. All contiguous admissions that were identified in a transfer chain were considered to be a single admission. Hospitals with fewer than 25 condition-specific index hospital admissions were excluded from this analysis for consistency with publicly reported measures.1
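The inclusion and exclusion steps above amount to a sequence of claims-level filters. A minimal sketch in pandas, with hypothetical column names (the actual Medicare Standard Analytic File layout differs and transfer chains require linkage logic not shown here):

```python
import pandas as pd

def build_condition_cohort(admissions: pd.DataFrame, min_hospital_volume: int = 25) -> pd.DataFrame:
    """Illustrative cohort filter; all column names are hypothetical."""
    df = admissions.copy()
    # Keep FFS beneficiaries with 12 months of Part A/B enrollment before admission
    df = df[df["ffs_enrolled_12mo_prior"]]
    # Keep patients discharged alive with 30 days of postdischarge FFS coverage
    df = df[df["discharged_alive"] & df["ffs_coverage_30d_post"]]
    # Exclude discharges against medical advice
    df = df[~df["discharged_ama"]]
    # Treat contiguous admissions in a transfer chain as a single index admission
    df = df[~df["is_transfer_continuation"]]
    # Drop hospitals below the minimum condition-specific index admission volume
    volumes = df.groupby("hospital_id")["admission_id"].transform("count")
    return df[volumes >= min_hospital_volume]
```
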

Measurements

We measured postdischarge, treat-and-release ED visits that occurred at any hospital within 30 days of hospital discharge from the index hospitalization. ED visits were identified as a hospital outpatient claim for ED services using hospital outpatient revenue center codes 0450, 0451, 0452, 0456, and 0981. This definition is consistent with those of previous studies.3,14 We defined postdischarge ED visits as treat-and-discharge visits, that is, visits that did not result in inpatient readmission or observation stays. As with the readmission measures, only 1 postdischarge ED visit was counted toward the hospital-level outcome in patients with multiple ED visits within the 30 days following hospital discharge. We defined readmission as the first unplanned, inpatient hospitalization occurring at any hospital within the 30-day period following discharge. Any subsequent inpatient admission following the 30-day period was considered a distinct index admission if it met the inclusion criteria. Consistent with CMS methods, unplanned, inpatient readmissions are from any source and are not limited to patients who were first evaluated in the ED.
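The outcome definition above (an ED claim in the 30-day window, excluding visits that led to admission or observation, counting only the first qualifying visit) can be sketched as follows. The claim record keys are hypothetical, but the revenue center codes are those listed in the text:

```python
from datetime import date, timedelta

# Hospital outpatient revenue center codes indicating ED services (per the text)
ED_REVENUE_CODES = {"0450", "0451", "0452", "0456", "0981"}

def first_postdischarge_ed_visit(claims, discharge_date, window_days=30):
    """Return the date of the first treat-and-discharge ED visit within the
    postdischarge window, or None. `claims` is a list of dicts with hypothetical
    keys: 'date', 'revenue_codes', and 'led_to_admission' (inpatient readmission
    or observation stay)."""
    window_end = discharge_date + timedelta(days=window_days)
    qualifying = [
        c["date"] for c in claims
        if discharge_date < c["date"] <= window_end
        and ED_REVENUE_CODES & set(c["revenue_codes"])   # claim includes an ED revenue code
        and not c["led_to_admission"]                    # treat-and-discharge only
    ]
    # Only the first qualifying visit counts toward the hospital-level outcome
    return min(qualifying, default=None)
```
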

 

 

Outcomes

We describe hospital-level, postdischarge ED visitation as the risk-standardized postdischarge ED visit rate. The general construct of this measure is consistent with those of prior studies that define postdischarge ED visitation as the proportion of index admissions followed by a treat-and-discharge ED visit without hospital readmission2,3; however, this outcome also incorporates a risk-standardization model with covariates that are identical to the risk-standardization approach that is used for readmission measurement.

We describe hospital-level readmission by calculating RSRRs consistent with CMS readmission measures, which are endorsed by the National Quality Forum and used for public reporting.15-17 Detailed technical documentation, including the SAS code used to replicate hospital-level measures of readmission, is publicly available through the CMS QualityNet portal.18

We calculated risk-standardized postdischarge ED visit rates and RSRRs as the ratio of the predicted number of postdischarge ED visits or readmissions for a hospital given its observed case mix to the expected number of postdischarge ED visits or readmissions based on the nation’s performance with that hospital’s case mix, respectively. This approach estimates a distinct risk-standardized postdischarge ED visit rate and RSRR for each hospital using hierarchical generalized linear models (HGLMs) with a logit link and a first-level adjustment for age, sex, 29 clinical covariates for AMI, 35 clinical covariates for heart failure, and 38 clinical covariates for pneumonia. Each clinical covariate is identified based on inpatient and outpatient claims during the 12 months prior to the index hospitalization. The second level of the HGLM includes a random hospital-level intercept. This approach to measuring hospital readmissions accounts for the correlated nature of observed readmission rates within a hospital and reflects the assumption that after adjustment for patient characteristics and sampling variability, the remaining variation in postdischarge ED visit rates or readmission rates reflects hospital quality.
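Given a fitted HGLM, the predicted-over-expected ratio for a hospital compares outcome probabilities computed with that hospital's random intercept against probabilities computed with the average (population) intercept, over the same patients. A minimal numeric sketch, assuming the intercepts and covariate coefficients have already been estimated (CMS multiplies this ratio by the national observed rate to express it as a rate):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predicted_over_expected(X, beta, hospital_intercept, mu):
    """Ratio of predicted to expected events for one hospital.
    X: covariate matrix for this hospital's index admissions (n_patients x n_covariates);
    beta: fitted first-level coefficients; hospital_intercept: this hospital's
    estimated random intercept; mu: the population-average intercept."""
    linear = X @ beta
    predicted = sigmoid(hospital_intercept + linear).sum()  # with this hospital's effect
    expected = sigmoid(mu + linear).sum()                   # with the average hospital effect
    return predicted / expected
```

A ratio above 1 indicates more events than expected for the hospital's case mix; a ratio of exactly 1 is returned when the hospital's intercept equals the population average.
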

Analysis

To characterize treat-and-discharge postdischarge ED visits, we first described the clinical conditions that were evaluated during the first postdischarge ED visit. Based on the principal discharge diagnosis, ED visits were grouped into clinically meaningful categories using the Agency for Healthcare Research and Quality Clinical Classifications Software (CCS).19 We also report hospital-level variation in risk-standardized postdischarge ED visit rates for AMI, heart failure, and pneumonia.

Next, we examined the relationship between hospital characteristics and risk-standardized postdischarge ED visit rates. We linked hospital characteristics from the American Hospital Association (AHA) Annual Survey to the study dataset, including the following: safety-net status, teaching status, and urban or rural status. Consistent with prior work, hospital safety-net status was defined as a hospital Medicaid caseload greater than 1 standard deviation above the mean Medicaid caseload in the hospital’s state. Approximately 94% of the hospitals in the 3 condition cohorts had complete data in the 2011 AHA Annual Survey and were included in this analysis.
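The safety-net definition above (Medicaid caseload more than 1 standard deviation above the state mean) is directly computable. A sketch with hypothetical column names:

```python
import pandas as pd

def flag_safety_net(hospitals: pd.DataFrame) -> pd.Series:
    """True where a hospital's Medicaid caseload exceeds its state's mean
    caseload by more than one standard deviation. Columns ('state',
    'medicaid_caseload') are hypothetical. Single-hospital states yield a
    NaN standard deviation and are never flagged here."""
    by_state = hospitals.groupby("state")["medicaid_caseload"]
    threshold = by_state.transform("mean") + by_state.transform("std")
    return hospitals["medicaid_caseload"] > threshold
```
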

We evaluated the relationship between postdischarge ED visit rates and hospital readmission rates in 2 ways. First, we calculated Spearman rank correlation coefficients between hospital-level, risk-standardized postdischarge ED visit rates and RSRRs. Second, we calculated hospital-level variation in RSRRs based on the strata of risk-standardized postdischarge ED visit rates. Given the normal distribution of postdischarge ED visit rates, we grouped hospitals by quartile of postdischarge ED visit rates and 1 group for hospitals with no postdischarge ED visits.
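The two comparisons above can be sketched as follows: a Spearman rank correlation across hospitals, and mean RSRRs within ED-visit-rate strata (quartiles plus a separate group for hospitals with no postdischarge ED visits). Column names are hypothetical, and this simplified version omits the volume weighting described below:

```python
import pandas as pd

def compare_ed_rates_and_rsrr(hospitals: pd.DataFrame):
    """hospitals: one row per hospital with hypothetical columns 'ed_rate'
    (risk-standardized postdischarge ED visit rate) and 'rsrr'. Returns the
    Spearman correlation and mean RSRR by ED-visit-rate stratum."""
    rho = hospitals["ed_rate"].corr(hospitals["rsrr"], method="spearman")
    # Quartiles among hospitals with any postdischarge ED visits...
    nonzero = hospitals[hospitals["ed_rate"] > 0].copy()
    nonzero["stratum"] = pd.qcut(nonzero["ed_rate"], 4,
                                 labels=["Q1", "Q2", "Q3", "Q4"]).astype(str)
    # ...plus one stratum for hospitals with no postdischarge ED visits
    zero = hospitals[hospitals["ed_rate"] == 0].assign(stratum="None")
    strata = pd.concat([zero, nonzero]).groupby("stratum")["rsrr"].mean()
    return rho, strata
```
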

Based on preliminary analyses indicating a relationship between hospital size, measured by condition-specific index hospitalization volume, and postdischarge treat-and-discharge ED visit rates, all descriptive statistics and correlations reported are weighted by the volume of condition-specific index hospitalizations. The study was approved by the Yale University Human Research Protection Program. All analyses were conducted using SAS 9.1 (SAS Institute Inc, Cary, NC). The analytic plan and results reported in this work are in compliance with the Strengthening the Reporting of Observational Studies in Epidemiology checklist.20

RESULTS

During the 1-year study period, we included a total of 157,035 patients who were hospitalized at 1656 hospitals for AMI, 391,209 at 3044 hospitals for heart failure, and 342,376 at 3484 hospitals for pneumonia. Details of study cohort creation are available in supplementary Table 1. After hospitalization for AMI, 14,714 patients (8.4%) experienced a postdischarge ED visit and 27,214 (17.3%) an inpatient readmission within 30 days of discharge; the corresponding figures were 31,621 (7.6%) and 88,106 (22.5%) after hospitalization for heart failure and 26,681 (7.4%) and 59,352 (17.3%) after hospitalization for pneumonia.

Postdischarge ED visits were for a wide variety of conditions, with the top 10 CCS categories comprising 44% of postdischarge ED visits following AMI hospitalizations, 44% following heart failure hospitalizations, and 41% following pneumonia hospitalizations (supplementary Table 2). The first postdischarge ED visit was rarely for the same condition as the index hospitalization in the AMI cohort (224 visits; 1.5%) and the pneumonia cohort (1401 visits; 5.3%). Among patients who were originally admitted for heart failure, 10.6% of the first postdischarge ED visits were also for congestive heart failure. However, the first postdischarge ED visit was commonly for associated conditions, such as coronary artery disease in the case of AMI or chronic obstructive pulmonary disease in the case of pneumonia, although these related conditions did not account for the majority of postdischarge ED visitation.

We found wide hospital-level variation in postdischarge ED visit rates for each condition: AMI (median: 8.3%; 5th and 95th percentile: 2.8%-14.3%), heart failure (median: 7.3%; 5th and 95th percentile: 3.0%-13.3%), and pneumonia (median: 7.1%; 5th and 95th percentile: 2.4%-13.2%; supplementary Table 3). The variation persisted after accounting for hospital case mix, as evidenced in the supplementary Figure, which describes hospital variation in risk-standardized postdischarge ED visit rates. This variation was statistically significant (P < .001), as demonstrated by the isolated relationship between the random effect and the outcome (AMI: random effect estimate 0.0849 [95% confidence interval (CI), 0.0832 to 0.0866]; heart failure: random effect estimate 0.0796 [95% CI, 0.0784 to 0.0809]; pneumonia: random effect estimate 0.0753 [95% CI, 0.0741 to 0.0764]).

Across all 3 conditions, hospitals located in rural areas had significantly higher risk-standardized postdischarge ED visit rates than hospitals located in urban areas (10.1% vs 8.6% for AMI, 8.4% vs 7.5% for heart failure, and 8.0% vs 7.4% for pneumonia). In comparison to teaching hospitals, nonteaching hospitals had significantly higher risk-standardized postdischarge ED visit rates following hospital discharge for pneumonia (7.6% vs 7.1%). Safety-net hospitals also had higher risk-standardized postdischarge ED visitation rates following discharge for heart failure (8.4% vs 7.7%) and pneumonia (7.7% vs 7.3%). Risk-standardized postdischarge ED visit rates were higher in publicly owned hospitals than in nonprofit or privately owned hospitals for heart failure (8.0% vs 7.5% in nonprofit hospitals or 7.5% in private hospitals) and pneumonia (7.7% vs 7.4% in nonprofit hospitals and 7.3% in private hospitals; Table).



Among hospitals with RSRRs that were publicly reported by CMS, we found a moderate inverse correlation between risk-standardized postdischarge ED visit rates and hospital RSRRs for each condition: AMI (r = −0.23; 95% CI, −0.29 to −0.19), heart failure (r = −0.29; 95% CI, −0.34 to −0.27), and pneumonia (r = −0.18; 95% CI, −0.22 to −0.15; Figure).

 

 

DISCUSSION

Across a national cohort of Medicare beneficiaries, we found frequent treat-and-discharge ED utilization following hospital discharge for AMI, heart failure, and pneumonia, suggesting that publicly reported readmission measures are capturing only a portion of postdischarge acute-care use. Our findings confirm prior work describing a 30-day postdischarge ED visit rate of 8% to 9% among Medicare beneficiaries for all hospitalizations in several states.3,6 While many of the first postdischarge ED visits were for conditions related to the index hospitalization, the majority represented acute, unscheduled visits for different diagnoses. These findings are consistent with prior work studying inpatient readmissions and observation readmissions that find similar heterogeneity in the clinical reasons for hospital return.21,22

We also described substantial hospital-level variation in risk-standardized postdischarge ED visit rates. Prior work by Vashi et al.3 demonstrated substantial variation in observed postdischarge ED visit rates and inpatient readmissions following hospital discharge between clinical conditions in a population-level study. Our work extends this by demonstrating hospital-level variation for 3 conditions of high volume and substantial policy importance after accounting for differences in hospital case mix. Interestingly, we also found rates of postdischarge treat-and-discharge ED visitation similar to those in recent work by Sabbatini et al.23 analyzing an all-payer, adult population with any clinical condition. Taken together, these studies demonstrate the substantial volume of postdischarge acute-care utilization in the ED that is not captured by existing readmission measures.

We found several hospital characteristics of importance in describing variation in postdischarge ED visitation rates. Notably, hospitals located in rural areas and safety-net hospitals demonstrated higher postdischarge ED visitation rates. This may reflect a higher use of the ED as an acute, unscheduled care access point in rural communities without access to alternative acute diagnostic and treatment services.24 Similarly, safety-net hospitals may be more likely to provide unscheduled care for patients with poor access to primary care in the ED setting. Yet, consistent with prior work, our results also indicate that these differences do not result in different readmission rates.25 Regarding hospital teaching status, unlike prior work suggesting that teaching hospitals care for more safety-net Medicare beneficiaries,26 our work found opposite patterns of postdischarge ED visitation between hospital teaching and safety-net status following pneumonia hospitalization. This may reflect differences in the organization of acute care as patients with limited access to unscheduled primary and specialty care in safety-net communities utilize the ED, whereas patients in teaching-hospital communities may be able to access hospital-based clinics for care.

Contrary to the expectations of many clinicians and policymakers, we found an inverse relationship between postdischarge ED visit rates and readmission rates. While the cross-sectional design of our study cannot provide a causal explanation, these findings merit policy attention and future exploration of several hypotheses. One possible explanation is that hospitals with high postdischarge ED visit rates provide care in communities in which acute, unscheduled care is consolidated in the ED, permitting the ED to serve a gatekeeper function for scarce inpatient resources. This hypothesis may also be supported by recent interventions demonstrating that the use of ED care coordination and geriatric ED services at higher-volume EDs can reduce hospitalizations. In addition, hospitals with greater ED capacity may offer easier ED access and may be able to see patients earlier in their postdischarge disease course, or more frequently for ED follow-up, thereby increasing ED visits while avoiding rehospitalization. Another possible explanation is that hospitals vary in their propensity to admit patients from the ED. Because our definition of postdischarge ED visitation excluded ED visits that resulted in hospitalization, hospitals with a lower propensity to admit from the ED may appear to have higher ED visit rates. This explanation may be further supported by our finding that many postdischarge ED visits were for conditions associated with discretionary hospitalization from the ED.27 A third explanation is that poor access to outpatient care outside the hospital setting results in higher postdischarge ED visit rates without increasing the acuity of these revisits or increasing readmission rates28; however, given the validated, risk-standardized approach to readmission measurement, this is unlikely. It is also unlikely given recent work by Sabbatini et al.23 demonstrating substantial acuity among patients who return to the ED following hospital discharge. Future work should evaluate the relationship between the availability of ED care-coordination services and readmission rates, as well as the specific ED, hospital, and community care-coordination activities undertaken following hospital discharge.

This work should be interpreted within the confines of its design. First, it is possible that some of the variation detected in postdischarge ED visit rates is mediated by hospital-level variation in postdischarge observation visits that are not captured in this outcome. However, in previous work, we have demonstrated that almost one-third of hospitals have no postdischarge observation stays and that most postdischarge observation stays are for more than 24 hours, which is unlikely to reflect the intensity of care of postdischarge ED visits.27 Second, our analyses were limited to Medicare FFS beneficiaries, which may limit the generalizability of this work to other patient populations. However, this dataset did include a national cohort of Medicare beneficiaries that is identical to those included in publicly reported CMS readmission measures; therefore, these results have substantial policy relevance. Third, this work was limited to 3 conditions of high illness severity of policy focus, and future work applying similar analyses to less severe conditions may find different degrees of hospital-level variation in postdischarge outcomes that are amenable to quality improvement. Finally, we assessed the rate of treat-and-discharge ED visits only after hospital discharge; this understates the frequency of ED visits since repeat ED visits and ED visits resulting in rehospitalization are not included. However, our definition was designed to mirror the definition used to assess hospital readmissions for policy purposes and is a conservative approach.

In summary, ED visits following hospital discharge are common, as Medicare beneficiaries have 1 treat-and-discharge ED visit for every 2 readmissions within 30 days of hospital discharge. Postdischarge ED visits occur for a wide variety of conditions, with wide risk-standardized, hospital-level variation. Hospitals with the highest risk-standardized postdischarge ED visitation rates demonstrated lower RSRRs, suggesting that policymakers and researchers should further examine the role of the hospital-based ED in providing access to acute care and supporting care transitions for the vulnerable Medicare population.

 

 

Disclosure

Dr. Venkatesh received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, and grant support from the Emergency Medicine Foundation’s Health Policy Research Scholar Award during the conduct of the study; and Dr. Wang, Mr. Wang, Ms. Altaf, Dr. Bernheim, and Dr. Horwitz received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, during the conduct of the study.

References

1. Dorsey KB GJ, Desai N, Lindenauer P, et al. 2015 Condition-Specific Measures Updates and Specifications Report Hospital-Level 30-Day Risk-Standardized Readmission Measures: AMI-Version 8.0, HF-Version 8.0, Pneumonia-Version 8.0, COPD-Version 4.0, and Stroke-Version 4.0. 2015. https://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228890435217&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DRdmn_AMIHFPNCOPDSTK_Msr_UpdtRpt.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on July 8, 2015.
2. Rising KL, White LF, Fernandez WG, Boutwell AE. Emergency department visits after hospital discharge: a missing part of the equation. Ann Emerg Med. 2013;62(2):145-150. PubMed
3. Vashi AA, Fox JP, Carr BG, et al. Use of hospital-based acute care among patients recently discharged from the hospital. JAMA. 2013;309(4):364-371. PubMed
4. Kocher KE, Nallamothu BK, Birkmeyer JD, Dimick JB. Emergency department visits after surgery are common for Medicare patients, suggesting opportunities to improve care. Health Aff (Millwood). 2013;32(9):1600-1607. PubMed
5. Krumholz HM. Post-hospital syndrome–an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100-102. PubMed
6. Baier RR, Gardner RL, Coleman EA, Jencks SF, Mor V, Gravenstein S. Shifting the dialogue from hospital readmissions to unplanned care. Am J Manag Care. 2013;19(6):450-453. PubMed
7. Schuur JD, Venkatesh AK. The growing role of emergency departments in hospital admissions. N Engl J Med. 2012;367(5):391-393. PubMed
8. Kocher KE, Dimick JB, Nallamothu BK. Changes in the source of unscheduled hospitalizations in the United States. Med Care. 2013;51(8):689-698. PubMed
9. Morganti KG, Bauhoff S, Blanchard JC, Abir M, Iyer N. The evolving role of emergency departments in the United States. Santa Monica, CA: Rand Corporation; 2013. PubMed
10. Katz EB, Carrier ER, Umscheid CA, Pines JM. Comparative effectiveness of care coordination interventions in the emergency department: a systematic review. Ann Emerg Med. 2012;60(1):12.e1-23.e1. PubMed
11. Jaquis WP, Kaplan JA, Carpenter C, et al. Transitions of Care Task Force Report. 2012. http://www.acep.org/workarea/DownloadAsset.aspx?id=91206. Accessed on January 2, 2016. 
12. Horwitz LI, Wang C, Altaf FK, et al. Excess Days in Acute Care after Hospitalization for Heart Failure (Version 1.0) Final Measure Methodology Report. 2015. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed on January 2, 2016.
13. Horwitz LI, Wang C, Altaf FK, et al. Excess Days in Acute Care after Hospitalization for Acute Myocardial Infarction (Version 1.0) Final Measure Methodology Report. 2015. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed on January 2, 2016.
14. Hennessy S, Leonard CE, Freeman CP, et al. Validation of diagnostic codes for outpatient-originating sudden cardiac death and ventricular arrhythmia in Medicaid and Medicare claims data. Pharmacoepidemiol Drug Saf. 2010;19(6):555-562. PubMed
15. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Acute Myocardial Infarction Readmission Measure Methodology. 2008. http://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228873653724&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DAMI_ReadmMeasMethod.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on February 22, 2016.
16. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Heart Failure Readmission Measure Methodology. 2008. http://69.28.93.62/wp-content/uploads/2017/01/2007-Baseline-info-on-Readmissions-krumholz.pdf. Accessed on February 22, 2016.
17. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Pneumonia Readmission Measure Methodology. 2008. http://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228873654295&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DPneumo_ReadmMeasMethod.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed on February 22, 2016.
18. QualityNet. Claims-based measures: readmission measures. 2016. http://www.qualitynet.org/dcs/ContentServer?cid=1219069855273&pagename=QnetPublic%2FPage%2FQnetTier3. Accessed on December 14, 2017.
19. Agency for Healthcare Research and Quality. Clinical classifications software (CCS) for ICD-9-CM. Healthcare Cost and Utilization Project 2013; https://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed December 14, 2017.
20. Von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Prev Med. 2007;45(4):247-251. PubMed
21. Dharmarajan K, Hsieh AF, Lin Z, et al. Diagnoses and timing of 30-day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355-363. PubMed
22. Venkatesh AK, Wang C, Ross JS, et al. Hospital Use of Observation Stays: Cross-Sectional Study of the Impact on Readmission Rates. Med Care. 2016;54(12):1070-1077. PubMed
23. Sabbatini AK, Kocher KE, Basu A, Hsia RY. In-hospital outcomes and costs among patients hospitalized during a return visit to the emergency department. JAMA. 2016;315(7):663-671. PubMed
24. Pitts SR, Carrier ER, Rich EC, Kellermann AL. Where Americans get acute care: increasingly, it’s not at their doctor’s office. Health Aff (Millwood). 2010;29(9):1620-1629. PubMed
25. Ross JS, Bernheim SM, Lin Z, et al. Based on key measures, care quality for Medicare enrollees at safety-net and non-safety-net hospitals was almost equal. Health Aff (Millwood). 2012;31(8):1739-1748. PubMed
26. Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681. PubMed
27. Venkatesh A, Wang C, Suter LG, et al. Hospital Use of Observation Stays: Cross-Sectional Study of the Impact on Readmission Rates. In: Academy Health Annual Research Meeting. San Diego, CA; 2014. PubMed
28. Pittsenbarger ZE, Thurm CW, Neuman MI, et al. Hospital-level factors associated with pediatric emergency department return visits. J Hosp Med. 2017;12(7):536-543. PubMed

Journal of Hospital Medicine. 2018;13(9):589-594. Published online first March 15, 2018.

Hospital readmissions for acute myocardial infarction (AMI), heart failure, and pneumonia have become central to quality-measurement efforts by the Centers for Medicare & Medicaid Services (CMS), which seek to improve hospital care transitions through public reporting and payment programs.1 Most current measures are limited to readmissions that require inpatient hospitalization and do not capture return visits to the emergency department (ED) that do not result in readmission but rather ED discharge. These visits may reflect important needs for acute, unscheduled care during the vulnerable posthospitalization period.2-5 While previous research has suggested that nearly 10% of patients may return to the ED following hospital discharge without readmission, the characteristics of these visits among Medicare beneficiaries and the implications for national care-coordination quality-measurement initiatives have not been explored.6,7

As the locus of acute outpatient care and the primary portal of hospital admissions and readmissions, ED visits following hospital discharge may convey meaningful information about posthospitalization care transitions.8,9 In addition, recent reviews and perspectives have highlighted the role of ED care-coordination services as interventions to reduce inpatient hospitalizations and improve care transitions,10,11 yet no empirical studies have evaluated the relationship between these unique care-coordination opportunities in the ED and care-coordination outcomes, such as hospital readmissions. As policymakers seek to develop accountability measures that capture the totality of acute, unscheduled visits following hospital discharge, describing the relationship between ED visits and readmissions will be essential to providers for benchmarking and to policymakers and payers seeking to reduce the total cost of care.12,13

Accordingly, we sought to characterize the frequency, diagnoses, and hospital-level variation in treat-and-discharge ED visitation following hospital discharge for 3 conditions for which hospital readmission is publicly reported by the CMS: AMI, heart failure, and pneumonia. We also sought to evaluate the relationship between hospital-level ED visitation following hospital discharge and publicly reported, risk-standardized readmission rates (RSRRs).

METHODS

Study Design

This study was a cross-sectional analysis of Medicare beneficiaries discharged alive following hospitalization for AMI, heart failure, and pneumonia between July 2011 and June 2012.

Selection of Participants

We used Medicare Standard Analytic Files to identify inpatient hospitalizations for each disease cohort based on principal discharge diagnoses. Each condition-specific cohort was constructed to be consistent with the CMS’s readmission measures using International Classification of Diseases, 9th Revision-Clinical Modification codes to identify AMI, heart failure, and pneumonia discharges.1 We included only patients who were enrolled in fee-for-service (FFS) Medicare parts A and B for 12 months prior to their index hospitalization to maximize the capture of diagnoses for risk adjustment. Each cohort included only patients who were discharged alive while maintaining FFS coverage for at least 30 days following hospital discharge to minimize bias in outcome ascertainment. We excluded patients who were discharged against medical advice. All contiguous admissions that were identified in a transfer chain were considered to be a single admission. Hospitals with fewer than 25 condition-specific index hospital admissions were excluded from this analysis for consistency with publicly reported measures.1

Measurements

We measured postdischarge, treat-and release ED visits that occurred at any hospital within 30 days of hospital discharge from the index hospitalization. ED visits were identified as a hospital outpatient claim for ED services using hospital outpatient revenue center codes 0450, 0451, 0452, 0456, and 0981. This definition is consistent with those of previous studies.3,14 We defined postdischarge ED visits as treat-and-discharge visits or visits that did not result in inpatient readmission or observation stays. Similar to readmission measures, only 1 postdischarge ED visit was counted toward the hospital-level outcome in patients with multiple ED visits within the 30 days following hospital discharge. We defined readmission as the first unplanned, inpatient hospitalization occurring at any hospital within the 30-day period following discharge. Any subsequent inpatient admission following the 30-day period was considered a distinct index admission if it met the inclusion criteria. Consistent with CMS methods, unplanned, inpatient readmissions are from any source and are not limited to patients who were first evaluated in the ED.

Outcomes

We describe hospital-level, postdischarge ED visitation as the risk-standardized postdischarge ED visit rate. The general construct of this measure is consistent with those of prior studies that define postdischarge ED visitation as the proportion of index admissions followed by a treat-and-discharge ED visit without hospital readmission2,3; however, this outcome also incorporates a risk-standardization model with covariates that are identical to the risk-standardization approach that is used for readmission measurement.

We describe hospital-level readmission by calculating RSRRs consistent with CMS readmission measures, which are endorsed by the National Quality Forum and used for public reporting.15-17 Detailed technical documentation, including the SAS code used to replicate hospital-level measures of readmission, is publicly available through the CMS QualityNet portal.18

We calculated risk-standardized postdischarge ED visit rates and RSRRs as the ratio of the predicted number of postdischarge ED visits or readmissions for a hospital, given its observed case mix, to the expected number of postdischarge ED visits or readmissions based on the nation’s performance with that hospital’s case mix. This approach estimates a distinct risk-standardized postdischarge ED visit rate and RSRR for each hospital using hierarchical generalized linear models (HGLMs) with a logit link and a first-level adjustment for age, sex, and 29 clinical covariates for AMI, 35 for heart failure, and 38 for pneumonia. Each clinical covariate is identified from inpatient and outpatient claims during the 12 months prior to the index hospitalization. The second level of the HGLM includes a random hospital-level intercept. This approach accounts for the correlated nature of observed rates within a hospital and reflects the assumption that, after adjustment for patient characteristics and sampling variability, the remaining variation in postdischarge ED visit rates or readmission rates reflects hospital quality.
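The estimator described above can be written compactly. The notation below is generic (patient i at hospital h) rather than the exact CMS specification, which is documented on QualityNet:

```latex
% Two-level hierarchical logistic model: the first level adjusts for case
% mix, the second level adds a random hospital intercept.
\operatorname{logit}\Pr(Y_{hi}=1) = \alpha_h + \beta^{\top} x_{hi},
\qquad \alpha_h \sim N(\mu, \tau^2)

% Risk-standardized rate for hospital h: predicted events (hospital-specific
% intercept) over expected events (national average intercept), both applied
% to the same observed case mix.
\mathrm{RSR}_h \;=\;
\frac{\sum_{i} \operatorname{logit}^{-1}\!\big(\hat{\alpha}_h + \hat{\beta}^{\top} x_{hi}\big)}
     {\sum_{i} \operatorname{logit}^{-1}\!\big(\hat{\mu} + \hat{\beta}^{\top} x_{hi}\big)}
```

In the CMS measures, this predicted-to-expected ratio is multiplied by the national unadjusted event rate to express it as a rate; the same construction is applied here to both readmissions and postdischarge ED visits.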

Analysis

To characterize treat-and-discharge postdischarge ED visits, we first described the clinical conditions evaluated during the first postdischarge ED visit. Based on the principal discharge diagnosis, ED visits were grouped into clinically meaningful categories using the Agency for Healthcare Research and Quality Clinical Classifications Software (CCS).19 We also report hospital-level variation in risk-standardized postdischarge ED visit rates for AMI, heart failure, and pneumonia.
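The CCS grouping step amounts to a diagnosis-code lookup. The mapping excerpt below is a hypothetical fragment for illustration; the real single-level crosswalk ships with the AHRQ CCS software:

```python
# Hypothetical excerpt of the AHRQ single-level CCS crosswalk, mapping
# ICD-9-CM principal discharge diagnoses to clinically meaningful categories.
# Entries are illustrative only; the full crosswalk covers all ICD-9-CM codes.
CCS_SINGLE_LEVEL = {
    "4280": "Congestive heart failure; nonhypertensive",
    "486": "Pneumonia",
    "41401": "Coronary atherosclerosis",
}

def ccs_category(icd9_code):
    """Map an ICD-9-CM code (with or without a decimal point) to its CCS
    category, defaulting to a residual bin for unmapped codes."""
    return CCS_SINGLE_LEVEL.get(icd9_code.replace(".", ""),
                                "Residual codes; unclassified")
```

Tabulating `ccs_category` over each cohort's first postdischarge ED visits yields the condition distributions reported in supplementary Table 2.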

Next, we examined the relationship between hospital characteristics and risk-standardized postdischarge ED visit rates. We linked hospital characteristics from the American Hospital Association (AHA) Annual Survey to the study dataset, including safety-net status, teaching status, and urban or rural status. Consistent with prior work, hospital safety-net status was defined as a hospital Medicaid caseload greater than 1 standard deviation above the mean Medicaid caseload in the hospital’s state. Approximately 94% of the hospitals in the 3 condition cohorts had complete data in the 2011 AHA Annual Survey and were included in this analysis.

We evaluated the relationship between postdischarge ED visit rates and hospital readmission rates in 2 ways. First, we calculated Spearman rank correlation coefficients between hospital-level, risk-standardized postdischarge ED visit rates and RSRRs. Second, we examined hospital-level variation in RSRRs across strata of risk-standardized postdischarge ED visit rates. Given the approximately normal distribution of postdischarge ED visit rates, we grouped hospitals into quartiles of postdischarge ED visit rates, with an additional group for hospitals with no postdischarge ED visits.
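The two analytic steps above can be sketched as follows. The rank correlation here omits tie correction for brevity (the study used standard Spearman coefficients), and the stratification mirrors the quartile grouping described in the text:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: the Pearson correlation of the rank
    vectors. (No tie correction; adequate for continuous rates.)"""
    rx = np.argsort(np.argsort(x))
    ry = np.argsort(np.argsort(y))
    return np.corrcoef(rx, ry)[0, 1]

def ed_visit_strata(rates):
    """Assign hospitals to strata: 0 for hospitals with no postdischarge ED
    visits, then quartiles (1-4) of the remaining risk-standardized rates."""
    rates = np.asarray(rates, dtype=float)
    nonzero = rates[rates > 0]
    edges = np.quantile(nonzero, [0.25, 0.50, 0.75])  # quartile cut points
    return np.where(rates == 0, 0, 1 + np.searchsorted(edges, rates))
```

RSRR distributions are then summarized within each stratum to show how readmission rates vary across levels of postdischarge ED visitation.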

Based on preliminary analyses indicating a relationship between hospital size, measured by condition-specific index hospitalization volume, and postdischarge treat-and-discharge ED visit rates, all descriptive statistics and correlations reported are weighted by the volume of condition-specific index hospitalizations. The study was approved by the Yale University Human Research Protection Program. All analyses were conducted using SAS 9.1 (SAS Institute Inc, Cary, NC). The analytic plan and results reported in this work are in compliance with the Strengthening the Reporting of Observational Studies in Epidemiology checklist.20
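The volume weighting described above reduces to a weighted average over hospitals; the rates and volumes below are illustrative values, not study data:

```python
import numpy as np

# Hospital-level postdischarge ED visit rates and condition-specific index
# hospitalization volumes (illustrative values only).
rates = np.array([0.06, 0.08, 0.12])
volumes = np.array([400, 150, 50])

# Descriptive statistics are weighted by index-hospitalization volume, so
# larger hospitals contribute proportionally more to the summary.
weighted_mean_rate = np.average(rates, weights=volumes)
```

Without weighting, the small high-rate hospital would pull the mean upward; weighting by volume keeps the summary representative of the patient population.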

RESULTS

During the 1-year study period, we included a total of 157,035 patients hospitalized at 1656 hospitals for AMI, 391,209 at 3044 hospitals for heart failure, and 342,376 at 3484 hospitals for pneumonia. Details of study cohort creation are available in supplementary Table 1. After hospitalization for AMI, 14,714 patients (8.4%) experienced a postdischarge ED visit and 27,214 (17.3%) an inpatient readmission within 30 days of discharge; the corresponding figures were 31,621 (7.6%) and 88,106 (22.5%) after hospitalization for heart failure, and 26,681 (7.4%) and 59,352 (17.3%) after hospitalization for pneumonia.

Postdischarge ED visits were for a wide variety of conditions, with the top 10 CCS categories comprising 44% of postdischarge ED visits following AMI hospitalizations, 44% following heart failure hospitalizations, and 41% following pneumonia hospitalizations (supplementary Table 2). The first postdischarge ED visit was rarely for the same condition as the index hospitalization in the AMI cohort (224 visits; 1.5%) and the pneumonia cohort (1401 visits; 5.3%). Among patients originally admitted for heart failure, 10.6% of first postdischarge ED visits were also for congestive heart failure. However, the first postdischarge ED visit was commonly for associated conditions, such as coronary artery disease in the case of AMI or chronic obstructive pulmonary disease in the case of pneumonia, although these related conditions did not account for the majority of postdischarge ED visitation.

We found wide hospital-level variation in postdischarge ED visit rates for each condition: AMI (median: 8.3%; 5th and 95th percentile: 2.8%-14.3%), heart failure (median: 7.3%; 5th and 95th percentile: 3.0%-13.3%), and pneumonia (median: 7.1%; 5th and 95th percentile: 2.4%-13.2%; supplementary Table 3). The variation persisted after accounting for hospital case mix, as evidenced in the supplementary Figure, which describes hospital variation in risk-standardized postdischarge ED visit rates. This variation was statistically significant (P < .001), as demonstrated by the isolated relationship between the random effect and the outcome (AMI: random effect estimate 0.0849 [95% confidence interval (CI), 0.0832 to 0.0866]; heart failure: random effect estimate 0.0796 [95% CI, 0.0784 to 0.0809]; pneumonia: random effect estimate 0.0753 [95% CI, 0.0741 to 0.0764]).

Across all 3 conditions, hospitals located in rural areas had significantly higher risk-standardized postdischarge ED visit rates than hospitals located in urban areas (10.1% vs 8.6% for AMI, 8.4% vs 7.5% for heart failure, and 8.0% vs 7.4% for pneumonia). In comparison to teaching hospitals, nonteaching hospitals had significantly higher risk-standardized postdischarge ED visit rates following hospital discharge for pneumonia (7.6% vs 7.1%). Safety-net hospitals also had higher risk-standardized postdischarge ED visitation rates following discharge for heart failure (8.4% vs 7.7%) and pneumonia (7.7% vs 7.3%). Risk-standardized postdischarge ED visit rates were higher in publicly owned hospitals than in nonprofit or privately owned hospitals for heart failure (8.0% vs 7.5% in nonprofit hospitals or 7.5% in private hospitals) and pneumonia (7.7% vs 7.4% in nonprofit hospitals and 7.3% in private hospitals; Table).

Among hospitals with RSRRs that were publicly reported by CMS, we found a moderate inverse correlation between risk-standardized postdischarge ED visit rates and hospital RSRRs for each condition: AMI (r = −0.23; 95% CI, −0.29 to −0.19), heart failure (r = −0.29; 95% CI, −0.34 to −0.27), and pneumonia (r = −0.18; 95% CI, −0.22 to −0.15; Figure).

DISCUSSION

Across a national cohort of Medicare beneficiaries, we found frequent treat-and-discharge ED utilization following hospital discharge for AMI, heart failure, and pneumonia, suggesting that publicly reported readmission measures capture only a portion of postdischarge acute-care use. Our findings confirm prior work describing a 30-day postdischarge ED visit rate of 8% to 9% among Medicare beneficiaries for all hospitalizations in several states.3,6 While many of the first postdischarge ED visits were for conditions related to the index hospitalization, the majority represented acute, unscheduled visits for different diagnoses. These findings are consistent with prior work on inpatient readmissions and observation readmissions, which found similar heterogeneity in the clinical reasons for hospital return.21,22

We also described substantial hospital-level variation in risk-standardized postdischarge ED visit rates. Prior work by Vashi et al.3 demonstrated substantial variation in observed postdischarge ED visit rates and inpatient readmissions across clinical conditions in a population-level study. Our work extends this by demonstrating hospital-level variation for 3 high-volume conditions of substantial policy importance after accounting for differences in hospital case mix. Interestingly, we also found rates of postdischarge treat-and-discharge ED visitation similar to those in recent work by Sabbatini et al.,23 who analyzed an all-payer adult population with any clinical condition. Taken together, these studies show the substantial volume of postdischarge acute-care utilization in the ED that is not captured by existing readmission measures.

We found several hospital characteristics of importance in describing variation in postdischarge ED visitation rates. Notably, hospitals located in rural areas and safety-net hospitals demonstrated higher postdischarge ED visitation rates. This may reflect greater use of the ED as an access point for acute, unscheduled care in rural communities without alternative acute diagnostic and treatment services.24 Similarly, safety-net hospitals may be more likely to provide unscheduled care in the ED setting for patients with poor access to primary care. Yet, consistent with prior work, our results also indicate that these differences do not result in different readmission rates.25 Regarding hospital teaching status, unlike prior work suggesting that teaching hospitals care for more safety-net Medicare beneficiaries,26 we found opposite patterns of postdischarge ED visitation by hospital teaching and safety-net status following pneumonia hospitalization. This may reflect differences in the organization of acute care: patients with limited access to unscheduled primary and specialty care in safety-net communities utilize the ED, whereas patients in teaching-hospital communities may be able to access hospital-based clinics for care.

Contrary to the expectations of many clinicians and policymakers, we found an inverse relationship between postdischarge ED visit rates and readmission rates. While the cross-sectional design of our study cannot provide a causal explanation, these findings merit policy attention and future exploration of several hypotheses. One possible explanation is that hospitals with high postdischarge ED visit rates provide care in communities in which acute, unscheduled care is consolidated in the ED setting, thereby permitting the ED to serve a gatekeeper function for scarce inpatient resources. This hypothesis may also be supported by recent interventions demonstrating that ED care coordination and geriatric ED services at higher-volume EDs can reduce hospitalizations. In addition, hospitals with greater ED capacity may offer easier ED access and may be able to see patients earlier in their postdischarge disease course, or more frequently for ED follow-up, thereby increasing ED visits while avoiding rehospitalization. Another possible explanation is that hospitals with higher postdischarge ED visit rates may have a lower propensity to admit patients: because our definition of postdischarge ED visitation excluded ED visits that resulted in hospitalization, hospitals with a lower propensity to admit from the ED may appear to have higher ED visit rates. This explanation may be further supported by our finding that many postdischarge ED visits were for conditions associated with discretionary hospitalization from the ED.27 A third explanation may be that poor access to outpatient care outside the hospital setting results in higher postdischarge ED visit rates without increasing the acuity of these revisits or increasing readmission rates28; however, given the validated, risk-standardized approach to readmission measurement, this is unlikely. It is also made less likely by recent work by Sabbatini et al.23 demonstrating substantial acuity among patients who return to the ED following hospital discharge. Future work should evaluate the relationship between the availability of ED care-coordination services and the specific ED, hospital, and community care-coordination activities undertaken following hospital discharge to reduce readmission rates.

This work should be interpreted within the confines of its design. First, it is possible that some of the variation detected in postdischarge ED visit rates is mediated by hospital-level variation in postdischarge observation visits that are not captured in this outcome. However, in previous work, we demonstrated that almost one-third of hospitals have no postdischarge observation stays and that most postdischarge observation stays last more than 24 hours, which is unlikely to reflect the intensity of care of postdischarge ED visits.27 Second, our analyses were limited to Medicare FFS beneficiaries, which may limit the generalizability of this work to other patient populations. However, this dataset included a national cohort of Medicare beneficiaries identical to those in the publicly reported CMS readmission measures; therefore, these results have substantial policy relevance. Third, this work was limited to 3 conditions of high illness severity and policy focus; future work applying similar analyses to less severe conditions may find different degrees of hospital-level variation in postdischarge outcomes amenable to quality improvement. Finally, we assessed the rate of treat-and-discharge ED visits only after hospital discharge; this understates the overall frequency of ED visits, because repeat ED visits and ED visits resulting in rehospitalization were not counted. However, our definition was designed to mirror the definition used to assess hospital readmissions for policy purposes and is therefore a conservative approach.

In summary, ED visits following hospital discharge are common, as Medicare beneficiaries have 1 treat-and-discharge ED visit for every 2 readmissions within 30 days of hospital discharge. Postdischarge ED visits occur for a wide variety of conditions, with wide risk-standardized, hospital-level variation. Hospitals with the highest risk-standardized postdischarge ED visitation rates demonstrated lower RSRRs, suggesting that policymakers and researchers should further examine the role of the hospital-based ED in providing access to acute care and supporting care transitions for the vulnerable Medicare population.

Disclosure

Dr. Venkatesh received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, and grant support from the Emergency Medicine Foundation’s Health Policy Research Scholar Award during the conduct of the study; and Dr. Wang, Mr. Wang, Ms. Altaf, Dr. Bernheim, and Dr. Horwitz received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, during the conduct of the study.

Hospital readmissions for acute myocardial infarction (AMI), heart failure, and pneumonia have become central to quality-measurement efforts by the Centers for Medicare & Medicaid Services (CMS), which seek to improve hospital care transitions through public reporting and payment programs.1 Most current measures are limited to readmissions that require inpatient hospitalization and do not capture return visits to the emergency department (ED) that do not result in readmission but rather ED discharge. These visits may reflect important needs for acute, unscheduled care during the vulnerable posthospitalization period.2-5 While previous research has suggested that nearly 10% of patients may return to the ED following hospital discharge without readmission, the characteristics of these visits among Medicare beneficiaries and the implications for national care-coordination quality-measurement initiatives have not been explored.6,7

As the locus of acute outpatient care and the primary portal of hospital admissions and readmissions, ED visits following hospital discharge may convey meaningful information about posthospitalization care transitions.8,9 In addition, recent reviews and perspectives have highlighted the role of ED care-coordination services as interventions to reduce inpatient hospitalizations and improve care transitions,10,11 yet no empirical studies have evaluated the relationship between these unique care-coordination opportunities in the ED and care-coordination outcomes, such as hospital readmissions. As policymakers seek to develop accountability measures that capture the totality of acute, unscheduled visits following hospital discharge, describing the relationship between ED visits and readmissions will be essential to providers for benchmarking and to policymakers and payers seeking to reduce the total cost of care.12,13

Accordingly, we sought to characterize the frequency, diagnoses, and hospital-level variation in treat-and-discharge ED visitation following hospital discharge for 3 conditions for which hospital readmission is publicly reported by the CMS: AMI, heart failure, and pneumonia. We also sought to evaluate the relationship between hospital-level ED visitation following hospital discharge and publicly reported, risk-standardized readmission rates (RSRRs).

METHODS

Study Design

This study was a cross-sectional analysis of Medicare beneficiaries discharged alive following hospitalization for AMI, heart failure, and pneumonia between July 2011 and June 2012.

Selection of Participants

We used Medicare Standard Analytic Files to identify inpatient hospitalizations for each disease cohort based on principal discharge diagnoses. Each condition-specific cohort was constructed to be consistent with the CMS’s readmission measures using International Classification of Diseases, 9th Revision-Clinical Modification codes to identify AMI, heart failure, and pneumonia discharges.1 We included only patients who were enrolled in fee-for-service (FFS) Medicare parts A and B for 12 months prior to their index hospitalization to maximize the capture of diagnoses for risk adjustment. Each cohort included only patients who were discharged alive while maintaining FFS coverage for at least 30 days following hospital discharge to minimize bias in outcome ascertainment. We excluded patients who were discharged against medical advice. All contiguous admissions that were identified in a transfer chain were considered to be a single admission. Hospitals with fewer than 25 condition-specific index hospital admissions were excluded from this analysis for consistency with publicly reported measures.1

Measurements

We measured postdischarge, treat-and release ED visits that occurred at any hospital within 30 days of hospital discharge from the index hospitalization. ED visits were identified as a hospital outpatient claim for ED services using hospital outpatient revenue center codes 0450, 0451, 0452, 0456, and 0981. This definition is consistent with those of previous studies.3,14 We defined postdischarge ED visits as treat-and-discharge visits or visits that did not result in inpatient readmission or observation stays. Similar to readmission measures, only 1 postdischarge ED visit was counted toward the hospital-level outcome in patients with multiple ED visits within the 30 days following hospital discharge. We defined readmission as the first unplanned, inpatient hospitalization occurring at any hospital within the 30-day period following discharge. Any subsequent inpatient admission following the 30-day period was considered a distinct index admission if it met the inclusion criteria. Consistent with CMS methods, unplanned, inpatient readmissions are from any source and are not limited to patients who were first evaluated in the ED.

 

 

Outcomes

We describe hospital-level, postdischarge ED visitation as the risk-standardized postdischarge ED visit rate. The general construct of this measure is consistent with those of prior studies that define postdischarge ED visitation as the proportion of index admissions followed by a treat-and-discharge ED visit without hospital readmission2,3; however, this outcome also incorporates a risk-standardization model with covariates that are identical to the risk-standardization approach that is used for readmission measurement.

We describe hospital-level readmission by calculating RSRRs consistent with CMS readmission measures, which are endorsed by the National Quality Forum and used for public reporting.15-17 Detailed technical documentation, including the SAS code used to replicate hospital-level measures of readmission, are available publicly through the CMS QualityNet portal.18

We calculated risk-standardized postdischarge ED visit rates and RSRRs as the ratio of the predicted number of postdischarge ED visits or readmissions for a hospital given its observed case mix to the expected number of postdischarge ED visits or readmissions based on the nation’s performance with that hospital’s case mix, respectively. This approach estimates a distinct risk-standardized postdischarge ED visit rate and RSRR for each hospital using hierarchical generalized linear models (HGLMs) and using a logit link with a first-level adjustment for age, sex, 29 clinical covariates for AMI, 35 clinical covariates for heart failure, and 38 clinical covariates for pneumonia. Each clinical covariate is identified based on inpatient and outpatient claims during the 12 months prior to the index hospitalization. The second level of the HGLM includes a random hospital-level intercept. This approach to measuring hospital readmissions accounts for the correlated nature of observed readmission rates within a hospital and reflects the assumption that after adjustment for patient characteristics and sampling variability, the remaining variation in postdischarge ED visit rates or readmission rates reflects hospital quality.

Analysis

In order to characterize treat-and-discharge postdischarge ED visits, we first described the clinical conditions that were evaluated during the first postdischarge ED visit. Based on the principal discharge diagnosis, ED visits were grouped into clinically meaningful categories using the Agency for Healthcare Research and Quality Clinical Classifications Software (CCS).19 We also report hospital-level variation in risk-standardized postdischarge ED visit rates for AMI, heart failure, and pneumonia.

Next, we examined the relationship between hospital characteristics and risk-standardized postdischarge ED visit rates. We linked hospital characteristics from the American Hospital Association (AHA) Annual Survey to the study dataset, including the following: safety-net status, teaching status, and urban or rural status. Consistent with prior work, hospital safety-net status was defined as a hospital Medicaid caseload greater than 1 standard deviation above the mean Medicaid caseload in the hospital’s state. Approximately 94% of the hospitals included in the 3 condition cohorts in the dataset had complete data in the 2011 AHA Annual Survey to be included in this analysis.

We evaluated the relationship between postdischarge ED visit rates and hospital readmission rates in 2 ways. First, we calculated Spearman rank correlation coefficients between hospital-level, risk-standardized postdischarge ED visit rates and RSRRs. Second, we calculated hospital-level variation in RSRRs based on the strata of risk-standardized postdischarge ED visit rates. Given the normal distribution of postdischarge ED visit rates, we grouped hospitals by quartile of postdischarge ED visit rates and 1 group for hospitals with no postdischarge ED visits.

Based on preliminary analyses indicating a relationship between hospital size, measured by condition-specific index hospitalization volume, and postdischarge treat-and-discharge ED visit rates, all descriptive statistics and correlations reported are weighted by the volume of condition-specific index hospitalizations. The study was approved by the Yale University Human Research Protection Program. All analyses were conducted using SAS 9.1 (SAS Institute Inc, Cary, NC). The analytic plan and results reported in this work are in compliance with the Strengthening the Reporting of Observational Studies in Epidemiology checklist.20

RESULTS

During the 1-year study period, we included a total of 157,035 patients who were hospitalized at 1656 hospitals for AMI, 391,209 at 3044 hospitals for heart failure, and 342,376 at 3484 hospitals for pneumonia. Details of study cohort creation are available in supplementary Table 1. After hospitalization for AMI, 14,714 patients experienced a postdischarge ED visit (8.4%) and 27,214 an inpatient readmissions (17.3%) within 30 days of discharge; 31,621 (7.6%) and 88,106 (22.5%) patients after hospitalization for heart failure and 26,681 (7.4%) and 59,352 (17.3%) patients after hospitalization for pneumonia experienced a postdischarge ED visit and an inpatient readmission within 30 days of discharge, respectively.

Postdischarge ED visits were for a wide variety of conditions, with the top 10 CCS categories comprising 44% of postdischarge ED visits following AMI hospitalizations, 44% of following heart failure hospitalizations, and 41% following pneumonia hospitalizations (supplementary Table 2). The first postdischarge ED visit was rarely for the same condition as the index hospitalization in the AMI cohort (224 visits; 1.5%) as well as the pneumonia cohort (1401 visits; 5.3%). Among patients who were originally admitted for heart failure, 10.6% of the first postdischarge ED visits were also for congestive heart failure. However, the first postdischarge ED visit was commonly for associated conditions, such as coronary artery disease in the case of AMI or chronic obstructive pulmonary disease in the case of pneumonia, albeit these related conditions did not comprise the majority of postdischarge ED visitation.

We found wide hospital-level variation in postdischarge ED visit rates for each condition: AMI (median: 8.3%; 5th and 95th percentile: 2.8%-14.3%), heart failure (median: 7.3%; 5th and 95th percentile: 3.0%-13.3%), and pneumonia (median: 7.1%; 5th and 95th percentile: 2.4%-13.2%; supplementary Table 3). The variation persisted after accounting for hospital case mix, as evidenced in the supplementary Figure, which describes hospital variation in risk-standardized postdischarge ED visit rates. This variation was statistically significant (P < .001), as demonstrated by the isolated relationship between the random effect and the outcome (AMI: random effect estimate 0.0849 [95% confidence interval (CI), 0.0832 to 0.0866]; heart failure: random effect estimate 0.0796 [95% CI, 0.0784 to 0.0809]; pneumonia: random effect estimate 0.0753 [95% CI, 0.0741 to 0.0764]).

Across all 3 conditions, hospitals located in rural areas had significantly higher risk-standardized postdischarge ED visit rates than hospitals located in urban areas (10.1% vs 8.6% for AMI, 8.4% vs 7.5% for heart failure, and 8.0% vs 7.4% for pneumonia). In comparison to teaching hospitals, nonteaching hospitals had significantly higher risk-standardized postdischarge ED visit rates following hospital discharge for pneumonia (7.6% vs 7.1%). Safety-net hospitals also had higher risk-standardized postdischarge ED visitation rates following discharge for heart failure (8.4% vs 7.7%) and pneumonia (7.7% vs 7.3%). Risk-standardized postdischarge ED visit rates were higher in publicly owned hospitals than in nonprofit or privately owned hospitals for heart failure (8.0% vs 7.5% in nonprofit hospitals or 7.5% in private hospitals) and pneumonia (7.7% vs 7.4% in nonprofit hospitals and 7.3% in private hospitals; Table).



Among hospitals with RSRRs that were publicly reported by CMS, we found a moderate inverse correlation between risk-standardized postdischarge ED visit rates and hospital RSRRs for each condition: AMI (r = −0.23; 95% CI, −0.29 to −0.19), heart failure (r = −0.29; 95% CI, −0.34 to −0.27), and pneumonia (r = −0.18; 95% CI, −0.22 to −0.15; Figure).

 

 

DISCUSSION

Across a national cohort of Medicare beneficiaries, we found frequent treat-and-discharge ED utilization following hospital discharge for AMI, heart failure, and pneumonia, suggesting that publicly reported readmission measures are capturing only a portion of postdischarge acute-care use. Our findings confirm prior work describing a 30-day postdischarge ED visit rate of 8% to 9% among Medicare beneficiaries for all hospitalizations in several states.3,6While many of the first postdischarge ED visits were for conditions related to the index hospitalization, the majority represent acute, unscheduled visits for different diagnoses. These findings are consistent with prior work studying inpatient readmissions and observation readmissions that find similar heterogeneity in the clinical reasons for hospital return.21,22

We also described substantial hospital-level variation in risk-standardized ED postdischarge rates. Prior work by Vashi et al.3 demonstrated substantial variation in observed postdischarge ED visit rates and inpatient readmissions following hospital discharge between clinical conditions in a population-level study. Our work extends upon this by demonstrating hospital-level variation for 3 conditions of high volume and substantial policy importance after accounting for differences in hospital case mix. Interestingly, our work also found similar rates of postdischarge ED treat-and-discharge visitation as recent work by Sabbatini et al.23 analyzing an all-payer, adult population with any clinical condition. Taken together, these studies show the substantial volume of postdischarge acute-care utilization in the ED not captured by existing readmission measures.

We found several hospital characteristics of importance in describing variation in postdischarge ED visitation rates. Notably, hospitals located in rural areas and safety-net hospitals demonstrated higher postdischarge ED visitation rates. This may reflect a higher use of the ED as an acute, unscheduled care access point in rural communities without access to alternative acute diagnostic and treatment services.24 Similarly, safety-net hospitals may be more likely to provide unscheduled care for patients with poor access to primary care in the ED setting. Yet, consistent with prior work, our results also indicate that these differences do not result in different readmission rates.25 Regarding hospital teaching status, unlike prior work suggesting that teaching hospitals care for more safety-net Medicare beneficiaries,26 our work found opposite patterns of postdischarge ED visitation between hospital teaching and safety-net status following pneumonia hospitalization. This may reflect differences in the organization of acute care as patients with limited access to unscheduled primary and specialty care in safety-net communities utilize the ED, whereas patients in teaching-hospital communities may be able to access hospital-based clinics for care.

Contrary to the expectations of many clinicians and policymakers, we found an inverse relationship between postdischarge ED visit rates and readmission rates. While the cross-sectional design of our study cannot provide a causal explanation, these findings merit policy attention and future exploration of several hypotheses. One possible explanation is that hospitals with high postdischarge ED visit rates provide care in communities in which acute, unscheduled care is consolidated to the ED setting, permitting the ED to serve a gatekeeper function for scarce inpatient resources. This hypothesis may also be supported by recent interventions demonstrating that the use of ED care coordination and geriatric ED services at higher-volume EDs can reduce hospitalizations. In addition, hospitals with greater ED capacity may offer easier ED access and may be able to see patients earlier in their disease courses post discharge, or more frequently in the ED for follow-up, thereby increasing ED visits while avoiding rehospitalization. Another possible explanation is that hospitals with lower postdischarge ED visit rates may also have a lower propensity to admit patients. Because our definition of postdischarge ED visitation did not include ED visits that result in hospitalization, hospitals with a lower propensity to admit from the ED may appear to have higher ED visit rates. This explanation is further supported by our finding that many postdischarge ED visits are for conditions associated with discretionary hospitalization from the ED.27 A third explanation may be that poor access to outpatient care outside the hospital setting results in higher postdischarge ED visit rates without increasing the acuity of these revisits or increasing readmission rates28; however, this is unlikely given the validated, risk-standardized approach to readmission measurement and given recent work by Sabbatini et al.23 demonstrating substantial acuity among patients who return to the ED following hospital discharge. Future work should evaluate the relationship between the availability of ED care-coordination services and the specific ED, hospital, and community care-coordination activities undertaken in the ED following hospital discharge to reduce readmission rates.

This work should be interpreted within the confines of its design. First, it is possible that some of the variation detected in postdischarge ED visit rates is mediated by hospital-level variation in postdischarge observation visits, which are not captured in this outcome. However, in previous work, we demonstrated that almost one-third of hospitals have no postdischarge observation stays and that most postdischarge observation stays last more than 24 hours, which is unlikely to reflect the intensity of care of postdischarge ED visits.27 Second, our analyses were limited to Medicare FFS beneficiaries, which may limit the generalizability of this work to other patient populations. However, the dataset included a national cohort of Medicare beneficiaries identical to those included in publicly reported CMS readmission measures; therefore, these results have substantial policy relevance. Third, this work was limited to 3 conditions of high illness severity and policy focus, and future work applying similar analyses to less severe conditions may find different degrees of hospital-level variation in postdischarge outcomes amenable to quality improvement. Finally, we assessed the rate of treat-and-discharge ED visits only after hospital discharge; this understates the frequency of ED visits because repeat ED visits and ED visits resulting in rehospitalization are not included. However, our definition was designed to mirror the definition used to assess hospital readmissions for policy purposes and is therefore a conservative approach.

In summary, ED visits following hospital discharge are common: Medicare beneficiaries have 1 treat-and-discharge ED visit for every 2 readmissions within 30 days of hospital discharge. Postdischarge ED visits occur for a wide variety of conditions and show wide hospital-level variation in risk-standardized rates. Hospitals with the highest risk-standardized postdischarge ED visitation rates demonstrated lower RSRRs, suggesting that policymakers and researchers should further examine the role of the hospital-based ED in providing access to acute care and supporting care transitions for the vulnerable Medicare population.

Disclosure

Dr. Venkatesh received contract support from the CMS, an agency of the U.S. Department of Health & Human Services, and grant support from the Emergency Medicine Foundation's Health Policy Research Scholar Award during the conduct of the study. Dr. Wang, Mr. Wang, Ms. Altaf, Dr. Bernheim, and Dr. Horwitz received contract support from the CMS during the conduct of the study.

References

1. Dorsey KB GJ, Desai N, Lindenauer P, et al. 2015 Condition-Specific Measures Updates and Specifications Report Hospital-Level 30-Day Risk-Standardized Readmission Measures: AMI-Version 8.0, HF-Version 8.0, Pneumonia-Version 8.0, COPD-Version 4.0, and Stroke-Version 4.0. 2015. https://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228890435217&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DRdmn_AMIHFPNCOPDSTK_Msr_UpdtRpt.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed July 8, 2015.
2. Rising KL, White LF, Fernandez WG, Boutwell AE. Emergency department visits after hospital discharge: a missing part of the equation. Ann Emerg Med. 2013;62(2):145-150.
3. Vashi AA, Fox JP, Carr BG, et al. Use of hospital-based acute care among patients recently discharged from the hospital. JAMA. 2013;309(4):364-371.
4. Kocher KE, Nallamothu BK, Birkmeyer JD, Dimick JB. Emergency department visits after surgery are common for Medicare patients, suggesting opportunities to improve care. Health Aff (Millwood). 2013;32(9):1600-1607.
5. Krumholz HM. Post-hospital syndrome–an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100-102.
6. Baier RR, Gardner RL, Coleman EA, Jencks SF, Mor V, Gravenstein S. Shifting the dialogue from hospital readmissions to unplanned care. Am J Manag Care. 2013;19(6):450-453.
7. Schuur JD, Venkatesh AK. The growing role of emergency departments in hospital admissions. N Engl J Med. 2012;367(5):391-393.
8. Kocher KE, Dimick JB, Nallamothu BK. Changes in the source of unscheduled hospitalizations in the United States. Med Care. 2013;51(8):689-698.
9. Morganti KG, Bauhoff S, Blanchard JC, Abir M, Iyer N. The evolving role of emergency departments in the United States. Santa Monica, CA: Rand Corporation; 2013.
10. Katz EB, Carrier ER, Umscheid CA, Pines JM. Comparative effectiveness of care coordination interventions in the emergency department: a systematic review. Ann Emerg Med. 2012;60(1):12.e1-23.e1.
11. Jaquis WP, Kaplan JA, Carpenter C, et al. Transitions of Care Task Force Report. 2012. http://www.acep.org/workarea/DownloadAsset.aspx?id=91206. Accessed January 2, 2016.
12. Horwitz LI, Wang C, Altaf FK, et al. Excess Days in Acute Care after Hospitalization for Heart Failure (Version 1.0) Final Measure Methodology Report. 2015. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed January 2, 2016.
13. Horwitz LI, Wang C, Altaf FK, et al. Excess Days in Acute Care after Hospitalization for Acute Myocardial Infarction (Version 1.0) Final Measure Methodology Report. 2015. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/HospitalQualityInits/Measure-Methodology.html. Accessed January 2, 2016.
14. Hennessy S, Leonard CE, Freeman CP, et al. Validation of diagnostic codes for outpatient-originating sudden cardiac death and ventricular arrhythmia in Medicaid and Medicare claims data. Pharmacoepidemiol Drug Saf. 2010;19(6):555-562.
15. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Acute Myocardial Infarction Readmission Measure Methodology. 2008. http://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228873653724&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DAMI_ReadmMeasMethod.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed February 22, 2016.
16. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Heart Failure Readmission Measure Methodology. 2008. http://69.28.93.62/wp-content/uploads/2017/01/2007-Baseline-info-on-Readmissions-krumholz.pdf. Accessed February 22, 2016.
17. Krumholz H, Normand S, Keenan P, et al. Hospital 30-Day Pneumonia Readmission Measure Methodology. 2008. http://www.qualitynet.org/dcs/BlobServer?blobkey=id&blobnocache=true&blobwhere=1228873654295&blobheader=multipart%2Foctet-stream&blobheadername1=Content-Disposition&blobheadervalue1=attachment%3Bfilename%3DPneumo_ReadmMeasMethod.pdf&blobcol=urldata&blobtable=MungoBlobs. Accessed February 22, 2016.
18. QualityNet. Claims-based measures: readmission measures. 2016. http://www.qualitynet.org/dcs/ContentServer?cid=1219069855273&pagename=QnetPublic%2FPage%2FQnetTier3. Accessed December 14, 2017.
19. Agency for Healthcare Research and Quality. Clinical classifications software (CCS) for ICD-9-CM. Healthcare Cost and Utilization Project 2013; https://www.hcup-us.ahrq.gov/toolssoftware/ccs/ccs.jsp. Accessed December 14, 2017.
20. Von Elm E, Altman DG, Egger M, et al. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Prev Med. 2007;45(4):247-251.
21. Dharmarajan K, Hsieh AF, Lin Z, et al. Diagnoses and timing of 30-day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355-363.
22. Venkatesh AK, Wang C, Ross JS, et al. Hospital use of observation stays: cross-sectional study of the impact on readmission rates. Med Care. 2016;54(12):1070-1077.
23. Sabbatini AK, Kocher KE, Basu A, Hsia RY. In-hospital outcomes and costs among patients hospitalized during a return visit to the emergency department. JAMA. 2016;315(7):663-671.
24. Pitts SR, Carrier ER, Rich EC, Kellermann AL. Where Americans get acute care: increasingly, it’s not at their doctor’s office. Health Aff (Millwood). 2010;29(9):1620-1629.
25. Ross JS, Bernheim SM, Lin Z, et al. Based on key measures, care quality for Medicare enrollees at safety-net and non-safety-net hospitals was almost equal. Health Aff (Millwood). 2012;31(8):1739-1748.
26. Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681.
27. Venkatesh A, Wang C, Suter LG, et al. Hospital use of observation stays: cross-sectional study of the impact on readmission rates. In: Academy Health Annual Research Meeting. San Diego, CA; 2014.
28. Pittsenbarger ZE, Thurm CW, Neuman MI, et al. Hospital-level factors associated with pediatric emergency department return visits. J Hosp Med. 2017;12(7):536-543.

Issue
Journal of Hospital Medicine 13(9)
Page Number
589-594. Published online first March 15, 2018

© 2018 Society of Hospital Medicine

Correspondence Location
Arjun K. Venkatesh, MD, MBA, MHS, 1 Church St., 2nd Floor, New Haven, CT 06510; Telephone: 203-764-5700; Fax: 203-764-5653; E-mail: arjun.venkatesh@yale.edu

Hospital Mortality Measure for COPD

Development, validation, and results of a risk‐standardized measure of hospital 30‐day mortality for patients with exacerbation of chronic obstructive pulmonary disease

Chronic obstructive pulmonary disease (COPD) affects as many as 24 million individuals in the United States, is responsible for more than 700,000 annual hospital admissions, and is currently the nation's third leading cause of death; it accounted for nearly $49.9 billion in medical spending in 2010.[1, 2] Reported in-hospital mortality rates for patients hospitalized for exacerbations of COPD range from 2% to 5%.[3, 4, 5, 6, 7] Information about 30-day mortality rates following hospitalization for COPD is more limited; however, international studies suggest that rates range from 3% to 9%,[8, 9] and 90-day mortality rates exceed 15%.[10]

Despite this significant clinical and economic impact, there have been no large‐scale, sustained efforts to measure the quality or outcomes of hospital care for patients with COPD in the United States. What little is known about the treatment of patients with COPD suggests widespread opportunities to increase adherence to guideline‐recommended therapies, to reduce the use of ineffective treatments and tests, and to address variation in care across institutions.[5, 11, 12]

Public reporting of hospital performance is a key strategy for improving the quality and safety of hospital care, both in the United States and internationally.[13] Since 2007, the Centers for Medicare and Medicaid Services (CMS) has reported hospital mortality rates on the Hospital Compare Web site, and COPD is 1 of the conditions highlighted in the Affordable Care Act for future consideration.[14] Such initiatives rely on validated, risk‐adjusted performance measures for comparisons across institutions and to enable outcomes to be tracked over time. We present the development, validation, and results of a model intended for public reporting of risk‐standardized mortality rates for patients hospitalized with exacerbations of COPD that has been endorsed by the National Quality Forum.[15]

METHODS

Approach to Measure Development

We developed this measure in accordance with guidelines described by the National Quality Forum,[16] CMS' Measure Management System,[17] and the American Heart Association scientific statement, Standards for Statistical Models Used for Public Reporting of Health Outcomes.[18] Throughout the process we obtained expert clinical and stakeholder input through meetings with a clinical advisory group and a national technical expert panel (see Acknowledgments). Last, we presented the proposed measure specifications and a summary of the technical expert panel discussions online and made a widely distributed call for public comments. We took the comments into consideration during the final stages of measure development (available at https://www.cms.gov/MMS/17_CallforPublicComment.asp).

Data Sources

We used claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files from 2008 to develop and validate the model, and examined model reliability using data from 2007 and 2009. The Medicare enrollment database was used to determine Medicare Fee‐for‐Service enrollment and mortality.

Study Cohort

Admissions were considered eligible for inclusion if the patient was 65 years or older, was admitted to a nonfederal acute care hospital in the United States, and had a principal diagnosis of COPD or a principal diagnosis of acute respiratory failure or respiratory arrest when paired with a secondary diagnosis of COPD with exacerbation (Table 1).

ICD-9-CM Codes Used to Define the Measure Cohort

| ICD-9-CM | Description |
| --- | --- |
| 491.21 | Obstructive chronic bronchitis; with (acute) exacerbation; acute exacerbation of COPD, decompensated COPD, decompensated COPD with exacerbation |
| 491.22 | Obstructive chronic bronchitis; with acute bronchitis |
| 491.8 | Other chronic bronchitis; chronic: tracheitis, tracheobronchitis |
| 491.9 | Unspecified chronic bronchitis |
| 492.8 | Other emphysema; emphysema (lung or pulmonary): NOS, centriacinar, centrilobular, obstructive, panacinar, panlobular, unilateral, vesicular; MacLeod's syndrome; Swyer-James syndrome; unilateral hyperlucent lung |
| 493.20 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, unspecified |
| 493.21 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with status asthmaticus |
| 493.22 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with (acute) exacerbation |
| 496 | Chronic: nonspecific lung disease, obstructive lung disease, obstructive pulmonary disease (COPD) NOS. (Note: This code is not to be used with any code from categories 491-493.) |
| 518.81a | Other diseases of lung; acute respiratory failure; respiratory failure NOS |
| 518.82a | Other diseases of lung; acute respiratory failure; other pulmonary insufficiency, acute respiratory distress |
| 518.84a | Other diseases of lung; acute respiratory failure; acute and chronic respiratory failure |
| 799.1a | Other ill-defined and unknown causes of morbidity and mortality; respiratory arrest, cardiorespiratory failure |

NOTE: Abbreviations: COPD, chronic obstructive pulmonary disease; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; NOS, not otherwise specified. a Principal diagnosis when combined with a secondary diagnosis of acute exacerbation of COPD (491.21, 491.22, 493.21, or 493.22).

If a patient was discharged and readmitted to a second hospital on the same or the next day, we combined the 2 acute care admissions into a single episode of care and assigned the mortality outcome to the first admitting hospital. We excluded admissions for patients who were enrolled in Medicare Hospice in the 12 months prior to or on the first day of the index hospitalization. An index admission was any eligible admission assessed in the measure for the outcome. We also excluded admissions for patients who were discharged against medical advice, those for whom vital status at 30 days was unknown or recorded inconsistently, and patients with unreliable data (eg, age >115 years). For patients with multiple hospitalizations during a single year, we randomly selected 1 admission per patient to avoid survival bias. Finally, to assure adequate risk adjustment we limited the analysis to patients who had continuous enrollment in Medicare Fee‐for‐Service Parts A and B for the 12 months prior to their index admission so that we could identify comorbid conditions coded during all prior encounters.
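The cohort rules above amount to a filter over candidate admissions. The following is a minimal illustrative sketch, not the CMS implementation: the field names (`age`, `hospice_prior_year`, and so on) are hypothetical, and transfer-linking and the random selection of 1 admission per patient are omitted for brevity.

```python
def is_index_admission(adm: dict) -> bool:
    """Apply the eligibility exclusions described above to one candidate admission."""
    if adm["age"] < 65 or adm["age"] > 115:   # age floor; implausible ages are unreliable data
        return False
    if adm["hospice_prior_year"]:             # hospice enrollment in prior 12 months or on day 1
        return False
    if adm["discharged_ama"]:                 # discharged against medical advice
        return False
    if adm["vital_status_30d"] is None:       # 30-day vital status unknown or inconsistent
        return False
    if adm["ffs_enrolled_months"] < 12:       # need 12 months continuous FFS Parts A and B
        return False
    return True

candidate = {"age": 72, "hospice_prior_year": False, "discharged_ama": False,
             "vital_status_30d": "alive", "ffs_enrolled_months": 12}
print(is_index_admission(candidate))  # True
```

The same structure generalizes to any claims-based cohort definition: each exclusion becomes one early-return check, which keeps the rules auditable one by one.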

Outcomes

The outcome of 30‐day mortality was defined as death from any cause within 30 days of the admission date for the index hospitalization. Mortality was assessed at 30 days to standardize the period of outcome ascertainment,[19] and because 30 days is a clinically meaningful time frame, during which differences in the quality of hospital care may be revealed.
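The outcome definition reduces to simple date arithmetic from the index admission date. A sketch with a hypothetical helper, assuming admission and death dates are available from the enrollment database:

```python
from datetime import date
from typing import Optional

def died_within_30_days(admit: date, death: Optional[date]) -> bool:
    """All-cause death within 30 days of the index admission date."""
    return death is not None and 0 <= (death - admit).days <= 30

print(died_within_30_days(date(2008, 3, 1), date(2008, 3, 25)))  # True
```

Anchoring the window to the admission date (rather than discharge) is what standardizes the period of outcome ascertainment across hospitals with different lengths of stay.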

Risk‐Adjustment Variables

We randomly selected half of all COPD admissions in 2008 that met the inclusion and exclusion criteria to create a model development sample. Candidate variables for inclusion in the risk‐standardized model were selected by a clinician team from diagnostic groups included in the Hierarchical Condition Category clinical classification system[20] and included age and comorbid conditions. Sleep apnea (International Classification of Diseases, 9th Revision, Clinical Modification [ICD‐9‐CM] condition codes 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, and 780.57) and mechanical ventilation (ICD‐9‐CM procedure codes 93.90, 96.70, 96.71, and 96.72) were also included as candidate variables.

We defined a condition as present for a given patient if it was coded in the inpatient, outpatient, or physician claims data sources in the preceding 12 months, including the index admission. Because a subset of the condition category variables can represent a complication of care, we did not consider them to be risk factors if they appeared only as secondary diagnosis codes for the index admission and not in claims submitted during the prior year.
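The lookback logic above can be sketched as follows. This is an illustrative simplification with a hypothetical data layout, not the measure's actual code: each condition category's coded occurrences are represented as (claim id, setting, is-secondary) tuples spanning the 12-month lookback plus the index admission.

```python
def is_risk_factor(code_events, index_id, can_be_complication: bool) -> bool:
    """
    A condition counts as a risk factor if it was coded anywhere in the
    12-month lookback. Categories that can represent complications of care
    are ignored when they appear only as secondary diagnoses on the index
    admission itself and never in prior-year claims.
    """
    prior = [e for e in code_events if e[0] != index_id]
    if prior:                                   # coded before the index admission
        return True
    index_events = [e for e in code_events if e[0] == index_id]
    if not index_events:                        # never coded at all
        return False
    if can_be_complication and all(sec for (_, _, sec) in index_events):
        return False                            # possible in-hospital complication
    return True
```

For example, a pressure ulcer coded only as a secondary diagnosis on the index stay would be excluded, while the same code in an outpatient claim from six months earlier would count.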

We selected final variables for inclusion in the risk‐standardized model based on clinical considerations and a modified approach to stepwise logistic regression. The final patient‐level risk‐adjustment model included 42 variables (Table 2).

Adjusted OR for Model Risk Factors and Mortality in Development Sample (Hierarchical Logistic Regression Model). The first three data columns are the development sample (150,035 admissions at 4537 hospitals); the last three are the validation sample (149,646 admissions at 4535 hospitals).

| Variable | Frequency, % | OR | 95% CI | Frequency, % | OR | 95% CI |
| --- | --- | --- | --- | --- | --- | --- |
| Demographics |  |  |  |  |  |  |
| Age 65 years (continuous) |  | 1.03 | 1.03-1.04 |  | 1.03 | 1.03-1.04 |
| Cardiovascular/respiratory |  |  |  |  |  |  |
| Sleep apnea (ICD-9-CM: 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, 780.57)a | 9.57 | 0.87 | 0.81-0.94 | 9.72 | 0.84 | 0.78-0.90 |
| History of mechanical ventilation (ICD-9-CM: 93.90, 96.70, 96.71, 96.72)a | 6.00 | 1.19 | 1.11-1.27 | 6.00 | 1.15 | 1.08-1.24 |
| Respirator dependence/respiratory failure (CC 77-78)a | 1.15 | 0.89 | 0.77-1.02 | 1.20 | 0.78 | 0.68-0.91 |
| Cardiorespiratory failure and shock (CC 79) | 26.35 | 1.60 | 1.53-1.68 | 26.34 | 1.59 | 1.52-1.66 |
| Congestive heart failure (CC 80) | 41.50 | 1.34 | 1.28-1.39 | 41.39 | 1.31 | 1.25-1.36 |
| Chronic atherosclerosis (CC 83-84)a | 50.44 | 0.87 | 0.83-0.90 | 50.12 | 0.91 | 0.87-0.94 |
| Arrhythmias (CC 92-93) | 37.15 | 1.17 | 1.12-1.22 | 37.06 | 1.15 | 1.10-1.20 |
| Vascular or circulatory disease (CC 104-106) | 38.20 | 1.09 | 1.05-1.14 | 38.09 | 1.02 | 0.98-1.06 |
| Fibrosis of lung and other chronic lung disorder (CC 109) | 16.96 | 1.08 | 1.03-1.13 | 17.08 | 1.11 | 1.06-1.17 |
| Asthma (CC 110) | 17.05 | 0.67 | 0.63-0.70 | 16.90 | 0.67 | 0.63-0.70 |
| Pneumonia (CC 111-113) | 49.46 | 1.29 | 1.24-1.35 | 49.41 | 1.27 | 1.22-1.33 |
| Pleural effusion/pneumothorax (CC 114) | 11.78 | 1.17 | 1.11-1.23 | 11.54 | 1.18 | 1.12-1.25 |
| Other lung disorders (CC 115) | 53.07 | 0.80 | 0.77-0.83 | 53.17 | 0.83 | 0.80-0.87 |
| Other comorbid conditions |  |  |  |  |  |  |
| Metastatic cancer and acute leukemia (CC 7) | 2.76 | 2.34 | 2.14-2.56 | 2.79 | 2.15 | 1.97-2.35 |
| Lung, upper digestive tract, and other severe cancers (CC 8)a | 5.98 | 1.80 | 1.68-1.92 | 6.02 | 1.98 | 1.85-2.11 |
| Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal and other cancers and tumors; other respiratory and heart neoplasms (CC 9-11) | 14.13 | 1.03 | 0.97-1.08 | 14.19 | 1.01 | 0.95-1.06 |
| Other digestive and urinary neoplasms (CC 12) | 6.91 | 0.91 | 0.84-0.98 | 7.05 | 0.85 | 0.79-0.92 |
| Diabetes and DM complications (CC 15-20, 119-120) | 38.31 | 0.91 | 0.87-0.94 | 38.29 | 0.91 | 0.87-0.94 |
| Protein-calorie malnutrition (CC 21) | 7.40 | 2.18 | 2.07-2.30 | 7.44 | 2.09 | 1.98-2.20 |
| Disorders of fluid/electrolyte/acid-base (CC 22-23) | 32.05 | 1.13 | 1.08-1.18 | 32.16 | 1.24 | 1.19-1.30 |
| Other endocrine/metabolic/nutritional disorders (CC 24) | 67.99 | 0.75 | 0.72-0.78 | 67.88 | 0.76 | 0.73-0.79 |
| Other gastrointestinal disorders (CC 36) | 56.21 | 0.81 | 0.78-0.84 | 56.18 | 0.78 | 0.75-0.81 |
| Osteoarthritis of hip or knee (CC 40) | 9.32 | 0.74 | 0.69-0.79 | 9.33 | 0.80 | 0.74-0.85 |
| Other musculoskeletal and connective tissue disorders (CC 43) | 64.14 | 0.83 | 0.80-0.86 | 64.20 | 0.83 | 0.80-0.87 |
| Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 40.80 | 1.08 | 1.04-1.12 | 40.72 | 1.08 | 1.04-1.13 |
| Dementia and senility (CC 49-50) | 17.06 | 1.09 | 1.04-1.14 | 16.97 | 1.09 | 1.04-1.15 |
| Drug/alcohol abuse, without dependence (CC 53)a | 23.51 | 0.78 | 0.75-0.82 | 23.38 | 0.76 | 0.72-0.80 |
| Other psychiatric disorders (CC 60)a | 16.49 | 1.12 | 1.07-1.18 | 16.43 | 1.12 | 1.06-1.17 |
| Quadriplegia, paraplegia, functional disability (CC 67-69, 100-102, 177-178) | 4.92 | 1.03 | 0.95-1.12 | 4.92 | 1.08 | 0.99-1.17 |
| Mononeuropathy, other neurological conditions/injuries (CC 76) | 11.35 | 0.85 | 0.80-0.91 | 11.28 | 0.88 | 0.83-0.93 |
| Hypertension and hypertensive disease (CC 90-91) | 80.40 | 0.78 | 0.75-0.82 | 80.35 | 0.79 | 0.75-0.83 |
| Stroke (CC 95-96)a | 6.77 | 1.00 | 0.93-1.08 | 6.73 | 0.98 | 0.91-1.05 |
| Retinal disorders, except detachment and vascular retinopathies (CC 121) | 10.79 | 0.87 | 0.82-0.93 | 10.69 | 0.90 | 0.85-0.96 |
| Other eye disorders (CC 124)a | 19.05 | 0.90 | 0.86-0.95 | 19.13 | 0.98 | 0.85-0.93 |
| Other ear, nose, throat, and mouth disorders (CC 127) | 35.21 | 0.83 | 0.80-0.87 | 35.02 | 0.80 | 0.77-0.83 |
| Renal failure (CC 131)a | 17.92 | 1.12 | 1.07-1.18 | 18.16 | 1.13 | 1.08-1.19 |
| Decubitus ulcer or chronic skin ulcer (CC 148-149) | 7.42 | 1.27 | 1.19-1.35 | 7.42 | 1.33 | 1.25-1.42 |
| Other dermatological disorders (CC 153) | 28.46 | 0.90 | 0.87-0.94 | 28.32 | 0.89 | 0.86-0.93 |
| Trauma (CC 154-156, 158-161) | 9.04 | 1.09 | 1.03-1.16 | 8.99 | 1.15 | 1.08-1.22 |
| Vertebral fractures (CC 157) | 5.01 | 1.33 | 1.24-1.44 | 4.97 | 1.29 | 1.20-1.39 |
| Major complications of medical care and trauma (CC 164) | 5.47 | 0.81 | 0.75-0.88 | 5.55 | 0.82 | 0.76-0.89 |

NOTE: Abbreviations: CI, confidence interval; DM, diabetes mellitus; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; OR, odds ratio; CC, condition category. a Indicates variable forced into the model.

Model Derivation

We used hierarchical logistic regression models to model the log‐odds of mortality as a function of patient‐level clinical characteristics and a random hospital‐level intercept. At the patient level, each model adjusts the log‐odds of mortality for age and the selected clinical covariates. The second level models the hospital‐specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of mortality, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.
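Concretely, the two-level structure described above can be written as follows (the notation is ours, chosen to match the description rather than the report's exact symbols):

```latex
\operatorname{logit}\Pr(Y_{ij}=1) \;=\; \alpha_j + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta},
\qquad \alpha_j \sim N(\mu, \tau^2),
```

where $Y_{ij}$ indicates death within 30 days for patient $i$ at hospital $j$, $\mathbf{x}_{ij}$ are the patient-level covariates, $\boldsymbol{\beta}$ their effects, and $\alpha_j$ is the hospital-specific intercept. If hospitals did not differ after case-mix adjustment, $\tau^2$ would be zero and all $\alpha_j$ would coincide.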

Estimation of Hospital Risk‐Standardized Mortality Rate

We calculated a risk‐standardized mortality rate, defined as the ratio of predicted to expected deaths (similar to observed‐to‐expected), multiplied by the national unadjusted mortality rate.[21] The expected number of deaths for each hospital was estimated by applying the estimated regression coefficients to the characteristics of each hospital's patients, adding the average of the hospital‐specific intercepts, transforming the data by using an inverse logit function, and summing the data from all patients in the hospital to obtain the count. The predicted number of deaths was calculated in the same way, substituting the hospital‐specific intercept for the average hospital‐specific intercept.
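The predicted/expected calculation described above can be sketched as follows. This is illustrative only: `linear_preds` stands for each patient's covariate contribution $x_i'\beta$, and `alpha_bar` for the average of the hospital-specific intercepts.

```python
import math

def inv_logit(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def rsmr(linear_preds, alpha_j, alpha_bar, national_rate):
    """
    Risk-standardized mortality rate for one hospital:
    (predicted deaths / expected deaths) * national unadjusted rate.
    Predicted uses the hospital's own intercept; expected uses the average.
    """
    predicted = sum(inv_logit(lp + alpha_j) for lp in linear_preds)
    expected = sum(inv_logit(lp + alpha_bar) for lp in linear_preds)
    return national_rate * predicted / expected

# A hospital whose intercept equals the average recovers the national rate:
print(rsmr([-2.0, -1.5, -3.0], alpha_j=0.0, alpha_bar=0.0, national_rate=8.5))  # 8.5
```

A hospital intercept above the average inflates the predicted count relative to the expected count, yielding a rate above the national rate, and vice versa.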

Model Performance, Validation, and Reliability Testing

We used the remaining admissions in 2008 as the model validation sample. We computed several summary statistics to assess patient‐level model performance in both the development and validation samples,[22] including over‐fitting indices, predictive ability, area under the receiver operating characteristic (ROC) curve, distribution of residuals, and model χ2. In addition, we assessed face validity through a survey of members of the technical expert panel. To assess reliability of the model across data years, we repeated the modeling process using qualifying COPD admissions in both 2007 and 2009. Finally, to assess generalizability, we evaluated the model's performance in an all‐payer sample of data from patients admitted to California hospitals in 2006.
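Two of the summary statistics named here, the C statistic and the over-fitting (calibration) indices, can be sketched compactly; this assumes plain NumPy arrays of outcomes and predictions and is our own illustration, not the authors' code:

```python
import numpy as np

def c_statistic(y, p):
    """Area under the ROC curve via the Mann-Whitney rank identity,
    using average ranks for tied predictions."""
    y = np.asarray(y)
    p = np.asarray(p, float)
    order = np.argsort(p, kind="mergesort")
    ranks = np.empty(len(p))
    ranks[order] = np.arange(1, len(p) + 1)
    vals, inv, counts = np.unique(p, return_inverse=True, return_counts=True)
    sums = np.zeros(len(vals))
    np.add.at(sums, inv, ranks)
    ranks = (sums / counts)[inv]  # tie-averaged ranks
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

def overfitting_indices(y, z, iters=25):
    """Refit Logit(P(Y=1|Z)) = g0 + g1*Z on a validation sample by Newton's
    method; g0 far from 0 or g1 far from 1 signals over-fitting."""
    y = np.asarray(y, float)
    z = np.asarray(z, float)
    X = np.column_stack([np.ones_like(z), z])
    g = np.zeros(2)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ g))
        w = prob * (1.0 - prob)
        g = g + np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (y - prob))
    return g  # (gamma0, gamma1)
```

In a well-calibrated validation sample the refit intercept is near 0 and the slope near 1, matching the calibration values reported in Table 3.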

Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc., Cary, NC). We estimated the hierarchical models using the GLIMMIX procedure in SAS.

The Human Investigation Committee at the Yale University School of Medicine/Yale New Haven Hospital approved an exemption (HIC#0903004927) for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation

After exclusions were applied, the development sample included 150,035 admissions in 2008 at 4537 US hospitals (Figure 1). Factors most strongly associated with the risk of mortality included metastatic cancer (odds ratio [OR] 2.34), protein‐calorie malnutrition (OR 2.18), nonmetastatic cancers of the lung and upper digestive tract (OR 1.80), cardiorespiratory failure and shock (OR 1.60), and congestive heart failure (OR 1.34) (Table 2).

Figure 1
Model development and validation samples. Abbreviations: COPD, chronic obstructive pulmonary disease; FFS, Fee‐for‐Service. Exclusion categories are not mutually exclusive.

Model Performance, Validation, and Reliability

The model had a C statistic of 0.72, indicating good discrimination, and predicted mortality in the development sample ranged from 1.52% in the lowest decile to 23.74% in the highest. The model validation sample, using the remaining cases from 2008, included 149,646 admissions from 4535 hospitals. Variable frequencies and ORs were similar in both samples (Table 2). Model performance was also similar in the validation samples, with good model discrimination and fit (Table 3). Ten of 12 technical expert panel members responded to the survey, of whom 90% at least somewhat agreed with the statement, "The COPD mortality measure provides an accurate reflection of quality." When the model was applied to patients age 18 years and older in the 2006 California Patient Discharge Data, overall discrimination was good (C statistic, 0.74), including in those age 18 to 64 years (C statistic, 0.75) and those age 65 years and older (C statistic, 0.70).

Model Performance in Development and Validation Samples
Indices: Development Sample, 2008 | Validation Sample, 2008 | 2007 | 2009
  • NOTE: Abbreviations: ROC, receiver operating characteristic; SD, standard deviation. Over‐fitting indices (γ0, γ1) provide evidence of over‐fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities p̂ = 1/(1 + exp{−Xb}), and Z = Xb (eg, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample (eg, Logit(P(Y=1|Z)) = γ0 + γ1Z). Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over‐fitting.

Number of admissions: 150,035 | 149,646 | 259,911 | 279,377
Number of hospitals: 4537 | 4535 | 4636 | 4571
Mean risk‐standardized mortality rate, % (SD): 8.62 (0.94) | 8.64 (1.07) | 8.97 (1.12) | 8.08 (1.09)
Calibration (γ0, γ1): 0.034, 0.985 | 0.009, 1.004 | 0.095, 1.022 | 0.120, 0.981
Discrimination, predictive ability (lowest decile %-highest decile %): 1.52-23.74 | 1.60-23.78 | 1.54-24.64 | 1.42-22.36
Discrimination, area under the ROC curve (C statistic): 0.720 | 0.723 | 0.728 | 0.722
Residuals lack of fit (Pearson residual, fall %):
  < -2: 0 | 0 | 0 | 0
  [-2, 0): 91.14 | 91.4 | 91.08 | 91.93
  [0, 2): 1.66 | 1.7 | 1.96 | 1.42
  ≥2: 6.93 | 6.91 | 6.96 | 6.65
Model Wald χ2 (number of covariates): 6982.11 (42) | 7051.50 (42) | 13042.35 (42) | 12542.15 (42)
P value: <0.0001 | <0.0001 | <0.0001 | <0.0001
Between‐hospital variance (standard error): 0.067 (0.008) | 0.078 (0.009) | 0.067 (0.006) | 0.072 (0.006)

Reliability testing demonstrated consistent performance over several years. The frequency and ORs of the variables included in the model showed only minor changes over time. The area under the ROC curve (C statistic) was 0.73 for the model in the 2007 sample and 0.72 for the model using 2009 data (Table 3).

Hospital Risk‐Standardized Mortality Rates

The mean unadjusted hospital 30‐day mortality rate was 8.6% and ranged from 0% to 100% (Figure 2a). Risk‐standardized mortality rates varied across hospitals (Figure 2b). The mean risk‐standardized mortality rate was 8.6% and ranged from 5.9% to 13.5%. The odds of mortality at a hospital 1 standard deviation above average were 1.20 times those at a hospital 1 standard deviation below average.

Figure 2
(a) Distribution of hospital‐level 30‐day mortality rates and (b) hospital‐level 30‐day risk‐standardized mortality rates (2008 development sample; n = 150,035 admissions from 4537 hospitals). Abbreviations: COPD, chronic obstructive pulmonary disease.

DISCUSSION

We present a hospital‐level risk‐standardized mortality measure for patients admitted with COPD, based on administrative claims data, that is intended for public reporting and has been endorsed by the National Quality Forum, a voluntary consensus standards‐setting organization. Across more than 4500 US hospitals, the mean 30‐day risk‐standardized mortality rate in 2008 was 8.6%, and we observed considerable variation across institutions despite adjustment for case mix, suggesting that improvement by lower‐performing institutions may be an achievable goal.

Although improving the delivery of evidence‐based care processes and outcomes of patients with acute myocardial infarction, heart failure, and pneumonia has been the focus of national quality improvement efforts for more than a decade, COPD has largely been overlooked.[23] Within this context, this analysis represents the first attempt to systematically measure, at the hospital level, 30‐day all‐cause mortality for patients admitted to US hospitals for exacerbation of COPD. The model we have developed and validated is intended to be used to compare the performance of hospitals while controlling for differences in the pretreatment risk of mortality of patients and accounting for the clustering of patients within hospitals, and will facilitate surveillance of hospital‐level risk‐adjusted outcomes over time.

In contrast to process‐based measures of quality, such as the percentage of patients with pneumonia who receive appropriate antibiotic therapy, performance measures based on patient outcomes provide a more comprehensive view of care and are more consistent with patients' goals.[24] Additionally, it is well established that hospital performance on individual and composite process measures explains only a small amount of the observed variation in patient outcomes between institutions.[25] In this regard, outcome measures incorporate important, but difficult‐to‐measure, aspects of care, such as diagnostic accuracy and timing, communication and teamwork, the recognition of and response to complications, and care coordination at the time of transfers between levels of care and between care settings. Nevertheless, when used for making inferences about the quality of hospital care, individual measures such as the risk‐standardized hospital mortality rate should be interpreted in the context of other performance measures, including readmission, patient experience, and costs of care.

A number of prior investigators have described the outcomes of care for patients hospitalized with exacerbations of COPD, including identifying risk factors for mortality. Patil et al. carried out an analysis of the 1996 Nationwide Inpatient Sample and described an overall in‐hospital mortality rate of 2.5% among patients with COPD, and reported that a multivariable model containing sociodemographic characteristics about the patient and comorbidities had an area under the ROC curve of 0.70.[3] In contrast, this hospital‐level measure includes patients with a principal diagnosis of respiratory failure and focuses on 30‐day rather than inpatient mortality, accounting for the nearly 3‐fold higher mortality rate we observed. In a more recent study that used clinical data from a large multistate database, Tabak et al. developed a prediction model for inpatient mortality for patients with COPD that contained only 4 factors: age, blood urea nitrogen, mental status, and pulse, and achieved an area under the ROC curve of 0.72.[4] The simplicity of such a model and its reliance on clinical measurements makes it particularly well suited for bedside application by clinicians, but less valuable for large‐scale public reporting programs that rely on administrative data. In the only other study identified that focused on the assessment of hospital mortality rates, Agabiti et al. analyzed the outcomes of 12,756 patients hospitalized for exacerbations of COPD, using similar ICD‐9‐CM diagnostic criteria as in this study, at 21 hospitals in Rome, Italy.[26] They reported an average crude 30‐day mortality rate of 3.8% among a group of 5 benchmark hospitals and an average mortality of 7.5% (range, 5.2%-17.2%) among the remaining institutions.

To put the variation we observed in mortality rates into a broader context, the relative difference in the risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure, whereas rates varied 30% for COPD, from 7.6% to 9.9%.[27] Model discrimination in COPD (C statistic, 0.72) was also similar to that reported for models used for public reporting of hospital mortality in acute myocardial infarction (C statistic, 0.71) and pneumonia (C statistic, 0.72).

This study has a number of important strengths. First, the model was developed from a large sample of recent Medicare claims, achieved good discrimination, and was validated in samples not limited to Medicare beneficiaries. Second, by including patients with a principal diagnosis of COPD, as well as those with a principal diagnosis of acute respiratory failure when accompanied by a secondary diagnosis of COPD with acute exacerbation, this model can be used to assess hospital performance across the full spectrum of disease severity. This broad set of ICD‐9‐CM codes used to define the cohort also ensures that efforts to measure hospital performance will be less influenced by differences in documentation and coding practices across hospitals relating to the diagnosis or sequencing of acute respiratory failure diagnoses. Moreover, the inclusion of patients with respiratory failure is important because these patients have the greatest risk of mortality, and are those in whom efforts to improve the quality and safety of care may have the greatest impact. Third, rather than relying solely on information documented during the index admission, we used ambulatory and inpatient claims from the full year prior to the index admission to identify comorbidities and to distinguish them from potential complications of care. Finally, we did not include factors such as hospital characteristics (eg, number of beds, teaching status) in the model. Although they might have improved overall predictive ability, the goal of the hospital mortality measure is to enable comparisons of mortality rates among hospitals while controlling for differences in patient characteristics. To the extent that factors such as size or teaching status might be independently associated with hospital outcomes, it would be inappropriate to adjust away their effects, because mortality risk should not be influenced by hospital characteristics other than through their effects on quality.

These results should be viewed in light of several limitations. First, we used ICD‐9‐CM codes derived from claims files to define the patient populations included in the measure rather than collecting clinical or physiologic information prospectively or through manual review of medical records, such as the forced expiratory volume in 1 second or whether the patient required long‐term oxygen therapy. Nevertheless, we included a broad set of potential diagnosis codes to capture the full spectrum of COPD exacerbations and to minimize differences in coding across hospitals. Second, because the risk adjustment included diagnoses coded in the year prior to the index admission, it is potentially subject to bias due to regional differences in medical care utilization that are not driven by underlying differences in patient illness.[28] Third, using administrative claims data, we observed some paradoxical associations in the model that are difficult to explain on clinical grounds, such as a protective effect of substance and alcohol abuse or prior episodes of respiratory failure. Fourth, although we excluded from the analysis patients who were enrolled in hospice prior to, or on the day of, the index admission, we did not exclude those who chose to withdraw support, transitioned to comfort measures only, or enrolled in hospice care during the hospitalization. We do not seek to penalize hospitals for being sensitive to the preferences of patients at the end of life. At the same time, it is equally important that the measure is capable of detecting the outcomes of suboptimal care that may in some instances lead a patient or their family to withdraw support or choose hospice. Finally, we did not have the opportunity to validate the model against a clinical registry of patients with COPD, because such data do not currently exist. Nevertheless, the use of claims as a surrogate for chart data for risk adjustment has been validated for several conditions, including acute myocardial infarction, heart failure, and pneumonia.[29, 30]

CONCLUSIONS

Risk‐standardized 30‐day mortality rates for Medicare beneficiaries with COPD vary across hospitals in the US. Calculating and reporting hospital outcomes using validated performance measures may catalyze quality improvement activities and lead to better outcomes. Additional research would be helpful to confirm that hospitals with lower mortality rates achieve care that meets the goals of patients and their families better than hospitals with higher mortality rates do.

Acknowledgment

The authors thank the following members of the technical expert panel: Darlene Bainbridge, RN, MS, NHA, CPHQ, CPHRM, President/CEO, Darlene D. Bainbridge & Associates, Inc.; Robert A. Balk, MD, Director of Pulmonary and Critical Care Medicine, Rush University Medical Center; Dale Bratzler, DO, MPH, President and CEO, Oklahoma Foundation for Medical Quality; Scott Cerreta, RRT, Director of Education, COPD Foundation; Gerard J. Criner, MD, Director of Temple Lung Center and Divisions of Pulmonary and Critical Care Medicine, Temple University; Guy D'Andrea, MBA, President, Discern Consulting; Jonathan Fine, MD, Director of Pulmonary Fellowship, Research and Medical Education, Norwalk Hospital; David Hopkins, MS, PhD, Senior Advisor, Pacific Business Group on Health; Fred Martin Jacobs, MD, JD, FACP, FCCP, FCLM, Executive Vice President and Director, Saint Barnabas Quality Institute; Natalie Napolitano, MPH, RRT‐NPS, Respiratory Therapist, Inova Fairfax Hospital; Russell Robbins, MD, MBA, Principal and Senior Clinical Consultant, Mercer. In addition, the authors acknowledge and thank Angela Merrill, Sandi Nelson, Marian Wrobel, and Eric Schone from Mathematica Policy Research, Inc., Sharon‐Lise T. Normand from Harvard Medical School, and Lein Han and Michael Rapp at The Centers for Medicare & Medicaid Services for their contributions to this work.

Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number: HHSM‐5002008‐0025I/HHSM‐500‐T0001, Modification No. 000007, Option Year 2 Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views expressed in this manuscript are those of the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

References
  1. FASTSTATS—chronic lower respiratory disease. Available at: http://www.cdc.gov/nchs/fastats/copd.htm. Accessed September 18, 2010.
  2. National Heart, Lung and Blood Institute. Morbidity and mortality chartbook. Available at: http://www.nhlbi.nih.gov/resources/docs/cht‐book.htm. Accessed April 27, 2010.
  3. Patil SP, Krishnan JA, Lechtzin N, Diette GB. In‐hospital mortality following acute exacerbations of chronic obstructive pulmonary disease. Arch Intern Med. 2003;163(10):1180-1186.
  4. Tabak YP, Sun X, Johannes RS, Gupta V, Shorr AF. Mortality and need for mechanical ventilation in acute exacerbations of chronic obstructive pulmonary disease: development and validation of a simple risk score. Arch Intern Med. 2009;169(17):1595-1602.
  5. Lindenauer PK, Pekow P, Gao S, Crawford AS, Gutierrez B, Benjamin EM. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144(12):894-903.
  6. Dransfield MT, Rowe SM, Johnson JE, Bailey WC, Gerald LB. Use of beta blockers and the risk of death in hospitalised patients with acute exacerbations of COPD. Thorax. 2008;63(4):301-305.
  7. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP facts and figures: statistics on hospital‐based care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed August 6, 2012.
  8. Fruchter O, Yigla M. Predictors of long‐term survival in elderly patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Respirology. 2008;13(6):851-855.
  9. Faustini A, Marino C, D'Ippoliti D, Forastiere F, Belleudi V, Perucci CA. The impact on risk‐factor analysis of different mortality outcomes in COPD patients. Eur Respir J. 2008;32(3):629-636.
  10. Roberts CM, Lowe D, Bucknall CE, Ryland I, Kelly Y, Pearson MG. Clinical audit indicators of outcome following admission to hospital with acute exacerbation of chronic obstructive pulmonary disease. Thorax. 2002;57(2):137-141.
  11. Mularski RA, Asch SM, Shrank WH, et al. The quality of obstructive lung disease care for adults in the United States as measured by adherence to recommended processes. Chest. 2006;130(6):1844-1850.
  12. Bratzler DW, Oehlert WH, McAdams LM, Leon J, Jiang H, Piatt D. Management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: physician practices in the community hospital setting. J Okla State Med Assoc. 2004;97(6):227-232.
  13. Corrigan J, Eden J, Smith B. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Washington, DC: National Academies Press; 2002.
  14. Patient Protection and Affordable Care Act [H.R. 3590], Pub. L. No. 111–148, §2702, 124 Stat. 119, 318–319 (March 23, 2010). Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/html/PLAW‐111publ148.htm. Accessed July 15, 2012.
  15. National Quality Forum. NQF Endorses Additional Pulmonary Measure. 2013. Available at: http://www.qualityforum.org/News_And_Resources/Press_Releases/2013/NQF_Endorses_Additional_Pulmonary_Measure.aspx. Accessed January 11, 2013.
  16. National Quality Forum. National voluntary consensus standards for patient outcomes: a consensus report. Washington, DC: National Quality Forum; 2011.
  17. The Measures Management System. The Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/MMS/index.html?redirect=/MMS/. Accessed August 6, 2012.
  18. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456-462.
  19. Drye EE, Normand S‐LT, Wang Y, et al. Comparison of hospital risk‐standardized mortality rates calculated by using in‐hospital and 30‐day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012;156(1 pt 1):19-26.
  20. Pope G, Ellis R, Ash A, et al. Diagnostic cost group hierarchical condition category models for Medicare risk adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc.; 2000. Available at: http://www.cms.gov/Research‐Statistics‐Data‐and‐Systems/Statistics‐Trends‐and‐Reports/Reports/downloads/pope_2000_2.pdf. Accessed November 7, 2009.
  21. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  22. Harrell FE, Shih Y‐CT. Using full probability models to compute probabilities of actual interest to decision makers. Int J Technol Assess Health Care. 2001;17(1):17-26.
  23. Heffner JE, Mularski RA, Calverley PMA. COPD performance measures: missing opportunities for improving care. Chest. 2010;137(5):1181-1189.
  24. Krumholz HM, Normand S‐LT, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75-85.
  25. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72-78.
  26. Agabiti N, Belleudi V, Davoli M, et al. Profiling hospital performance to monitor the quality of care: the case of COPD. Eur Respir J. 2010;35(5):1031-1038.
  27. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  28. Welch HG, Sharp SM, Gottlieb DJ, Skinner JS, Wennberg JE. Geographic variation in diagnosis frequency and risk of death among Medicare beneficiaries. JAMA. 2011;305(11):1113-1118.
  29. Bratzler DW, Normand S‐LT, Wang Y, et al. An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
  30. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
Journal of Hospital Medicine - 8(8): 428-435

Chronic obstructive pulmonary disease (COPD) affects as many as 24 million individuals in the United States, is responsible for more than 700,000 annual hospital admissions, and is currently the nation's third leading cause of death; it accounted for nearly $49.9 billion in medical spending in 2010.[1, 2] Reported in‐hospital mortality rates for patients hospitalized for exacerbations of COPD range from 2% to 5%.[3, 4, 5, 6, 7] Information about 30‐day mortality rates following hospitalization for COPD is more limited; however, international studies suggest that rates range from 3% to 9%,[8, 9] and 90‐day mortality rates exceed 15%.[10]

Despite this significant clinical and economic impact, there have been no large‐scale, sustained efforts to measure the quality or outcomes of hospital care for patients with COPD in the United States. What little is known about the treatment of patients with COPD suggests widespread opportunities to increase adherence to guideline‐recommended therapies, to reduce the use of ineffective treatments and tests, and to address variation in care across institutions.[5, 11, 12]

Public reporting of hospital performance is a key strategy for improving the quality and safety of hospital care, both in the United States and internationally.[13] Since 2007, the Centers for Medicare and Medicaid Services (CMS) has reported hospital mortality rates on the Hospital Compare Web site, and COPD is 1 of the conditions highlighted in the Affordable Care Act for future consideration.[14] Such initiatives rely on validated, risk‐adjusted performance measures for comparisons across institutions and to enable outcomes to be tracked over time. We present the development, validation, and results of a model intended for public reporting of risk‐standardized mortality rates for patients hospitalized with exacerbations of COPD that has been endorsed by the National Quality Forum.[15]

METHODS

Approach to Measure Development

We developed this measure in accordance with guidelines described by the National Quality Forum,[16] CMS' Measure Management System,[17] and the American Heart Association scientific statement, Standards for Statistical Models Used for Public Reporting of Health Outcomes.[18] Throughout the process we obtained expert clinical and stakeholder input through meetings with a clinical advisory group and a national technical expert panel (see Acknowledgments). Last, we presented the proposed measure specifications and a summary of the technical expert panel discussions online and made a widely distributed call for public comments. We took the comments into consideration during the final stages of measure development (available at https://www.cms.gov/MMS/17_CallforPublicComment.asp).

Data Sources

We used claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files from 2008 to develop and validate the model, and examined model reliability using data from 2007 and 2009. The Medicare enrollment database was used to determine Medicare Fee‐for‐Service enrollment and mortality.

Study Cohort

Admissions were considered eligible for inclusion if the patient was 65 years or older, was admitted to a nonfederal acute care hospital in the United States, and had a principal diagnosis of COPD or a principal diagnosis of acute respiratory failure or respiratory arrest when paired with a secondary diagnosis of COPD with exacerbation (Table 1).

ICD‐9‐CM Codes Used to Define the Measure Cohort
ICD‐9‐CM – Description
  • NOTE: Abbreviations: COPD, chronic obstructive pulmonary disease; ICD‐9‐CM, International Classification of Diseases, 9th Revision, Clinical Modification; NOS, not otherwise specified.

  • a: Principal diagnosis when combined with a secondary diagnosis of acute exacerbation of COPD (491.21, 491.22, 493.21, or 493.22).

491.21 – Obstructive chronic bronchitis; with (acute) exacerbation; acute exacerbation of COPD, decompensated COPD, decompensated COPD with exacerbation
491.22 – Obstructive chronic bronchitis; with acute bronchitis
491.8 – Other chronic bronchitis; chronic: tracheitis, tracheobronchitis
491.9 – Unspecified chronic bronchitis
492.8 – Other emphysema; emphysema (lung or pulmonary): NOS, centriacinar, centrilobular, obstructive, panacinar, panlobular, unilateral, vesicular; MacLeod's syndrome; Swyer‐James syndrome; unilateral hyperlucent lung
493.20 – Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, unspecified
493.21 – Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with status asthmaticus
493.22 – Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with (acute) exacerbation
496 – Chronic: nonspecific lung disease, obstructive lung disease, obstructive pulmonary disease (COPD) NOS. (Note: This code is not to be used with any code from categories 491-493.)
518.81a – Other diseases of lung; acute respiratory failure; respiratory failure NOS
518.82a – Other diseases of lung; acute respiratory failure; other pulmonary insufficiency, acute respiratory distress
518.84a – Other diseases of lung; acute respiratory failure; acute and chronic respiratory failure
799.1a – Other ill‐defined and unknown causes of morbidity and mortality; respiratory arrest, cardiorespiratory failure

If a patient was discharged and readmitted to a second hospital on the same or the next day, we combined the 2 acute care admissions into a single episode of care and assigned the mortality outcome to the first admitting hospital. We excluded admissions for patients who were enrolled in Medicare Hospice in the 12 months prior to or on the first day of the index hospitalization. An index admission was any eligible admission assessed in the measure for the outcome. We also excluded admissions for patients who were discharged against medical advice, those for whom vital status at 30 days was unknown or recorded inconsistently, and patients with unreliable data (eg, age >115 years). For patients with multiple hospitalizations during a single year, we randomly selected 1 admission per patient to avoid survival bias. Finally, to assure adequate risk adjustment we limited the analysis to patients who had continuous enrollment in Medicare Fee‐for‐Service Parts A and B for the 12 months prior to their index admission so that we could identify comorbid conditions coded during all prior encounters.
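The episode-building and exclusion rules above can be sketched in pandas. The column names and the simplified transfer test (treating any same- or next-day readmission as a continuation of the prior stay) are our own assumptions, not the measure's specification:

```python
import pandas as pd

def build_cohort(adm: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Schematic cohort construction; `adm` is assumed to have columns
    patient_id, hospital_id, admit_date, discharge_date, discharged_ama."""
    adm = adm.sort_values(["patient_id", "admit_date"]).copy()
    # Fold a readmission on the same or next day into the prior stay's episode,
    # so the outcome stays attached to the first admitting hospital.
    prev_discharge = adm.groupby("patient_id")["discharge_date"].shift()
    is_transfer = ((adm["admit_date"] - prev_discharge).dt.days <= 1).fillna(False)
    adm = adm[~is_transfer]
    # Exclude discharges against medical advice.
    adm = adm[~adm["discharged_ama"]]
    # Keep one randomly selected admission per patient per year (survival-bias guard).
    adm["year"] = adm["admit_date"].dt.year
    return adm.groupby(["patient_id", "year"]).sample(n=1, random_state=seed)
```

The random one-admission-per-patient step mirrors the stated goal of avoiding survival bias: frequently readmitted patients would otherwise be over-represented.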

Outcomes

The outcome of 30‐day mortality was defined as death from any cause within 30 days of the admission date for the index hospitalization. Mortality was assessed at 30 days to standardize the period of outcome ascertainment,[19] and because 30 days is a clinically meaningful time frame, during which differences in the quality of hospital care may be revealed.
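As a minimal illustration of the outcome window (the helper name and the inclusive day-30 boundary are our assumptions):

```python
from datetime import date, timedelta
from typing import Optional

def died_within_30_days(admit_date: date, death_date: Optional[date]) -> bool:
    """All-cause death on or before day 30, counted from the index admission date."""
    return death_date is not None and death_date <= admit_date + timedelta(days=30)
```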

Risk‐Adjustment Variables

We randomly selected half of all COPD admissions in 2008 that met the inclusion and exclusion criteria to create a model development sample. Candidate variables for inclusion in the risk‐standardized model were selected by a clinician team from diagnostic groups included in the Hierarchical Condition Category clinical classification system[20] and included age and comorbid conditions. Sleep apnea (International Classification of Diseases, 9th Revision, Clinical Modification [ICD‐9‐CM] condition codes 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, and 780.57) and mechanical ventilation (ICD‐9‐CM procedure codes 93.90, 96.70, 96.71, and 96.72) were also included as candidate variables.

We defined a condition as present for a given patient if it was coded in the inpatient, outpatient, or physician claims data sources in the preceding 12 months, including the index admission. Because a subset of the condition category variables can represent a complication of care, we did not consider them to be risk factors if they appeared only as secondary diagnosis codes for the index admission and not in claims submitted during the prior year.
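The rule distinguishing comorbidities from potential complications reduces to a small decision, sketched here with assumed boolean inputs (this is our schematic, not CMS code):

```python
def counts_as_risk_factor(in_prior_year_claims: bool,
                          secondary_on_index: bool,
                          can_be_complication: bool) -> bool:
    """A condition category counts as a risk factor if it was coded in any
    inpatient, outpatient, or physician claim in the preceding 12 months.
    If it appears only as a secondary diagnosis on the index admission, it
    counts only when it cannot represent a complication of care."""
    if in_prior_year_claims:
        return True
    return secondary_on_index and not can_be_complication
```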

We selected final variables for inclusion in the risk‐standardized model based on clinical considerations and a modified approach to stepwise logistic regression. The final patient‐level risk‐adjustment model included 42 variables (Table 2).

Adjusted OR for Model Risk Factors and Mortality in Development Sample (Hierarchical Logistic Regression Model)

Development sample (first three data columns): 150,035 admissions at 4537 hospitals. Validation sample (last three data columns): 149,646 admissions at 4535 hospitals.

| Variable | Frequency, % | OR | 95% CI | Frequency, % | OR | 95% CI |
|---|---|---|---|---|---|---|
| Demographics | | | | | | |
| Age − 65 years (continuous) | — | 1.03 | 1.03-1.04 | — | 1.03 | 1.03-1.04 |
| Cardiovascular/respiratory | | | | | | |
| Sleep apnea (ICD‐9‐CM: 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, 780.57)^a | 9.57 | 0.87 | 0.81-0.94 | 9.72 | 0.84 | 0.78-0.90 |
| History of mechanical ventilation (ICD‐9‐CM: 93.90, 96.70, 96.71, 96.72)^a | 6.00 | 1.19 | 1.11-1.27 | 6.00 | 1.15 | 1.08-1.24 |
| Respirator dependence/respiratory failure (CC 77-78)^a | 1.15 | 0.89 | 0.77-1.02 | 1.20 | 0.78 | 0.68-0.91 |
| Cardiorespiratory failure and shock (CC 79) | 26.35 | 1.60 | 1.53-1.68 | 26.34 | 1.59 | 1.52-1.66 |
| Congestive heart failure (CC 80) | 41.50 | 1.34 | 1.28-1.39 | 41.39 | 1.31 | 1.25-1.36 |
| Chronic atherosclerosis (CC 83-84)^a | 50.44 | 0.87 | 0.83-0.90 | 50.12 | 0.91 | 0.87-0.94 |
| Arrhythmias (CC 92-93) | 37.15 | 1.17 | 1.12-1.22 | 37.06 | 1.15 | 1.10-1.20 |
| Vascular or circulatory disease (CC 104-106) | 38.20 | 1.09 | 1.05-1.14 | 38.09 | 1.02 | 0.98-1.06 |
| Fibrosis of lung and other chronic lung disorder (CC 109) | 16.96 | 1.08 | 1.03-1.13 | 17.08 | 1.11 | 1.06-1.17 |
| Asthma (CC 110) | 17.05 | 0.67 | 0.63-0.70 | 16.90 | 0.67 | 0.63-0.70 |
| Pneumonia (CC 111-113) | 49.46 | 1.29 | 1.24-1.35 | 49.41 | 1.27 | 1.22-1.33 |
| Pleural effusion/pneumothorax (CC 114) | 11.78 | 1.17 | 1.11-1.23 | 11.54 | 1.18 | 1.12-1.25 |
| Other lung disorders (CC 115) | 53.07 | 0.80 | 0.77-0.83 | 53.17 | 0.83 | 0.80-0.87 |
| Other comorbid conditions | | | | | | |
| Metastatic cancer and acute leukemia (CC 7) | 2.76 | 2.34 | 2.14-2.56 | 2.79 | 2.15 | 1.97-2.35 |
| Lung, upper digestive tract, and other severe cancers (CC 8)^a | 5.98 | 1.80 | 1.68-1.92 | 6.02 | 1.98 | 1.85-2.11 |
| Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal and other cancers and tumors; other respiratory and heart neoplasms (CC 9-11) | 14.13 | 1.03 | 0.97-1.08 | 14.19 | 1.01 | 0.95-1.06 |
| Other digestive and urinary neoplasms (CC 12) | 6.91 | 0.91 | 0.84-0.98 | 7.05 | 0.85 | 0.79-0.92 |
| Diabetes and DM complications (CC 15-20, 119-120) | 38.31 | 0.91 | 0.87-0.94 | 38.29 | 0.91 | 0.87-0.94 |
| Protein‐calorie malnutrition (CC 21) | 7.40 | 2.18 | 2.07-2.30 | 7.44 | 2.09 | 1.98-2.20 |
| Disorders of fluid/electrolyte/acid‐base (CC 22-23) | 32.05 | 1.13 | 1.08-1.18 | 32.16 | 1.24 | 1.19-1.30 |
| Other endocrine/metabolic/nutritional disorders (CC 24) | 67.99 | 0.75 | 0.72-0.78 | 67.88 | 0.76 | 0.73-0.79 |
| Other gastrointestinal disorders (CC 36) | 56.21 | 0.81 | 0.78-0.84 | 56.18 | 0.78 | 0.75-0.81 |
| Osteoarthritis of hip or knee (CC 40) | 9.32 | 0.74 | 0.69-0.79 | 9.33 | 0.80 | 0.74-0.85 |
| Other musculoskeletal and connective tissue disorders (CC 43) | 64.14 | 0.83 | 0.80-0.86 | 64.20 | 0.83 | 0.80-0.87 |
| Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 40.80 | 1.08 | 1.04-1.12 | 40.72 | 1.08 | 1.04-1.13 |
| Dementia and senility (CC 49-50) | 17.06 | 1.09 | 1.04-1.14 | 16.97 | 1.09 | 1.04-1.15 |
| Drug/alcohol abuse, without dependence (CC 53)^a | 23.51 | 0.78 | 0.75-0.82 | 23.38 | 0.76 | 0.72-0.80 |
| Other psychiatric disorders (CC 60)^a | 16.49 | 1.12 | 1.07-1.18 | 16.43 | 1.12 | 1.06-1.17 |
| Quadriplegia, paraplegia, functional disability (CC 67-69, 100-102, 177-178) | 4.92 | 1.03 | 0.95-1.12 | 4.92 | 1.08 | 0.99-1.17 |
| Mononeuropathy, other neurological conditions/injuries (CC 76) | 11.35 | 0.85 | 0.80-0.91 | 11.28 | 0.88 | 0.83-0.93 |
| Hypertension and hypertensive disease (CC 90-91) | 80.40 | 0.78 | 0.75-0.82 | 80.35 | 0.79 | 0.75-0.83 |
| Stroke (CC 95-96)^a | 6.77 | 1.00 | 0.93-1.08 | 6.73 | 0.98 | 0.91-1.05 |
| Retinal disorders, except detachment and vascular retinopathies (CC 121) | 10.79 | 0.87 | 0.82-0.93 | 10.69 | 0.90 | 0.85-0.96 |
| Other eye disorders (CC 124)^a | 19.05 | 0.90 | 0.86-0.95 | 19.13 | 0.98 | 0.85-0.93 |
| Other ear, nose, throat, and mouth disorders (CC 127) | 35.21 | 0.83 | 0.80-0.87 | 35.02 | 0.80 | 0.77-0.83 |
| Renal failure (CC 131)^a | 17.92 | 1.12 | 1.07-1.18 | 18.16 | 1.13 | 1.08-1.19 |
| Decubitus ulcer or chronic skin ulcer (CC 148-149) | 7.42 | 1.27 | 1.19-1.35 | 7.42 | 1.33 | 1.25-1.42 |
| Other dermatological disorders (CC 153) | 28.46 | 0.90 | 0.87-0.94 | 28.32 | 0.89 | 0.86-0.93 |
| Trauma (CC 154-156, 158-161) | 9.04 | 1.09 | 1.03-1.16 | 8.99 | 1.15 | 1.08-1.22 |
| Vertebral fractures (CC 157) | 5.01 | 1.33 | 1.24-1.44 | 4.97 | 1.29 | 1.20-1.39 |
| Major complications of medical care and trauma (CC 164) | 5.47 | 0.81 | 0.75-0.88 | 5.55 | 0.82 | 0.76-0.89 |

NOTE: Abbreviations: CI, confidence interval; DM, diabetes mellitus; ICD‐9‐CM, International Classification of Diseases, 9th Revision, Clinical Modification; OR, odds ratio; CC, condition category.

^a Indicates variable forced into the model.

Model Derivation

We used hierarchical logistic regression models to model the log‐odds of mortality as a function of patient‐level clinical characteristics and a random hospital‐level intercept. At the patient level, each model adjusts the log‐odds of mortality for age and the selected clinical covariates. The second level models the hospital‐specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of mortality, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.
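In notation, the two-level model described above can be written as:

```latex
% Patient level: log-odds of 30-day death for patient j at hospital i
\operatorname{logit} \Pr(Y_{ij} = 1) = \alpha_i + \mathbf{x}_{ij}^{\top}\boldsymbol{\beta}
% Hospital level: hospital-specific intercepts share a common normal distribution
\alpha_i \sim N(\mu, \tau^2)
```

Here \(\mathbf{x}_{ij}\) are the patient-level covariates, \(\boldsymbol{\beta}\) their coefficients, and \(\tau^2\) the between-hospital variance; if hospitals did not differ after case-mix adjustment, all \(\alpha_i\) would equal \(\mu\).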

Estimation of Hospital Risk‐Standardized Mortality Rate

We calculated a risk‐standardized mortality rate, defined as the ratio of predicted to expected deaths (similar to observed‐to‐expected), multiplied by the national unadjusted mortality rate.[21] The expected number of deaths for each hospital was estimated by applying the estimated regression coefficients to the characteristics of each hospital's patients, adding the average of the hospital‐specific intercepts, transforming the data by using an inverse logit function, and summing the data from all patients in the hospital to obtain the count. The predicted number of deaths was calculated in the same way, substituting the hospital‐specific intercept for the average hospital‐specific intercept.
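A minimal sketch of this predicted-to-expected calculation, assuming the patient linear predictors (each patient's covariates times the estimated coefficients) for one hospital are already in hand; the function and variable names are illustrative, not the measure's actual code.

```python
import math

def inv_logit(x):
    """Inverse logit transform: maps a log-odds value to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def risk_standardized_rate(xb, hospital_intercept, mean_intercept, national_rate):
    """Risk-standardized rate = (predicted / expected) * national rate.
    Expected deaths use the average hospital-specific intercept; predicted
    deaths substitute the hospital's own intercept. xb holds each patient's
    linear predictor (covariates times coefficients)."""
    expected = sum(inv_logit(mean_intercept + v) for v in xb)
    predicted = sum(inv_logit(hospital_intercept + v) for v in xb)
    return (predicted / expected) * national_rate
```

A hospital whose intercept equals the average intercept has a predicted-to-expected ratio of 1 and so receives exactly the national unadjusted rate.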

Model Performance, Validation, and Reliability Testing

We used the remaining admissions in 2008 as the model validation sample. We computed several summary statistics to assess the patient‐level model performance in both the development and validation samples,[22] including over‐fitting indices, predictive ability, area under the receiver operating characteristic (ROC) curve, distribution of residuals, and model χ2. In addition, we assessed face validity through a survey of members of the technical expert panel. To assess reliability of the model across data years, we repeated the modeling process using qualifying COPD admissions in both 2007 and 2009. Finally, to assess generalizability we evaluated the model's performance in an all‐payer sample of data from patients admitted to California hospitals in 2006.
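Of these summary statistics, the area under the ROC curve (C statistic) has a simple rank-based interpretation: the proportion of death/survivor pairs in which the patient who died received the higher predicted probability. A small illustrative implementation (not the authors' code):

```python
def c_statistic(y, p):
    """Area under the ROC curve computed directly over all outcome-discordant
    pairs: a pair is concordant when the death received the higher predicted
    probability; ties count one-half."""
    deaths = [pi for yi, pi in zip(y, p) if yi == 1]
    survivors = [pi for yi, pi in zip(y, p) if yi == 0]
    if not deaths or not survivors:
        raise ValueError("need at least one death and one survivor")
    concordant = sum(
        1.0 if d > s else 0.5 if d == s else 0.0
        for d in deaths for s in survivors
    )
    return concordant / (len(deaths) * len(survivors))
```

A value of 0.5 indicates no discrimination (coin-flip ranking) and 1.0 perfect discrimination; the 0.72 reported below sits in the range conventionally described as good.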

Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc., Cary, NC). We estimated the hierarchical models using the GLIMMIX procedure in SAS.

The Human Investigation Committee at the Yale University School of Medicine/Yale New Haven Hospital approved an exemption (HIC#0903004927) for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation

After exclusions were applied, the development sample included 150,035 admissions in 2008 at 4537 US hospitals (Figure 1). Factors that were most strongly associated with the risk of mortality included metastatic cancer (odds ratio [OR] 2.34), protein‐calorie malnutrition (OR 2.18), nonmetastatic cancers of the lung and upper digestive tract (OR 1.80), cardiorespiratory failure and shock (OR 1.60), and congestive heart failure (OR 1.34) (Table 2).

Figure 1
Model development and validation samples. Abbreviations: COPD, chronic obstructive pulmonary disease; FFS, Fee‐for‐Service. Exclusion categories are not mutually exclusive.

Model Performance, Validation, and Reliability

The model had a C statistic of 0.72, indicating good discrimination, and predicted mortality in the development sample ranged from 1.52% in the lowest decile to 23.74% in the highest. The model validation sample, comprising the remaining cases from 2008, included 149,646 admissions from 4535 hospitals. Variable frequencies and ORs were similar in both samples (Table 2). Model performance was also similar in the validation samples, with good model discrimination and fit (Table 3). Ten of 12 technical expert panel members responded to the survey, of whom 90% at least somewhat agreed with the statement, "The COPD mortality measure provides an accurate reflection of quality." When the model was applied to patients age 18 years and older in the 2006 California Patient Discharge Data, overall discrimination was good (C statistic, 0.74), including in those age 18 to 64 years (C statistic, 0.75) and those 65 years and older (C statistic, 0.70).

Model Performance in Development and Validation Samples

| Indices | Development Sample, 2008 | Validation Sample, 2008 | Data Year 2007 | Data Year 2009 |
|---|---|---|---|---|
| Number of admissions | 150,035 | 149,646 | 259,911 | 279,377 |
| Number of hospitals | 4537 | 4535 | 4636 | 4571 |
| Mean risk‐standardized mortality rate, % (SD) | 8.62 (0.94) | 8.64 (1.07) | 8.97 (1.12) | 8.08 (1.09) |
| Calibration (γ0, γ1) | 0.034, 0.985 | 0.009, 1.004 | 0.095, 1.022 | 0.120, 0.981 |
| Discrimination: predictive ability, lowest to highest decile, % | 1.52-23.74 | 1.60-23.78 | 1.54-24.64 | 1.42-22.36 |
| Discrimination: area under the ROC curve, C statistic | 0.720 | 0.723 | 0.728 | 0.722 |
| Residuals lack of fit, Pearson residuals falling in range, % | | | | |
| < −2 | 0 | 0 | 0 | 0 |
| [−2, 0) | 91.14 | 91.4 | 91.08 | 91.93 |
| [0, 2) | 1.66 | 1.7 | 1.96 | 1.42 |
| ≥ 2 | 6.93 | 6.91 | 6.96 | 6.65 |
| Model Wald χ2 (number of covariates) | 6982.11 (42) | 7051.50 (42) | 13042.35 (42) | 12542.15 (42) |
| P value | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| Between‐hospital variance (standard error) | 0.067 (0.008) | 0.078 (0.009) | 0.067 (0.006) | 0.072 (0.006) |

NOTE: Abbreviations: ROC, receiver operating characteristic; SD, standard deviation. Over‐fitting indices (γ0, γ1) provide evidence of over‐fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities are p̂ = 1/(1 + exp{−Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample: Logit(P(Y=1|Z)) = γ0 + γ1·Z. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over‐fitting.

Reliability testing demonstrated consistent performance over several years. The frequency and ORs of the variables included in the model showed only minor changes over time. The area under the ROC curve (C statistic) was 0.73 for the model in the 2007 sample and 0.72 for the model using 2009 data (Table 3).

Hospital Risk‐Standardized Mortality Rates

The mean unadjusted hospital 30‐day mortality rate was 8.6% and ranged from 0% to 100% (Figure 2a). Risk‐standardized mortality rates varied across hospitals (Figure 2b). The mean risk‐standardized mortality rate was 8.6% and ranged from 5.9% to 13.5%. The odds of mortality at a hospital 1 standard deviation above average was 1.20 times that of a hospital 1 standard deviation below average.

Figure 2
(a) Distribution of hospital‐level 30‐day mortality rates and (b) hospital‐level 30‐day risk‐standardized mortality rates (2008 development sample; n = 150,035 admissions from 4537 hospitals). Abbreviations: COPD, chronic obstructive pulmonary disease.

DISCUSSION

We present a hospital‐level risk‐standardized mortality measure for patients admitted with COPD, based on administrative claims data, that is intended for public reporting and has achieved endorsement by the National Quality Forum, a voluntary consensus standards‐setting organization. Across more than 4500 US hospitals, the mean 30‐day risk‐standardized mortality rate in 2008 was 8.6%, and we observed considerable variation across institutions despite adjustment for case mix, suggesting that improvement by lower‐performing institutions may be an achievable goal.

Although improving the delivery of evidence‐based care processes and outcomes of patients with acute myocardial infarction, heart failure, and pneumonia has been the focus of national quality improvement efforts for more than a decade, COPD has largely been overlooked.[23] Within this context, this analysis represents the first attempt to systematically measure, at the hospital level, 30‐day all‐cause mortality for patients admitted to US hospitals for exacerbation of COPD. The model we have developed and validated is intended to be used to compare the performance of hospitals while controlling for differences in the pretreatment risk of mortality of patients and accounting for the clustering of patients within hospitals, and will facilitate surveillance of hospital‐level risk‐adjusted outcomes over time.

In contrast to process‐based measures of quality, such as the percentage of patients with pneumonia who receive appropriate antibiotic therapy, performance measures based on patient outcomes provide a more comprehensive view of care and are more consistent with patients' goals.[24] Additionally, it is well established that hospital performance on individual and composite process measures explains only a small amount of the observed variation in patient outcomes between institutions.[25] In this regard, outcome measures incorporate important but difficult‐to‐measure aspects of care, such as diagnostic accuracy and timing, communication and teamwork, recognition of and response to complications, and care coordination at the time of transfers between levels of care and care settings. Nevertheless, when used for making inferences about the quality of hospital care, individual measures such as the risk‐standardized hospital mortality rate should be interpreted in the context of other performance measures, including readmission, patient experience, and costs of care.

A number of prior investigators have described the outcomes of care for patients hospitalized with exacerbations of COPD, including identifying risk factors for mortality. Patil et al. carried out an analysis of the 1996 Nationwide Inpatient Sample and described an overall in‐hospital mortality rate of 2.5% among patients with COPD, and reported that a multivariable model containing patient sociodemographic characteristics and comorbidities had an area under the ROC curve of 0.70.[3] In contrast, this hospital‐level measure includes patients with a principal diagnosis of respiratory failure and focuses on 30‐day rather than inpatient mortality, accounting for the nearly 3‐fold higher mortality rate we observed. In a more recent study that used clinical data from a large multistate database, Tabak et al. developed a prediction model for inpatient mortality for patients with COPD that contained only 4 factors: age, blood urea nitrogen, mental status, and pulse, and achieved an area under the ROC curve of 0.72.[4] The simplicity of such a model and its reliance on clinical measurements makes it particularly well suited for bedside application by clinicians, but less valuable for large‐scale public reporting programs that rely on administrative data. In the only other study identified that focused on the assessment of hospital mortality rates, Agabiti et al. analyzed the outcomes of 12,756 patients hospitalized for exacerbations of COPD, using similar ICD‐9‐CM diagnostic criteria as in this study, at 21 hospitals in Rome, Italy.[26] They reported an average crude 30‐day mortality rate of 3.8% among a group of 5 benchmark hospitals and an average mortality of 7.5% (range, 5.2%-17.2%) among the remaining institutions.

To put the variation we observed in mortality rates into a broader context, the relative difference in the risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure, whereas rates varied 30% for COPD, from 7.6% to 9.9%.[27] Model discrimination in COPD (C statistic, 0.72) was also similar to that reported for models used for public reporting of hospital mortality in acute myocardial infarction (C statistic, 0.71) and pneumonia (C statistic, 0.72).

This study has a number of important strengths. First, the model was developed from a large sample of recent Medicare claims, achieved good discrimination, and was validated in samples not limited to Medicare beneficiaries. Second, by including patients with a principal diagnosis of COPD, as well as those with a principal diagnosis of acute respiratory failure when accompanied by a secondary diagnosis of COPD with acute exacerbation, this model can be used to assess hospital performance across the full spectrum of disease severity. This broad set of ICD‐9‐CM codes used to define the cohort also ensures that efforts to measure hospital performance will be less influenced by differences in documentation and coding practices across hospitals relating to the diagnosis or sequencing of acute respiratory failure diagnoses. Moreover, the inclusion of patients with respiratory failure is important because these patients have the greatest risk of mortality, and are those in whom efforts to improve the quality and safety of care may have the greatest impact. Third, rather than relying solely on information documented during the index admission, we used ambulatory and inpatient claims from the full year prior to the index admission to identify comorbidities and to distinguish them from potential complications of care. Finally, we did not include factors such as hospital characteristics (eg, number of beds, teaching status) in the model. Although they might have improved overall predictive ability, the goal of the hospital mortality measure is to enable comparisons of mortality rates among hospitals while controlling for differences in patient characteristics. To the extent that factors such as size or teaching status might be independently associated with hospital outcomes, it would be inappropriate to adjust away their effects, because mortality risk should not be influenced by hospital characteristics other than through their effects on quality.

These results should be viewed in light of several limitations. First, we used ICD‐9‐CM codes derived from claims files to define the patient populations included in the measure rather than collecting clinical or physiologic information prospectively or through manual review of medical records, such as the forced expiratory volume in 1 second or whether the patient required long‐term oxygen therapy. Nevertheless, we included a broad set of potential diagnosis codes to capture the full spectrum of COPD exacerbations and to minimize differences in coding across hospitals. Second, because the risk adjustment included diagnoses coded in the year prior to the index admission, it is potentially subject to bias due to regional differences in medical care utilization that are not driven by underlying differences in patient illness.[28] Third, using administrative claims data, we observed some paradoxical associations in the model that are difficult to explain on clinical grounds, such as a protective effect of substance and alcohol abuse or prior episodes of respiratory failure. Fourth, although we excluded patients from the analysis who were enrolled in hospice prior to, or on the day of, the index admission, we did not exclude those who chose to withdraw support, transitioned to comfort measures only, or enrolled in hospice care during a hospitalization. We do not seek to penalize hospitals for being sensitive to the preferences of patients at the end of life. At the same time, it is equally important that the measure is capable of detecting the outcomes of suboptimal care that may in some instances lead a patient or their family to withdraw support or choose hospice. Finally, we did not have the opportunity to validate the model against a clinical registry of patients with COPD, because such data do not currently exist. Nevertheless, the use of claims as a surrogate for chart data for risk adjustment has been validated for several conditions, including acute myocardial infarction, heart failure, and pneumonia.[29, 30]

CONCLUSIONS

Risk‐standardized 30‐day mortality rates for Medicare beneficiaries with COPD vary across hospitals in the US. Calculating and reporting hospital outcomes using validated performance measures may catalyze quality improvement activities and lead to better outcomes. Additional research would be helpful to confirm that hospitals with lower mortality rates achieve care that meets the goals of patients and their families better than at hospitals with higher mortality rates.

Acknowledgment

The authors thank the following members of the technical expert panel: Darlene Bainbridge, RN, MS, NHA, CPHQ, CPHRM, President/CEO, Darlene D. Bainbridge & Associates, Inc.; Robert A. Balk, MD, Director of Pulmonary and Critical Care Medicine, Rush University Medical Center; Dale Bratzler, DO, MPH, President and CEO, Oklahoma Foundation for Medical Quality; Scott Cerreta, RRT, Director of Education, COPD Foundation; Gerard J. Criner, MD, Director of Temple Lung Center and Divisions of Pulmonary and Critical Care Medicine, Temple University; Guy D'Andrea, MBA, President, Discern Consulting; Jonathan Fine, MD, Director of Pulmonary Fellowship, Research and Medical Education, Norwalk Hospital; David Hopkins, MS, PhD, Senior Advisor, Pacific Business Group on Health; Fred Martin Jacobs, MD, JD, FACP, FCCP, FCLM, Executive Vice President and Director, Saint Barnabas Quality Institute; Natalie Napolitano, MPH, RRT‐NPS, Respiratory Therapist, Inova Fairfax Hospital; Russell Robbins, MD, MBA, Principal and Senior Clinical Consultant, Mercer. In addition, the authors acknowledge and thank Angela Merrill, Sandi Nelson, Marian Wrobel, and Eric Schone from Mathematica Policy Research, Inc., Sharon‐Lise T. Normand from Harvard Medical School, and Lein Han and Michael Rapp at The Centers for Medicare & Medicaid Services for their contributions to this work.

Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number: HHSM‐5002008‐0025I/HHSM‐500‐T0001, Modification No. 000007, Option Year 2 Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views expressed in this manuscript are those of the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

Chronic obstructive pulmonary disease (COPD) affects as many as 24 million individuals in the United States, is responsible for more than 700,000 annual hospital admissions, and is currently the nation's third leading cause of death, accounting for nearly $49.9 billion in medical spending in 2010.[1, 2] Reported in‐hospital mortality rates for patients hospitalized for exacerbations of COPD range from 2% to 5%.[3, 4, 5, 6, 7] Information about 30‐day mortality rates following hospitalization for COPD is more limited; however, international studies suggest that rates range from 3% to 9%,[8, 9] and 90‐day mortality rates exceed 15%.[10]

Despite this significant clinical and economic impact, there have been no large‐scale, sustained efforts to measure the quality or outcomes of hospital care for patients with COPD in the United States. What little is known about the treatment of patients with COPD suggests widespread opportunities to increase adherence to guideline‐recommended therapies, to reduce the use of ineffective treatments and tests, and to address variation in care across institutions.[5, 11, 12]

Public reporting of hospital performance is a key strategy for improving the quality and safety of hospital care, both in the United States and internationally.[13] Since 2007, the Centers for Medicare and Medicaid Services (CMS) has reported hospital mortality rates on the Hospital Compare Web site, and COPD is 1 of the conditions highlighted in the Affordable Care Act for future consideration.[14] Such initiatives rely on validated, risk‐adjusted performance measures for comparisons across institutions and to enable outcomes to be tracked over time. We present the development, validation, and results of a model intended for public reporting of risk‐standardized mortality rates for patients hospitalized with exacerbations of COPD that has been endorsed by the National Quality Forum.[15]

METHODS

Approach to Measure Development

We developed this measure in accordance with guidelines described by the National Quality Forum,[16] CMS' Measure Management System,[17] and the American Heart Association scientific statement, Standards for Statistical Models Used for Public Reporting of Health Outcomes.[18] Throughout the process we obtained expert clinical and stakeholder input through meetings with a clinical advisory group and a national technical expert panel (see Acknowledgments). Last, we presented the proposed measure specifications and a summary of the technical expert panel discussions online and made a widely distributed call for public comments. We took the comments into consideration during the final stages of measure development (available at https://www.cms.gov/MMS/17_CallforPublicComment.asp).

Data Sources

We used claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files from 2008 to develop and validate the model, and examined model reliability using data from 2007 and 2009. The Medicare enrollment database was used to determine Medicare Fee‐for‐Service enrollment and mortality.

Study Cohort

Admissions were considered eligible for inclusion if the patient was 65 years or older, was admitted to a nonfederal acute care hospital in the United States, and had a principal diagnosis of COPD or a principal diagnosis of acute respiratory failure or respiratory arrest when paired with a secondary diagnosis of COPD with exacerbation (Table 1).

ICD‐9‐CM Codes Used to Define the Measure Cohort

| ICD‐9‐CM | Description |
|---|---|
| 491.21 | Obstructive chronic bronchitis; with (acute) exacerbation; acute exacerbation of COPD, decompensated COPD, decompensated COPD with exacerbation |
| 491.22 | Obstructive chronic bronchitis; with acute bronchitis |
| 491.8 | Other chronic bronchitis; chronic: tracheitis, tracheobronchitis |
| 491.9 | Unspecified chronic bronchitis |
| 492.8 | Other emphysema; emphysema (lung or pulmonary): NOS, centriacinar, centrilobular, obstructive, panacinar, panlobular, unilateral, vesicular; MacLeod's syndrome; Swyer‐James syndrome; unilateral hyperlucent lung |
| 493.20 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, unspecified |
| 493.21 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with status asthmaticus |
| 493.22 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with (acute) exacerbation |
| 496 | Chronic: nonspecific lung disease, obstructive lung disease, obstructive pulmonary disease (COPD) NOS. (Note: This code is not to be used with any code from categories 491-493.) |
| 518.81^a | Other diseases of lung; acute respiratory failure; respiratory failure NOS |
| 518.82^a | Other diseases of lung; acute respiratory failure; other pulmonary insufficiency, acute respiratory distress |
| 518.84^a | Other diseases of lung; acute respiratory failure; acute and chronic respiratory failure |
| 799.1^a | Other ill‐defined and unknown causes of morbidity and mortality; respiratory arrest, cardiorespiratory failure |

NOTE: Abbreviations: COPD, chronic obstructive pulmonary disease; ICD‐9‐CM, International Classification of Diseases, 9th Revision, Clinical Modification; NOS, not otherwise specified.

^a Principal diagnosis when combined with a secondary diagnosis of acute exacerbation of COPD (491.21, 491.22, 493.21, or 493.22).
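The cohort definition in Table 1 reduces to a simple membership test; the helper below is a hypothetical illustration (the code sets are transcribed from the table, and the function name and arguments are assumptions).

```python
# Principal COPD diagnosis codes that qualify directly (Table 1).
COPD_PRINCIPAL = {"491.21", "491.22", "491.8", "491.9", "492.8",
                  "493.20", "493.21", "493.22", "496"}
# Respiratory failure/arrest codes that qualify only when paired with a
# secondary acute-exacerbation-of-COPD code.
RESP_FAILURE_PRINCIPAL = {"518.81", "518.82", "518.84", "799.1"}
ACUTE_EXACERBATION_SECONDARY = {"491.21", "491.22", "493.21", "493.22"}

def in_measure_cohort(principal_dx, secondary_dxs, age):
    """Return True if an admission meets the measure's diagnostic and age
    eligibility criteria (sketch; other exclusions apply separately)."""
    if age < 65:
        return False
    if principal_dx in COPD_PRINCIPAL:
        return True
    if principal_dx in RESP_FAILURE_PRINCIPAL:
        return any(dx in ACUTE_EXACERBATION_SECONDARY for dx in secondary_dxs)
    return False
```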

If a patient was discharged and readmitted to a second hospital on the same or the next day, we combined the 2 acute care admissions into a single episode of care and assigned the mortality outcome to the first admitting hospital. We excluded admissions for patients who were enrolled in Medicare Hospice in the 12 months prior to or on the first day of the index hospitalization. An index admission was any eligible admission assessed in the measure for the outcome. We also excluded admissions for patients who were discharged against medical advice, those for whom vital status at 30 days was unknown or recorded inconsistently, and patients with unreliable data (eg, age >115 years). For patients with multiple hospitalizations during a single year, we randomly selected 1 admission per patient to avoid survival bias. Finally, to assure adequate risk adjustment we limited the analysis to patients who had continuous enrollment in Medicare Fee‐for‐Service Parts A and B for the 12 months prior to their index admission so that we could identify comorbid conditions coded during all prior encounters.

Outcomes

The outcome of 30‐day mortality was defined as death from any cause within 30 days of the admission date for the index hospitalization. Mortality was assessed at 30 days to standardize the period of outcome ascertainment,[19] and because 30 days is a clinically meaningful time frame, during which differences in the quality of hospital care may be revealed.

Risk‐Adjustment Variables

We randomly selected half of all COPD admissions in 2008 that met the inclusion and exclusion criteria to create a model development sample. Candidate variables for inclusion in the risk‐standardized model were selected by a clinician team from diagnostic groups included in the Hierarchical Condition Category clinical classification system[20] and included age and comorbid conditions. Sleep apnea (International Classification of Diseases, 9th Revision, Clinical Modification [ICD‐9‐CM] condition codes 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, and 780.57) and mechanical ventilation (ICD‐9‐CM procedure codes 93.90, 96.70, 96.71, and 96.72) were also included as candidate variables.

We defined a condition as present for a given patient if it was coded in the inpatient, outpatient, or physician claims data sources in the preceding 12 months, including the index admission. Because a subset of the condition category variables can represent a complication of care, we did not consider them to be risk factors if they appeared only as secondary diagnosis codes for the index admission and not in claims submitted during the prior year.

We selected final variables for inclusion in the risk‐standardized model based on clinical considerations and a modified approach to stepwise logistic regression. The final patient‐level risk‐adjustment model included 42 variables (Table 2).

Table 2. Adjusted OR for Model Risk Factors and Mortality in Development and Validation Samples (Hierarchical Logistic Regression Model)

Development sample: 150,035 admissions at 4537 hospitals (first three data columns). Validation sample: 149,646 admissions at 4535 hospitals (last three data columns).

| Variable | Frequency, % | OR | 95% CI | Frequency, % | OR | 95% CI |
|---|---|---|---|---|---|---|
| *Demographics* | | | | | | |
| Age − 65 years (continuous) | | 1.03 | 1.03-1.04 | | 1.03 | 1.03-1.04 |
| *Cardiovascular/respiratory* | | | | | | |
| Sleep apnea (ICD-9-CM: 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, 780.57)^a | 9.57 | 0.87 | 0.81-0.94 | 9.72 | 0.84 | 0.78-0.90 |
| History of mechanical ventilation (ICD-9-CM: 93.90, 96.70, 96.71, 96.72)^a | 6.00 | 1.19 | 1.11-1.27 | 6.00 | 1.15 | 1.08-1.24 |
| Respirator dependence/respiratory failure (CC 77-78)^a | 1.15 | 0.89 | 0.77-1.02 | 1.20 | 0.78 | 0.68-0.91 |
| Cardiorespiratory failure and shock (CC 79) | 26.35 | 1.60 | 1.53-1.68 | 26.34 | 1.59 | 1.52-1.66 |
| Congestive heart failure (CC 80) | 41.50 | 1.34 | 1.28-1.39 | 41.39 | 1.31 | 1.25-1.36 |
| Chronic atherosclerosis (CC 83-84)^a | 50.44 | 0.87 | 0.83-0.90 | 50.12 | 0.91 | 0.87-0.94 |
| Arrhythmias (CC 92-93) | 37.15 | 1.17 | 1.12-1.22 | 37.06 | 1.15 | 1.10-1.20 |
| Vascular or circulatory disease (CC 104-106) | 38.20 | 1.09 | 1.05-1.14 | 38.09 | 1.02 | 0.98-1.06 |
| Fibrosis of lung and other chronic lung disorder (CC 109) | 16.96 | 1.08 | 1.03-1.13 | 17.08 | 1.11 | 1.06-1.17 |
| Asthma (CC 110) | 17.05 | 0.67 | 0.63-0.70 | 16.90 | 0.67 | 0.63-0.70 |
| Pneumonia (CC 111-113) | 49.46 | 1.29 | 1.24-1.35 | 49.41 | 1.27 | 1.22-1.33 |
| Pleural effusion/pneumothorax (CC 114) | 11.78 | 1.17 | 1.11-1.23 | 11.54 | 1.18 | 1.12-1.25 |
| Other lung disorders (CC 115) | 53.07 | 0.80 | 0.77-0.83 | 53.17 | 0.83 | 0.80-0.87 |
| *Other comorbid conditions* | | | | | | |
| Metastatic cancer and acute leukemia (CC 7) | 2.76 | 2.34 | 2.14-2.56 | 2.79 | 2.15 | 1.97-2.35 |
| Lung, upper digestive tract, and other severe cancers (CC 8)^a | 5.98 | 1.80 | 1.68-1.92 | 6.02 | 1.98 | 1.85-2.11 |
| Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal and other cancers and tumors; other respiratory and heart neoplasms (CC 9-11) | 14.13 | 1.03 | 0.97-1.08 | 14.19 | 1.01 | 0.95-1.06 |
| Other digestive and urinary neoplasms (CC 12) | 6.91 | 0.91 | 0.84-0.98 | 7.05 | 0.85 | 0.79-0.92 |
| Diabetes and DM complications (CC 15-20, 119-120) | 38.31 | 0.91 | 0.87-0.94 | 38.29 | 0.91 | 0.87-0.94 |
| Protein-calorie malnutrition (CC 21) | 7.40 | 2.18 | 2.07-2.30 | 7.44 | 2.09 | 1.98-2.20 |
| Disorders of fluid/electrolyte/acid-base (CC 22-23) | 32.05 | 1.13 | 1.08-1.18 | 32.16 | 1.24 | 1.19-1.30 |
| Other endocrine/metabolic/nutritional disorders (CC 24) | 67.99 | 0.75 | 0.72-0.78 | 67.88 | 0.76 | 0.73-0.79 |
| Other gastrointestinal disorders (CC 36) | 56.21 | 0.81 | 0.78-0.84 | 56.18 | 0.78 | 0.75-0.81 |
| Osteoarthritis of hip or knee (CC 40) | 9.32 | 0.74 | 0.69-0.79 | 9.33 | 0.80 | 0.74-0.85 |
| Other musculoskeletal and connective tissue disorders (CC 43) | 64.14 | 0.83 | 0.80-0.86 | 64.20 | 0.83 | 0.80-0.87 |
| Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 40.80 | 1.08 | 1.04-1.12 | 40.72 | 1.08 | 1.04-1.13 |
| Dementia and senility (CC 49-50) | 17.06 | 1.09 | 1.04-1.14 | 16.97 | 1.09 | 1.04-1.15 |
| Drug/alcohol abuse, without dependence (CC 53)^a | 23.51 | 0.78 | 0.75-0.82 | 23.38 | 0.76 | 0.72-0.80 |
| Other psychiatric disorders (CC 60)^a | 16.49 | 1.12 | 1.07-1.18 | 16.43 | 1.12 | 1.06-1.17 |
| Quadriplegia, paraplegia, functional disability (CC 67-69, 100-102, 177-178) | 4.92 | 1.03 | 0.95-1.12 | 4.92 | 1.08 | 0.99-1.17 |
| Mononeuropathy, other neurological conditions/injuries (CC 76) | 11.35 | 0.85 | 0.80-0.91 | 11.28 | 0.88 | 0.83-0.93 |
| Hypertension and hypertensive disease (CC 90-91) | 80.40 | 0.78 | 0.75-0.82 | 80.35 | 0.79 | 0.75-0.83 |
| Stroke (CC 95-96)^a | 6.77 | 1.00 | 0.93-1.08 | 6.73 | 0.98 | 0.91-1.05 |
| Retinal disorders, except detachment and vascular retinopathies (CC 121) | 10.79 | 0.87 | 0.82-0.93 | 10.69 | 0.90 | 0.85-0.96 |
| Other eye disorders (CC 124)^a | 19.05 | 0.90 | 0.86-0.95 | 19.13 | 0.89 | 0.85-0.93 |
| Other ear, nose, throat, and mouth disorders (CC 127) | 35.21 | 0.83 | 0.80-0.87 | 35.02 | 0.80 | 0.77-0.83 |
| Renal failure (CC 131)^a | 17.92 | 1.12 | 1.07-1.18 | 18.16 | 1.13 | 1.08-1.19 |
| Decubitus ulcer or chronic skin ulcer (CC 148-149) | 7.42 | 1.27 | 1.19-1.35 | 7.42 | 1.33 | 1.25-1.42 |
| Other dermatological disorders (CC 153) | 28.46 | 0.90 | 0.87-0.94 | 28.32 | 0.89 | 0.86-0.93 |
| Trauma (CC 154-156, 158-161) | 9.04 | 1.09 | 1.03-1.16 | 8.99 | 1.15 | 1.08-1.22 |
| Vertebral fractures (CC 157) | 5.01 | 1.33 | 1.24-1.44 | 4.97 | 1.29 | 1.20-1.39 |
| Major complications of medical care and trauma (CC 164) | 5.47 | 0.81 | 0.75-0.88 | 5.55 | 0.82 | 0.76-0.89 |

NOTE: Abbreviations: CC, condition category; CI, confidence interval; DM, diabetes mellitus; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; OR, odds ratio.

^a Indicates variable forced into the model.

Model Derivation

We used hierarchical logistic regression models to model the log‐odds of mortality as a function of patient‐level clinical characteristics and a random hospital‐level intercept. At the patient level, each model adjusts the log‐odds of mortality for age and the selected clinical covariates. The second level models the hospital‐specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of mortality, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.
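In notation, the two-level model described above can be written as:

```latex
\operatorname{logit}\,\Pr(Y_{ij}=1) \;=\; \alpha_j + X_{ij}\beta,
\qquad \alpha_j \sim N(\mu, \tau^2),
```

where \(Y_{ij}\) indicates 30-day death for patient \(i\) at hospital \(j\), \(X_{ij}\) are the patient-level covariates with coefficients \(\beta\), and \(\alpha_j\) is the hospital-specific intercept drawn from a normal distribution with mean \(\mu\) and between-hospital variance \(\tau^2\); if hospitals did not differ after patient-level adjustment, every \(\alpha_j\) would equal \(\mu\).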

Estimation of Hospital Risk‐Standardized Mortality Rate

We calculated a risk‐standardized mortality rate, defined as the ratio of predicted to expected deaths (similar to observed‐to‐expected), multiplied by the national unadjusted mortality rate.[21] The expected number of deaths for each hospital was estimated by applying the estimated regression coefficients to the characteristics of each hospital's patients, adding the average of the hospital‐specific intercepts, transforming the data by using an inverse logit function, and summing the data from all patients in the hospital to obtain the count. The predicted number of deaths was calculated in the same way, substituting the hospital‐specific intercept for the average hospital‐specific intercept.
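A minimal sketch of this predicted-over-expected calculation for one hospital, assuming NumPy (the function and variable names are illustrative; the actual measure software is SAS-based):

```python
import numpy as np

def inverse_logit(x):
    return 1.0 / (1.0 + np.exp(-x))

def risk_standardized_rate(X, beta, hospital_intercept, mean_intercept, national_rate):
    """Ratio of predicted to expected deaths, scaled by the national rate.

    X: (n_patients, n_covariates) covariate matrix for ONE hospital's patients
    beta: estimated patient-level regression coefficients
    hospital_intercept: this hospital's estimated hospital-specific intercept
    mean_intercept: average of the hospital-specific intercepts
    national_rate: national unadjusted 30-day mortality rate
    """
    linear = X @ beta
    # Expected deaths: this hospital's patients treated at an "average" hospital.
    expected = inverse_logit(linear + mean_intercept).sum()
    # Predicted deaths: the same patients with this hospital's own intercept.
    predicted = inverse_logit(linear + hospital_intercept).sum()
    return (predicted / expected) * national_rate
```

A hospital whose intercept equals the average intercept gets exactly the national rate; a higher-than-average intercept yields a rate above it.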

Model Performance, Validation, and Reliability Testing

We used the remaining admissions in 2008 as the model validation sample. We computed several summary statistics to assess patient-level model performance in both the development and validation samples,[22] including over-fitting indices, predictive ability, area under the receiver operating characteristic (ROC) curve, distribution of residuals, and model χ². In addition, we assessed face validity through a survey of members of the technical expert panel. To assess reliability of the model across data years, we repeated the modeling process using qualifying COPD admissions in both 2007 and 2009. Finally, to assess generalizability, we evaluated the model's performance in an all-payer sample of data from patients admitted to California hospitals in 2006.
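The C statistic used here is the usual rank-order measure of discrimination: the probability that a randomly selected patient who died was assigned a higher predicted risk than a randomly selected survivor. As an illustrative sketch (a direct pairwise implementation, not the study's actual code):

```python
def c_statistic(risk, died):
    """Area under the ROC curve by direct pairwise comparison: the share of
    (decedent, survivor) pairs in which the decedent had the higher predicted
    risk, with ties counted as half. O(n^2); fine for illustration."""
    events = [r for r, d in zip(risk, died) if d == 1]
    nonevents = [r for r, d in zip(risk, died) if d == 0]
    concordant = 0.0
    for e in events:
        for s in nonevents:
            if e > s:
                concordant += 1.0
            elif e == s:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))
```

A value of 0.5 indicates no discrimination (risk predictions no better than chance) and 1.0 indicates perfect rank ordering.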

Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc., Cary, NC). We estimated the hierarchical models using the GLIMMIX procedure in SAS.

The Human Investigation Committee at the Yale University School of Medicine/Yale New Haven Hospital approved an exemption (HIC#0903004927) for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation

After exclusions were applied, the development sample included 150,035 admissions in 2008 at 4537 US hospitals (Figure 1). Factors most strongly associated with the risk of mortality included metastatic cancer (odds ratio [OR] 2.34), protein-calorie malnutrition (OR 2.18), nonmetastatic cancers of the lung and upper digestive tract (OR 1.80), cardiorespiratory failure and shock (OR 1.60), and congestive heart failure (OR 1.34) (Table 2).

Figure 1
Model development and validation samples. Abbreviations: COPD, chronic obstructive pulmonary disease; FFS, Fee‐for‐Service. Exclusion categories are not mutually exclusive.

Model Performance, Validation, and Reliability

The model had a C statistic of 0.72, indicating good discrimination, and predicted mortality in the development sample ranged from 1.52% in the lowest decile to 23.74% in the highest. The model validation sample, using the remaining cases from 2008, included 149,646 admissions from 4535 hospitals. Variable frequencies and ORs were similar in both samples (Table 2). Model performance was also similar in the validation sample, with good model discrimination and fit (Table 3). Ten of 12 technical expert panel members responded to the survey, of whom 90% at least somewhat agreed with the statement, "The COPD mortality measure provides an accurate reflection of quality." When the model was applied to patients age 18 years and older in the 2006 California Patient Discharge Data, overall discrimination was good (C statistic, 0.74), including among those age 18 to 64 years (C statistic, 0.75) and those 65 and older (C statistic, 0.70).

Table 3. Model Performance in Development and Validation Samples

| Indices | Development Sample, 2008 | Validation Sample, 2008 | Data Year 2007 | Data Year 2009 |
|---|---|---|---|---|
| Number of admissions | 150,035 | 149,646 | 259,911 | 279,377 |
| Number of hospitals | 4537 | 4535 | 4636 | 4571 |
| Mean risk-standardized mortality rate, % (SD) | 8.62 (0.94) | 8.64 (1.07) | 8.97 (1.12) | 8.08 (1.09) |
| Calibration (γ₀, γ₁) | 0.034, 0.985 | 0.009, 1.004 | 0.095, 1.022 | 0.120, 0.981 |
| Discrimination, predictive ability (lowest decile %, highest decile %) | 1.52, 23.74 | 1.60, 23.78 | 1.54, 24.64 | 1.42, 22.36 |
| Discrimination, area under the ROC curve (C statistic) | 0.720 | 0.723 | 0.728 | 0.722 |
| Residuals lack of fit, Pearson residual fall, %: | | | | |
| < −2 | 0 | 0 | 0 | 0 |
| [−2, 0) | 91.14 | 91.4 | 91.08 | 91.93 |
| [0, 2) | 1.66 | 1.7 | 1.96 | 1.42 |
| ≥ 2 | 6.93 | 6.91 | 6.96 | 6.65 |
| Model Wald χ² (number of covariates) | 6982.11 (42) | 7051.50 (42) | 13042.35 (42) | 12542.15 (42) |
| P value | <0.0001 | <0.0001 | <0.0001 | <0.0001 |
| Between-hospital variance (standard error) | 0.067 (0.008) | 0.078 (0.009) | 0.067 (0.006) | 0.072 (0.006) |

NOTE: Abbreviations: ROC, receiver operating characteristic; SD, standard deviation. Over-fitting indices (γ₀, γ₁) provide evidence of over-fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities are p̂ = 1/(1 + exp{−Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample: Logit(P(Y=1|Z)) = γ₀ + γ₁Z. Estimated values of γ₀ far from 0 and estimated values of γ₁ far from 1 provide evidence of over-fitting.
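The over-fitting (calibration) indices can be computed by refitting a two-parameter logistic model of the outcome on the linear predictor in the validation sample. An illustrative sketch using Newton-Raphson with NumPy (assumed here; not the study's actual SAS code):

```python
import numpy as np

def overfitting_indices(z, y, n_iter=25):
    """Fit Logit(P(Y=1|Z)) = g0 + g1*Z by Newton-Raphson.

    z: linear predictor Z = Xb from the development-sample coefficients
    y: observed outcomes in the validation sample
    Estimates of g0 far from 0 or g1 far from 1 suggest over-fitting.
    """
    X = np.column_stack([np.ones_like(z), z])
    g = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ g))   # fitted probabilities
        W = p * (1.0 - p)                   # IRLS weights
        grad = X.T @ (y - p)                # score vector
        hess = X.T @ (X * W[:, None])       # observed information
        g = g + np.linalg.solve(hess, grad)
    return g  # (g0, g1)
```

Perfectly calibrated predictions recover (γ₀, γ₁) = (0, 1) exactly.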

Reliability testing demonstrated consistent performance over several years. The frequency and ORs of the variables included in the model showed only minor changes over time. The area under the ROC curve (C statistic) was 0.73 for the model in the 2007 sample and 0.72 for the model using 2009 data (Table 3).

Hospital Risk‐Standardized Mortality Rates

The mean unadjusted hospital 30-day mortality rate was 8.6% and ranged from 0% to 100% (Figure 2a). Risk-standardized mortality rates varied across hospitals (Figure 2b). The mean risk-standardized mortality rate was 8.6% and ranged from 5.9% to 13.5%. The odds of mortality at a hospital 1 standard deviation above the average were 1.20 times those at a hospital 1 standard deviation below the average.

Figure 2
(a) Distribution of hospital‐level 30‐day mortality rates and (b) hospital‐level 30‐day risk‐standardized mortality rates (2008 development sample; n = 150,035 admissions from 4537 hospitals). Abbreviations: COPD, chronic obstructive pulmonary disease.

DISCUSSION

We present a hospital‐level risk‐standardized mortality measure for patients admitted with COPD based on administrative claims data that are intended for public reporting and that have achieved endorsement by the National Quality Forum, a voluntary consensus standards‐setting organization. Across more than 4500 US hospitals, the mean 30‐day risk‐standardized mortality rate in 2008 was 8.6%, and we observed considerable variation across institutions, despite adjustment for case mix, suggesting that improvement by lower‐performing institutions may be an achievable goal.

Although improving the delivery of evidence‐based care processes and outcomes of patients with acute myocardial infarction, heart failure, and pneumonia has been the focus of national quality improvement efforts for more than a decade, COPD has largely been overlooked.[23] Within this context, this analysis represents the first attempt to systematically measure, at the hospital level, 30‐day all‐cause mortality for patients admitted to US hospitals for exacerbation of COPD. The model we have developed and validated is intended to be used to compare the performance of hospitals while controlling for differences in the pretreatment risk of mortality of patients and accounting for the clustering of patients within hospitals, and will facilitate surveillance of hospital‐level risk‐adjusted outcomes over time.

In contrast to process-based measures of quality, such as the percentage of patients with pneumonia who receive appropriate antibiotic therapy, performance measures based on patient outcomes provide a more comprehensive view of care and are more consistent with patients' goals.[24] Additionally, it is well established that hospital performance on individual and composite process measures explains only a small amount of the observed variation in patient outcomes between institutions.[25] In this regard, outcome measures incorporate important but difficult-to-measure aspects of care, such as diagnostic accuracy and timing, communication and teamwork, recognition of and response to complications, and care coordination at the time of transfers between levels of care and between care settings. Nevertheless, when used for making inferences about the quality of hospital care, individual measures such as the risk-standardized hospital mortality rate should be interpreted in the context of other performance measures, including readmission, patient experience, and costs of care.

A number of prior investigators have described the outcomes of care for patients hospitalized with exacerbations of COPD, including identifying risk factors for mortality. Patil et al. analyzed the 1996 Nationwide Inpatient Sample and described an overall in-hospital mortality rate of 2.5% among patients with COPD, and reported that a multivariable model containing patient sociodemographic characteristics and comorbidities had an area under the ROC curve of 0.70.[3] In contrast, this hospital-level measure includes patients with a principal diagnosis of respiratory failure and focuses on 30-day rather than inpatient mortality, accounting for the nearly 3-fold higher mortality rate we observed. In a more recent study that used clinical data from a large multistate database, Tabak et al. developed a prediction model for inpatient mortality for patients with COPD that contained only 4 factors (age, blood urea nitrogen, mental status, and pulse) and achieved an area under the ROC curve of 0.72.[4] The simplicity of such a model and its reliance on clinical measurements make it particularly well suited for bedside application by clinicians, but less valuable for large-scale public reporting programs that rely on administrative data. In the only other study identified that focused on the assessment of hospital mortality rates, Agabiti et al. analyzed the outcomes of 12,756 patients hospitalized for exacerbations of COPD, using ICD-9-CM diagnostic criteria similar to those in this study, at 21 hospitals in Rome, Italy.[26] They reported an average crude 30-day mortality rate of 3.8% among a group of 5 benchmark hospitals and an average mortality of 7.5% (range, 5.2%-17.2%) among the remaining institutions.

To put the variation we observed in mortality rates into a broader context, the relative difference in the risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure, whereas rates varied 30% for COPD, from 7.6% to 9.9%.[27] Model discrimination in COPD (C statistic, 0.72) was also similar to that reported for models used for public reporting of hospital mortality in acute myocardial infarction (C statistic, 0.71) and pneumonia (C statistic, 0.72).

This study has a number of important strengths. First, the model was developed from a large sample of recent Medicare claims, achieved good discrimination, and was validated in samples not limited to Medicare beneficiaries. Second, by including patients with a principal diagnosis of COPD, as well as those with a principal diagnosis of acute respiratory failure when accompanied by a secondary diagnosis of COPD with acute exacerbation, this model can be used to assess hospital performance across the full spectrum of disease severity. This broad set of ICD‐9‐CM codes used to define the cohort also ensures that efforts to measure hospital performance will be less influenced by differences in documentation and coding practices across hospitals relating to the diagnosis or sequencing of acute respiratory failure diagnoses. Moreover, the inclusion of patients with respiratory failure is important because these patients have the greatest risk of mortality, and are those in whom efforts to improve the quality and safety of care may have the greatest impact. Third, rather than relying solely on information documented during the index admission, we used ambulatory and inpatient claims from the full year prior to the index admission to identify comorbidities and to distinguish them from potential complications of care. Finally, we did not include factors such as hospital characteristics (eg, number of beds, teaching status) in the model. Although they might have improved overall predictive ability, the goal of the hospital mortality measure is to enable comparisons of mortality rates among hospitals while controlling for differences in patient characteristics. To the extent that factors such as size or teaching status might be independently associated with hospital outcomes, it would be inappropriate to adjust away their effects, because mortality risk should not be influenced by hospital characteristics other than through their effects on quality.

These results should be viewed in light of several limitations. First, we used ICD-9-CM codes derived from claims files to define the patient populations included in the measure rather than collecting clinical or physiologic information prospectively or through manual review of medical records, such as the forced expiratory volume in 1 second or whether the patient required long-term oxygen therapy. Nevertheless, we included a broad set of potential diagnosis codes to capture the full spectrum of COPD exacerbations and to minimize differences in coding across hospitals. Second, because the risk adjustment included diagnoses coded in the year prior to the index admission, it is potentially subject to bias from regional differences in medical care utilization that are not driven by underlying differences in patient illness.[28] Third, using administrative claims data, we observed some paradoxical associations in the model that are difficult to explain on clinical grounds, such as a protective effect of substance and alcohol abuse or prior episodes of respiratory failure. Fourth, although we excluded patients from the analysis who were enrolled in hospice prior to, or on the day of, the index admission, we did not exclude those who chose to withdraw support, transitioned to comfort measures only, or enrolled in hospice care during a hospitalization. We do not seek to penalize hospitals for being sensitive to the preferences of patients at the end of life. At the same time, it is equally important that the measure is capable of detecting the outcomes of suboptimal care that may in some instances lead a patient or their family to withdraw support or choose hospice. Finally, we did not have the opportunity to validate the model against a clinical registry of patients with COPD, because such data do not currently exist. Nevertheless, the use of claims as a surrogate for chart data for risk adjustment has been validated for several conditions, including acute myocardial infarction, heart failure, and pneumonia.[29, 30]

CONCLUSIONS

Risk‐standardized 30‐day mortality rates for Medicare beneficiaries with COPD vary across hospitals in the US. Calculating and reporting hospital outcomes using validated performance measures may catalyze quality improvement activities and lead to better outcomes. Additional research would be helpful to confirm that hospitals with lower mortality rates achieve care that meets the goals of patients and their families better than at hospitals with higher mortality rates.

Acknowledgment

The authors thank the following members of the technical expert panel: Darlene Bainbridge, RN, MS, NHA, CPHQ, CPHRM, President/CEO, Darlene D. Bainbridge & Associates, Inc.; Robert A. Balk, MD, Director of Pulmonary and Critical Care Medicine, Rush University Medical Center; Dale Bratzler, DO, MPH, President and CEO, Oklahoma Foundation for Medical Quality; Scott Cerreta, RRT, Director of Education, COPD Foundation; Gerard J. Criner, MD, Director of Temple Lung Center and Divisions of Pulmonary and Critical Care Medicine, Temple University; Guy D'Andrea, MBA, President, Discern Consulting; Jonathan Fine, MD, Director of Pulmonary Fellowship, Research and Medical Education, Norwalk Hospital; David Hopkins, MS, PhD, Senior Advisor, Pacific Business Group on Health; Fred Martin Jacobs, MD, JD, FACP, FCCP, FCLM, Executive Vice President and Director, Saint Barnabas Quality Institute; Natalie Napolitano, MPH, RRT‐NPS, Respiratory Therapist, Inova Fairfax Hospital; Russell Robbins, MD, MBA, Principal and Senior Clinical Consultant, Mercer. In addition, the authors acknowledge and thank Angela Merrill, Sandi Nelson, Marian Wrobel, and Eric Schone from Mathematica Policy Research, Inc., Sharon‐Lise T. Normand from Harvard Medical School, and Lein Han and Michael Rapp at The Centers for Medicare & Medicaid Services for their contributions to this work.

Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number HHSM-500-2008-0025I/HHSM-500-T0001, Modification No. 000007, Option Year 2, Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views expressed in this manuscript represent those of the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

References
  1. FASTSTATS—chronic lower respiratory disease. Available at: http://www.cdc.gov/nchs/fastats/copd.htm. Accessed September 18, 2010.
  2. National Heart, Lung and Blood Institute. Morbidity and mortality chartbook. Available at: http://www.nhlbi.nih.gov/resources/docs/cht‐book.htm. Accessed April 27, 2010.
  3. Patil SP, Krishnan JA, Lechtzin N, Diette GB. In-hospital mortality following acute exacerbations of chronic obstructive pulmonary disease. Arch Intern Med. 2003;163(10):1180-1186.
  4. Tabak YP, Sun X, Johannes RS, Gupta V, Shorr AF. Mortality and need for mechanical ventilation in acute exacerbations of chronic obstructive pulmonary disease: development and validation of a simple risk score. Arch Intern Med. 2009;169(17):1595-1602.
  5. Lindenauer PK, Pekow P, Gao S, Crawford AS, Gutierrez B, Benjamin EM. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144(12):894-903.
  6. Dransfield MT, Rowe SM, Johnson JE, Bailey WC, Gerald LB. Use of beta blockers and the risk of death in hospitalised patients with acute exacerbations of COPD. Thorax. 2008;63(4):301-305.
  7. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP facts and figures: statistics on hospital-based care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed August 6, 2012.
  8. Fruchter O, Yigla M. Predictors of long-term survival in elderly patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Respirology. 2008;13(6):851-855.
  9. Faustini A, Marino C, D'Ippoliti D, Forastiere F, Belleudi V, Perucci CA. The impact on risk-factor analysis of different mortality outcomes in COPD patients. Eur Respir J. 2008;32(3):629-636.
  10. Roberts CM, Lowe D, Bucknall CE, Ryland I, Kelly Y, Pearson MG. Clinical audit indicators of outcome following admission to hospital with acute exacerbation of chronic obstructive pulmonary disease. Thorax. 2002;57(2):137-141.
  11. Mularski RA, Asch SM, Shrank WH, et al. The quality of obstructive lung disease care for adults in the United States as measured by adherence to recommended processes. Chest. 2006;130(6):1844-1850.
  12. Bratzler DW, Oehlert WH, McAdams LM, Leon J, Jiang H, Piatt D. Management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: physician practices in the community hospital setting. J Okla State Med Assoc. 2004;97(6):227-232.
  13. Corrigan J, Eden J, Smith B. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Washington, DC: National Academies Press; 2002.
  14. Patient Protection and Affordable Care Act [H.R. 3590], Pub. L. No. 111–148, §2702, 124 Stat. 119, 318–319 (March 23, 2010). Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/html/PLAW‐111publ148.htm. Accessed July 15, 2012.
  15. National Quality Forum. NQF Endorses Additional Pulmonary Measure. 2013. Available at: http://www.qualityforum.org/News_And_Resources/Press_Releases/2013/NQF_Endorses_Additional_Pulmonary_Measure.aspx. Accessed January 11, 2013.
  16. National Quality Forum. National voluntary consensus standards for patient outcomes: a consensus report. Washington, DC: National Quality Forum; 2011.
  17. The Measures Management System. The Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/MMS/index.html?redirect=/MMS/. Accessed August 6, 2012.
  18. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456-462.
  19. Drye EE, Normand S-LT, Wang Y, et al. Comparison of hospital risk-standardized mortality rates calculated by using in-hospital and 30-day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012;156(1 pt 1):19-26.
  20. Pope G, Ellis R, Ash A, et al. Diagnostic cost group hierarchical condition category models for Medicare risk adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc.; 2000. Available at: http://www.cms.gov/Research‐Statistics‐Data‐and‐Systems/Statistics‐Trends‐and‐Reports/Reports/downloads/pope_2000_2.pdf. Accessed November 7, 2009.
  21. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  22. Harrell FE, Shih Y-CT. Using full probability models to compute probabilities of actual interest to decision makers. Int J Technol Assess Health Care. 2001;17(1):17-26.
  23. Heffner JE, Mularski RA, Calverley PMA. COPD performance measures: missing opportunities for improving care. Chest. 2010;137(5):1181-1189.
  24. Krumholz HM, Normand S-LT, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75-85.
  25. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short-term mortality. JAMA. 2006;296(1):72-78.
  26. Agabiti N, Belleudi V, Davoli M, et al. Profiling hospital performance to monitor the quality of care: the case of COPD. Eur Respir J. 2010;35(5):1031-1038.
  27. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30-day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  28. Welch HG, Sharp SM, Gottlieb DJ, Skinner JS, Wennberg JE. Geographic variation in diagnosis frequency and risk of death among Medicare beneficiaries. JAMA. 2011;305(11):1113-1118.
  29. Bratzler DW, Normand S-LT, Wang Y, et al. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
  30. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
References
  1. FASTSTATS—chronic lower respiratory disease. Available at: http://www.cdc.gov/nchs/fastats/copd.htm. Accessed September 18, 2010.
  2. National Heart, Lung and Blood Institute. Morbidity and mortality chartbook. Available at: http://www.nhlbi.nih.gov/resources/docs/cht‐book.htm. Accessed April 27, 2010.
  3. Patil SP, Krishnan JA, Lechtzin N, Diette GB. In‐hospital mortality following acute exacerbations of chronic obstructive pulmonary disease. Arch Intern Med. 2003;163(10):11801186.
  4. Tabak YP, Sun X, Johannes RS, Gupta V, Shorr AF. Mortality and need for mechanical ventilation in acute exacerbations of chronic obstructive pulmonary disease: development and validation of a simple risk score. Arch Intern Med. 2009;169(17):15951602.
  5. Lindenauer PK, Pekow P, Gao S, Crawford AS, Gutierrez B, Benjamin EM. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144(12):894903.
  6. Dransfield MT, Rowe SM, Johnson JE, Bailey WC, Gerald LB. Use of beta blockers and the risk of death in hospitalised patients with acute exacerbations of COPD. Thorax. 2008;63(4):301305.
  7. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP facts and figures: statistics on hospital‐based care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed August 6, 2012.
  8. Fruchter O, Yigla M. Predictors of long‐term survival in elderly patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Respirology. 2008;13(6):851855.
  9. Faustini A, Marino C, D'Ippoliti D, Forastiere F, Belleudi V, Perucci CA. The impact on risk‐factor analysis of different mortality outcomes in COPD patients. Eur Respir J 2008;32(3):629636.
  10. Roberts CM, Lowe D, Bucknall CE, Ryland I, Kelly Y, Pearson MG. Clinical audit indicators of outcome following admission to hospital with acute exacerbation of chronic obstructive pulmonary disease. Thorax. 2002;57(2):137141.
  11. Mularski RA, Asch SM, Shrank WH, et al. The quality of obstructive lung disease care for adults in the United States as measured by adherence to recommended processes. Chest. 2006;130(6):18441850.
  12. Bratzler DW, Oehlert WH, McAdams LM, Leon J, Jiang H, Piatt D. Management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: physician practices in the community hospital setting. J Okla State Med Assoc. 2004;97(6):227232.
  13. Corrigan J, Eden J, Smith B. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Washington, DC: National Academies Press; 2002.
  14. Patient Protection and Affordable Care Act [H.R. 3590], Pub. L. No. 111–148, §2702, 124 Stat. 119, 318–319 (March 23, 2010). Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/html/PLAW‐111publ148.htm. Accessed July 15, 2012.
  15. National Quality Forum. NQF Endorses Additional Pulmonary Measure. 2013. Available at: http://www.qualityforum.org/News_And_Resources/Press_Releases/2013/NQF_Endorses_Additional_Pulmonary_Measure.aspx. Accessed January 11, 2013.
  16. National Quality Forum. National voluntary consensus standards for patient outcomes: a consensus report. Washington, DC: National Quality Forum; 2011.
  17. The Measures Management System. The Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/MMS/index.html?redirect=/MMS/. Accessed August 6, 2012.
  18. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456-462.
  19. Drye EE, Normand S‐LT, Wang Y, et al. Comparison of hospital risk‐standardized mortality rates calculated by using in‐hospital and 30‐day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012;156(1 pt 1):19-26.
  20. Pope G, Ellis R, Ash A, et al. Diagnostic cost group hierarchical condition category models for Medicare risk adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc.; 2000. Available at: http://www.cms.gov/Research‐Statistics‐Data‐and‐Systems/Statistics‐Trends‐and‐Reports/Reports/downloads/pope_2000_2.pdf. Accessed November 7, 2009.
  21. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  22. Harrell FE, Shih Y‐CT. Using full probability models to compute probabilities of actual interest to decision makers. Int J Technol Assess Health Care. 2001;17(1):17-26.
  23. Heffner JE, Mularski RA, Calverley PMA. COPD performance measures: missing opportunities for improving care. Chest. 2010;137(5):1181-1189.
  24. Krumholz HM, Normand S‐LT, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75-85.
  25. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72-78.
  26. Agabiti N, Belleudi V, Davoli M, et al. Profiling hospital performance to monitor the quality of care: the case of COPD. Eur Respir J. 2010;35(5):1031-1038.
  27. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  28. Welch HG, Sharp SM, Gottlieb DJ, Skinner JS, Wennberg JE. Geographic variation in diagnosis frequency and risk of death among Medicare beneficiaries. JAMA. 2011;305(11):1113-1118.
  29. Bratzler DW, Normand S‐LT, Wang Y, et al. An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
  30. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
Issue
Journal of Hospital Medicine - 8(8)
Page Number
428-435
Display Headline
Development, validation, and results of a risk‐standardized measure of hospital 30‐day mortality for patients with exacerbation of chronic obstructive pulmonary disease
Copyright © 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Peter K. Lindenauer, MD, MSc, Baystate Medical Center, Center for Quality of Care Research, 759 Chestnut St., Springfield, MA 01199; Telephone: 413-794-5987; Fax: 413-794-8866; E-mail: peter.lindenauer@bhs.org