Correlations among risk‐standardized mortality rates and among risk‐standardized readmission rates within hospitals

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality‐of‐care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition‐specific process measures and either mortality or readmission rates for those conditions.4-6 Mortality and readmission rates may instead reflect broader hospital‐wide or specialty‐wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,12-14 academic status,12 and other institution‐wide factors.12 There is now a strong policy push toward developing hospital‐wide (all‐condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent across different conditions cared for in the same hospital. If mortality (or readmission) rates for different conditions are highly correlated, it would suggest that hospital‐wide factors play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, care within smaller or nonteaching hospitals might be more homogeneous than care within larger institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study, we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals and, separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk‐standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with one or more eligible cases of AMI, HF, and pneumonia annually based on fee‐for‐service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for the conditions over the 3‐year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not‐for‐profit, for‐profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core‐based statistical area (division [subarea of area with urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above their respective state's mean private hospital Medicaid caseload using the 2007 AHA Annual Survey data.

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are 65 years of age or older, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributable to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributable to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For mortality measures, only 1 hospitalization for a patient in a specific year is randomly selected if the patient has multiple hospitalizations in the year. For readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23

The derivation and validation of the risk‐standardized outcome measures have been previously reported.20, 21, 23-27 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital‐specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital‐specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30‐day rate.
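A minimal sketch of this ratio calculation, with hypothetical counts (the actual predicted and expected values come from the CMS hierarchical models, not from this function):

```python
def risk_standardized_rate(predicted, expected, overall_rate):
    """Ratio of predicted to expected outcomes for a hospital,
    scaled by the unadjusted national 30-day rate (in percent)."""
    return (predicted / expected) * overall_rate

# Hypothetical hospital: the model predicts 22 deaths using the
# hospital-specific effect and 20 deaths using the average hospital
# effect; the unadjusted national 30-day mortality rate is 11.5%.
rsmr = risk_standardized_rate(predicted=22, expected=20, overall_rate=11.5)
print(round(rsmr, 2))  # 12.65 (a ratio > 1 pushes the rate above the national average)
```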

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.
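The subgroup comparison of Pearson correlations can be sketched as follows. This is a generic illustration of Fisher's z-test for two independent correlations, not the study code (the published analyses were run in SAS); the example r and n values are taken from Table 3's readmission correlations for AMI and HF by bed size.

```python
import numpy as np
from scipy import stats

def fisher_r_diff(r1, n1, r2, n2):
    """Two-tailed Fisher z-test for the difference between two
    Pearson correlations from independent samples."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher z-transform
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))    # SE of the difference
    z = (z1 - z2) / se
    p = 2 * stats.norm.sf(abs(z))
    return z, p

# Hospitals with >600 beds (r = 0.67, n = 156) vs. <300 beds
# (r = 0.30, n = 3505), AMI & HF readmission correlations from Table 3.
z, p = fisher_r_diff(0.67, 156, 0.30, 3505)
print(f"z = {z:.2f}, P = {p:.1e}")  # P < 0.0001, consistent with Table 3
```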

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.
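The paper uses the analytic method of Raghunathan et al. for this comparison; the correlations being compared are dependent because they are computed on the same hospitals. A hospital-level bootstrap is a different, swapped-in way to sketch the same idea (not the published method), resampling hospitals so the dependence between the two correlations is preserved:

```python
import numpy as np

def bootstrap_corr_diff(a, b, c, d, n_boot=2000, seed=0):
    """95% bootstrap CI for corr(a, b) - corr(c, d), where all four
    vectors are observed on the same hospitals. Rows (hospitals) are
    resampled together, preserving the dependence between the two
    correlations."""
    rng = np.random.default_rng(seed)
    n = len(a)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)
        diffs[i] = (np.corrcoef(a[idx], b[idx])[0, 1]
                    - np.corrcoef(c[idx], d[idx])[0, 1])
    return np.percentile(diffs, [2.5, 97.5])

# Synthetic illustration: hypothetical readmission and mortality rates
# for 500 hospitals; if the CI excludes 0, the correlations differ.
rng = np.random.default_rng(42)
m_ami = rng.normal(size=500); m_hf = 0.3 * m_ami + rng.normal(size=500)
r_ami = rng.normal(size=500); r_hf = 0.5 * r_ami + rng.normal(size=500)
lo, hi = bootstrap_corr_diff(r_ami, r_hf, m_ami, m_hf)
print(lo, hi)
```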

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. The majority of hospitals were small, nonteaching, and without advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Hospital Characteristics for Each Cohort
| Description | Mortality Measures (Hospital N = 4559), N (%)* | Readmission Measures (Hospital N = 4468), N (%)* |
| --- | --- | --- |
| No. of beds | | |
| >600 | 157 (3.4) | 156 (3.5) |
| 300-600 | 628 (13.8) | 626 (14.0) |
| <300 | 3588 (78.7) | 3505 (78.5) |
| Unknown | 186 (4.08) | 181 (4.1) |
| Mean (SD) | 173.24 (189.52) | 175.23 (190.00) |
| Ownership | | |
| Not-for-profit | 2650 (58.1) | 2619 (58.6) |
| For-profit | 672 (14.7) | 663 (14.8) |
| Government | 1051 (23.1) | 1005 (22.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Teaching status | | |
| COTH | 277 (6.1) | 276 (6.2) |
| Teaching | 505 (11.1) | 503 (11.3) |
| Nonteaching | 3591 (78.8) | 3508 (78.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Cardiac facility type | | |
| CABG | 1471 (32.3) | 1467 (32.8) |
| Cath lab | 578 (12.7) | 578 (12.9) |
| Neither | 2324 (51.0) | 2242 (50.2) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Core-based statistical area | | |
| Division | 621 (13.6) | 618 (13.8) |
| Metro | 1850 (40.6) | 1835 (41.1) |
| Micro | 801 (17.6) | 788 (17.6) |
| Rural | 1101 (24.2) | 1046 (23.4) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Safety net status | | |
| No | 2995 (65.7) | 2967 (66.4) |
| Yes | 1377 (30.2) | 1319 (29.5) |
| Unknown | 187 (4.1) | 182 (4.1) |

Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

For mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13-171), and the greatest was for pneumonia (178; IQR, 87-336). The same pattern held for readmission measures (AMI median, 33; IQR, 9-150; pneumonia median, 191; IQR, 95-352.5). With respect to mortality measures, AMI had the highest rate and HF the lowest rate; however, for readmission measures, HF had the highest rate and pneumonia the lowest rate (Table 2).

Hospital Volume and Risk‐Standardized Rates for Each Condition in the Mortality and Readmission Cohorts
| Description | Mortality: AMI | Mortality: HF | Mortality: PN | Readmission: AMI | Readmission: HF | Readmission: PN |
| --- | --- | --- | --- | --- | --- | --- |
| Total discharges | 558,653 | 1,094,960 | 1,114,706 | 546,514 | 1,314,394 | 1,152,708 |
| Hospital volume | | | | | | |
| Mean (SD) | 122.54 (172.52) | 240.18 (271.35) | 244.51 (220.74) | 122.32 (201.78) | 294.18 (333.2) | 257.99 (228.5) |
| Median (IQR) | 48 (13, 171) | 142 (56, 337) | 178 (87, 336) | 33 (9, 150) | 172.5 (68, 407) | 191 (95, 352.5) |
| Range (min, max) | 1, 1379 | 1, 2814 | 1, 2241 | 1, 1611 | 1, 3410 | 2, 2359 |
| 30-Day risk-standardized rate* | | | | | | |
| Mean (SD) | 15.7 (1.8) | 10.9 (1.6) | 11.5 (1.9) | 19.9 (1.5) | 24.8 (2.1) | 18.5 (1.7) |
| Median (IQR) | 15.7 (14.5, 16.8) | 10.8 (9.9, 11.9) | 11.3 (10.2, 12.6) | 19.9 (18.9, 20.8) | 24.7 (23.4, 26.1) | 18.4 (17.3, 19.5) |
| Range (min, max) | 10.3, 24.6 | 6.6, 18.2 | 6.7, 20.9 | 15.2, 26.3 | 17.3, 32.4 | 13.6, 26.7 |

Mortality measures, N = 4559 hospitals; readmission measures, N = 4468 hospitals. Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27-0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32-0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).

Correlations Between Risk‐Standardized Mortality Rates and Between Risk‐Standardized Readmission Rates for Subgroups of Hospitals
| Description | N (Mortality) | AMI & HF | AMI & PN | HF & PN | N (Readmission) | AMI & HF | AMI & PN | HF & PN |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| All | 4559 | 0.30 | 0.27 | 0.41 | 4468 | 0.38 | 0.32 | 0.47 |
| Hospitals with ≥25 patients | 2872 | 0.33 | 0.30 | 0.44 | 2467 | 0.44 | 0.38 | 0.51 |
| No. of beds (P value) | | 0.15 | 0.005 | 0.0009 | | <0.0001 | <0.0001 | <0.0001 |
| >600 | 157 | 0.38 | 0.43 | 0.51 | 156 | 0.67 | 0.50 | 0.66 |
| 300-600 | 628 | 0.29 | 0.30 | 0.49 | 626 | 0.54 | 0.45 | 0.58 |
| <300 | 3588 | 0.27 | 0.23 | 0.37 | 3505 | 0.30 | 0.26 | 0.44 |
| Ownership (P value) | | 0.021 | 0.05 | 0.39 | | 0.0004 | 0.0004 | 0.003 |
| Not-for-profit | 2650 | 0.32 | 0.28 | 0.42 | 2619 | 0.43 | 0.36 | 0.50 |
| For-profit | 672 | 0.30 | 0.23 | 0.40 | 663 | 0.29 | 0.22 | 0.40 |
| Government | 1051 | 0.24 | 0.22 | 0.39 | 1005 | 0.32 | 0.29 | 0.45 |
| Teaching status (P value) | | 0.11 | 0.08 | 0.0012 | | <0.0001 | 0.0002 | 0.0003 |
| COTH | 277 | 0.31 | 0.34 | 0.54 | 276 | 0.54 | 0.47 | 0.59 |
| Teaching | 505 | 0.22 | 0.28 | 0.43 | 503 | 0.52 | 0.42 | 0.56 |
| Nonteaching | 3591 | 0.29 | 0.24 | 0.39 | 3508 | 0.32 | 0.26 | 0.44 |
| Cardiac facility type (P value) | | 0.022 | 0.006 | <0.0001 | | <0.0001 | 0.0006 | 0.004 |
| CABG | 1471 | 0.33 | 0.29 | 0.47 | 1467 | 0.48 | 0.37 | 0.52 |
| Cath lab | 578 | 0.25 | 0.26 | 0.36 | 578 | 0.32 | 0.37 | 0.47 |
| Neither | 2324 | 0.26 | 0.21 | 0.36 | 2242 | 0.28 | 0.27 | 0.44 |
| Core-based statistical area (P value) | | 0.0001 | <0.0001 | 0.002 | | <0.0001 | <0.0001 | <0.0001 |
| Division | 621 | 0.38 | 0.34 | 0.41 | 618 | 0.46 | 0.40 | 0.56 |
| Metro | 1850 | 0.26 | 0.26 | 0.42 | 1835 | 0.38 | 0.30 | 0.40 |
| Micro | 801 | 0.23 | 0.22 | 0.34 | 788 | 0.32 | 0.30 | 0.47 |
| Rural | 1101 | 0.21 | 0.13 | 0.32 | 1046 | 0.22 | 0.21 | 0.44 |
| Safety net status (P value) | | 0.001 | 0.027 | 0.68 | | 0.029 | 0.037 | 0.28 |
| No | 2995 | 0.33 | 0.28 | 0.41 | 2967 | 0.40 | 0.33 | 0.48 |
| Yes | 1377 | 0.23 | 0.21 | 0.40 | 1319 | 0.34 | 0.30 | 0.45 |

NOTE: Cell values for condition pairs are Pearson correlation coefficients (r); the P value shown on each subgroup category row is the minimum P value of pairwise comparisons within that subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia.

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each analysis, this single common factor accounted for more than half of the total variance (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the factor loadings for the RSMRs for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) were all high (0.68 for MI, 0.78 for HF, and 0.76 for PN); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
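The principal-component extraction can be illustrated directly on the published mortality correlations (AMI & HF, 0.30; AMI & PN, 0.27; HF & PN, 0.41, from Table 3); this is an illustrative reconstruction, not the study's own factor analysis:

```python
import numpy as np

# Correlation matrix of the three risk-standardized mortality rates,
# using the pairwise correlations reported in Table 3.
R = np.array([[1.00, 0.30, 0.27],
              [0.30, 1.00, 0.41],
              [0.27, 0.41, 1.00]])

# Principal-component method: eigen-decomposition of the correlation matrix.
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]          # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

n_factors = int(np.sum(eigvals > 1))       # minimum-eigenvalue-of-1 rule
frac = eigvals[0] / R.shape[0]             # variance explained by factor 1
loadings = np.abs(eigvecs[:, 0]) * np.sqrt(eigvals[0])
print(n_factors, round(frac, 2), np.round(loadings, 2))
```

With these inputs, a single factor is retained and explains roughly 55% of the variance, matching the figure reported above.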

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Less than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.
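The weighted kappa underlying Table 4 can be sketched as below. The paper does not state its weighting scheme, so linear weights are an assumption here, and the function is a generic illustration rather than the study code:

```python
import numpy as np

def weighted_kappa(q1, q2, n_cat=4):
    """Linear-weighted kappa for two quartile assignments coded 1..n_cat.
    Agreement weight w[i][j] = 1 - |i - j| / (n_cat - 1)."""
    q1, q2 = np.asarray(q1), np.asarray(q2)
    obs = np.zeros((n_cat, n_cat))
    for a, b in zip(q1, q2):               # observed joint distribution
        obs[a - 1, b - 1] += 1
    obs /= obs.sum()
    marg1, marg2 = obs.sum(axis=1), obs.sum(axis=0)
    exp = np.outer(marg1, marg2)           # expected under independence
    i, j = np.indices((n_cat, n_cat))
    w = 1 - np.abs(i - j) / (n_cat - 1)    # linear agreement weights
    return (np.sum(w * obs) - np.sum(w * exp)) / (1 - np.sum(w * exp))

# Hypothetical quartile assignments for two conditions at 8 hospitals.
print(weighted_kappa([1, 2, 3, 4, 1, 2, 3, 4],
                     [1, 2, 3, 4, 2, 1, 4, 3]))
```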

Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs
| Condition Pair | Same Quartile (Any) (%) | Same Quartile (Q1 or Q4) (%) | Q1 in One and Q4 in Another (%) | Weighted Kappa | Spearman Correlation |
| --- | --- | --- | --- | --- | --- |
| Mortality | | | | | |
| MI and HF | 34.8 | 20.2 | 7.9 | 0.19 | 0.25 |
| MI and PN | 32.7 | 18.8 | 8.2 | 0.16 | 0.22 |
| HF and PN | 35.9 | 21.8 | 5.0 | 0.26 | 0.36 |
| Readmission | | | | | |
| MI and HF | 36.6 | 21.0 | 7.5 | 0.22 | 0.28 |
| MI and PN | 34.0 | 19.6 | 8.1 | 0.19 | 0.24 |
| HF and PN | 37.1 | 22.6 | 5.4 | 0.27 | 0.37 |

Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).

Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs
| Description | AMI & HF: N | MC | RC | P | AMI & PN: N | MC | RC | P | HF & PN: N | MC | RC | P |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| All | 4457 | 0.31 | 0.38 | <0.0001 | 4459 | 0.27 | 0.32 | 0.007 | 4731 | 0.41 | 0.46 | 0.0004 |
| Hospitals with ≥25 patients | 2472 | 0.33 | 0.44 | <0.001 | 2463 | 0.31 | 0.38 | 0.01 | 4104 | 0.42 | 0.47 | 0.001 |
| No. of beds | | | | | | | | | | | | |
| >600 | 156 | 0.38 | 0.67 | 0.0002 | 156 | 0.43 | 0.50 | 0.48 | 160 | 0.51 | 0.66 | 0.042 |
| 300-600 | 626 | 0.29 | 0.54 | <0.0001 | 626 | 0.31 | 0.45 | 0.003 | 630 | 0.49 | 0.58 | 0.033 |
| <300 | 3494 | 0.28 | 0.30 | 0.21 | 3496 | 0.23 | 0.26 | 0.17 | 3733 | 0.37 | 0.43 | 0.003 |
| Ownership | | | | | | | | | | | | |
| Not-for-profit | 2614 | 0.32 | 0.43 | <0.0001 | 2617 | 0.28 | 0.36 | 0.003 | 2697 | 0.42 | 0.50 | 0.0003 |
| For-profit | 662 | 0.30 | 0.29 | 0.90 | 661 | 0.23 | 0.22 | 0.75 | 699 | 0.40 | 0.40 | 0.99 |
| Government | 1000 | 0.25 | 0.32 | 0.09 | 1000 | 0.22 | 0.29 | 0.09 | 1127 | 0.39 | 0.43 | 0.21 |
| Teaching status | | | | | | | | | | | | |
| COTH | 276 | 0.31 | 0.54 | 0.001 | 277 | 0.35 | 0.46 | 0.10 | 278 | 0.54 | 0.59 | 0.41 |
| Teaching | 504 | 0.22 | 0.52 | <0.0001 | 504 | 0.28 | 0.42 | 0.012 | 508 | 0.43 | 0.56 | 0.005 |
| Nonteaching | 3496 | 0.29 | 0.32 | 0.18 | 3497 | 0.24 | 0.26 | 0.46 | 3737 | 0.39 | 0.43 | 0.016 |
| Cardiac facility type | | | | | | | | | | | | |
| CABG | 1465 | 0.33 | 0.48 | <0.0001 | 1467 | 0.30 | 0.37 | 0.018 | 1483 | 0.47 | 0.51 | 0.103 |
| Cath lab | 577 | 0.25 | 0.32 | 0.18 | 577 | 0.26 | 0.37 | 0.046 | 579 | 0.36 | 0.47 | 0.022 |
| Neither | 2234 | 0.26 | 0.28 | 0.48 | 2234 | 0.21 | 0.27 | 0.037 | 2461 | 0.36 | 0.44 | 0.002 |
| Core-based statistical area | | | | | | | | | | | | |
| Division | 618 | 0.38 | 0.46 | 0.09 | 620 | 0.34 | 0.40 | 0.18 | 630 | 0.41 | 0.56 | 0.001 |
| Metro | 1833 | 0.26 | 0.38 | <0.0001 | 1832 | 0.26 | 0.30 | 0.21 | 1896 | 0.42 | 0.40 | 0.63 |
| Micro | 787 | 0.24 | 0.32 | 0.08 | 787 | 0.22 | 0.30 | 0.11 | 820 | 0.34 | 0.46 | 0.003 |
| Rural | 1038 | 0.21 | 0.22 | 0.83 | 1039 | 0.13 | 0.21 | 0.056 | 1177 | 0.32 | 0.43 | 0.002 |
| Safety net status | | | | | | | | | | | | |
| No | 2961 | 0.33 | 0.40 | 0.001 | 2963 | 0.28 | 0.33 | 0.036 | 3062 | 0.41 | 0.48 | 0.001 |
| Yes | 1314 | 0.23 | 0.34 | 0.003 | 1314 | 0.22 | 0.30 | 0.015 | 1460 | 0.40 | 0.45 | 0.14 |

Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation; N, number of hospitals; PN, pneumonia; RC, readmission correlation.

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked most closely together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates, in part, capture broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital‐wide environment, structure, and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that the correlations we found were higher.18, 19 There are several possible explanations for this difference. First, during the intervening 15-25 years since those studies were performed, care for these conditions has evolved substantially, such that there are now more standardized protocols available for all 3 of these diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing correlation of performance among conditions. In addition, there are now more technologies and systems available that span care for multiple conditions, such as electronic medical records and quality committees, than were available in previous generations. Second, one of these studies utilized less robust risk‐adjustment,18 and neither used the same methodology of risk standardization. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes than we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found in these medical conditions.16, 17

Accountable care organizations will be assessed using an all‐condition readmission measure,31 several states track all‐condition readmission rates,3234 and several countries measure all‐condition mortality.35 An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals might be more homogenous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger volumes, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, as is used for public reporting. It is also possible that the finding is real given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14, 36

The kappa scores comparing quartile of national performance for pairs of conditions were only in the fair range. There are several possible explanations for this fact: 1) outcomes for these 3 conditions are not measuring the same constructs; 2) they are all measuring the same construct, but they are unreliable in doing so; and/or 3) hospitals have similar latent quality for all 3 conditions, but the national quality of performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations may be true.31

Our study has several limitations. First, all 3 conditions currently publicly reported by CMS are medical diagnoses, although AMI patients may be cared for in distinct cardiology units and often undergo procedures; therefore, we cannot determine the degree to which correlations reflect hospital‐wide quality versus medicine‐wide quality. An institution may have a weak medicine department but a strong surgical department or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk‐adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter‐hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted disease‐specific focus on these conditions. We do not have data from non‐publicly reported conditions to test this hypothesis. Fourth, there are many small‐volume hospitals in this study; their estimates for readmission and mortality are less reliable than for large‐volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.

This study lends credence to the hypothesis that 30‐day risk‐standardized mortality and readmission rates for individual conditions may reflect aspects of hospital‐wide quality or at least medicine‐wide quality, although the correlations are not large enough to conclude that hospital‐wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.

Acknowledgements

Disclosures: Dr Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr Krumholz is supported by grant U01 HL105270‐01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz, and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. The analyses upon which this publication is based were performed under Contract Number HHSM‐500‐2008‐0025I Task Order T0001, entitled Measure & Instrument Development and Support (MIDS): Development and Re‐evaluation of the CMS Hospital Outcomes and Efficiency Measures, funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work, and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

References
  1. US Department of Health and Human Services. Hospital Compare. 2011. Available at: http://www.hospitalcompare.hhs.gov. Accessed March 5, 2011.
  2. Balla U, Malnick S, Schattner A. Early readmissions to the department of medicine as a screening tool for monitoring quality of care problems. Medicine (Baltimore). 2008;87(5):294-300.
  3. Dubois RW, Rogers WH, Moxley JH, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317(26):1674-1680.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  5. Jha AK, Orav EJ, Epstein AM. Public reporting of discharge planning and rates of readmissions. N Engl J Med. 2009;361(27):2637-2645.
  6. Patterson ME, Hernandez AF, Hammill BG, et al. Process of care performance measures and long-term outcomes in patients hospitalized with heart failure. Med Care. 2010;48(3):210-216.
  7. Chukmaitov AS, Bazzoli GJ, Harless DW, Hurley RE, Devers KJ, Zhao M. Variations in inpatient mortality among hospitals in different system types, 1995 to 2000. Med Care. 2009;47(4):466-473.
  8. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. Can Med Assoc J. 2002;166(11):1399-1406.
  9. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top-performing hospitals in acute myocardial infarction mortality rates? A qualitative study. Ann Intern Med. 2011;154(6):384-390.
  10. Hansen LO, Williams MV, Singer SJ. Perceptions of hospital safety climate and incidence of readmission. Health Serv Res. 2011;46(2):596-616.
  11. Longhurst CA, Parast L, Sandborg CI, et al. Decrease in hospital-wide mortality rate after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2010;126(1):14-21.
  12. Fink A, Yano EM, Brook RH. The condition of the literature on differences in hospital mortality. Med Care. 1989;27(4):315-336.
  13. Gandjour A, Bannenberg A, Lauterbach KW. Threshold volumes associated with higher survival in health care: a systematic review. Med Care. 2003;41(10):1129-1141.
  14. Ross JS, Normand SL, Wang Y, et al. Hospital volume and 30-day mortality for three common medical conditions. N Engl J Med. 2010;362(12):1110-1118.
  15. Patient Protection and Affordable Care Act. Pub. L. No. 111-148, 124 Stat, §3025. 2010. Available at: http://www.gpo.gov/fdsys/pkg/PLAW-111publ148/content-detail.html. Accessed July 26, 2012.
  16. Dimick JB, Staiger DO, Birkmeyer JD. Are mortality rates for different operations related? Implications for measuring the quality of noncardiac surgery. Med Care. 2006;44(8):774-778.
  17. Goodney PP, O'Connor GT, Wennberg DE, Birkmeyer JD. Do hospitals with low mortality rates in coronary artery bypass also perform well in valve replacement? Ann Thorac Surg. 2003;76(4):1131-1137.
  18. Chassin MR, Park RE, Lohr KN, Keesey J, Brook RH. Differences among hospitals in Medicare patient mortality. Health Serv Res. 1989;24(1):1-31.
  19. Rosenthal GE, Shah A, Way LE, Harper DL. Variations in standardized hospital mortality rates for six common medical diagnoses: implications for profiling hospital quality. Med Care. 1998;36(7):955-964.
  20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  21. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  22. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety-net hospitals. Health Aff (Millwood). 2007;26(1):238-248.
  23. National Quality Measures Clearinghouse. 2011. Available at: http://www.qualitymeasures.ahrq.gov/. Accessed February 21, 2011.
  24. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  25. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  26. Bratzler DW,Normand SL,Wang Y, et al.An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients.PLoS One.2011;6(4):e17401.
  27. Krumholz HM,Lin Z,Drye EE, et al.An administrative claims measure suitable for profiling hospital performance based on 30‐day all‐cause readmission rates among patients with acute myocardial infarction.Circ Cardiovasc Qual Outcomes.2011;4(2):243252.
  28. Kaiser HF.The application of electronic computers to factor analysis.Educ Psychol Meas.1960;20:141151.
  29. Fisher RA.On the ‘probable error’ of a coefficient of correlation deduced from a small sample.Metron.1921;1:332.
  30. Raghunathan TE,Rosenthal R,Rubin DB.Comparing correlated but nonoverlapping correlations.Psychol Methods.1996;1(2):178183.
  31. Centers for Medicare and Medicaid Services.Medicare Shared Savings Program: Accountable Care Organizations, Final Rule.Fed Reg.2011;76:6780267990.
  32. Massachusetts Healthcare Quality and Cost Council. Potentially Preventable Readmissions.2011. Available at: http://www.mass.gov/hqcc/the‐hcqcc‐council/data‐submission‐information/potentially‐preventable‐readmissions‐ppr.html. Accessed February 29, 2012.
  33. Texas Medicaid. Potentially Preventable Readmission (PPR).2012. Available at: http://www.tmhp.com/Pages/Medicaid/Hospital_PPR.aspx. Accessed February 29, 2012.
  34. New York State. Potentially Preventable Readmissions.2011. Available at: http://www.health.ny.gov/regulations/recently_adopted/docs/2011–02‐23_potentially_preventable_readmissions.pdf. Accessed February 29, 2012.
  35. Shahian DM,Wolf RE,Iezzoni LI,Kirle L,Normand SL.Variability in the measurement of hospital‐wide mortality rates.N Engl J Med.2010;363(26):25302539.
  36. Jha AK,DesRoches CM,Campbell EG, et al.Use of electronic health records in U.S. hospitals.N Engl J Med.2009;360(16):16281638.
Journal of Hospital Medicine - 7(9), 690-696

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality‐of‐care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition‐specific process measures and either mortality or readmission rates for those conditions.4-6 Mortality and readmission rates may instead reflect broader hospital‐wide or specialty‐wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,12-14 academic status,12 and other institution‐wide factors.12 There is now a strong policy push towards developing hospital‐wide (all‐condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent across different conditions cared for in the same hospital. If mortality (or readmission) rates for different conditions are highly correlated, it would suggest that hospital‐wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, care within smaller or nonteaching hospitals might be more homogeneous than care within larger, more complex institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study, we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals and, separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk‐standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with one or more eligible cases of AMI, HF, and pneumonia annually based on fee‐for‐service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for the conditions over the 3‐year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not‐for‐profit, for‐profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core‐based statistical area (division [subarea of area with urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above their respective state's mean private hospital Medicaid caseload using the 2007 AHA Annual Survey data.
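The safety net classification described above can be expressed as a small rule. The following is an illustrative sketch with hypothetical numbers, not the authors' code; the function name and inputs are assumptions:

```python
import statistics

def is_safety_net(ownership, medicaid_share, state_private_shares):
    """Safety net definition used in the text: public hospitals, or private
    hospitals whose Medicaid caseload exceeds the state's mean private-hospital
    Medicaid caseload by more than one standard deviation."""
    if ownership == "government":
        return True
    threshold = (statistics.mean(state_private_shares)
                 + statistics.stdev(state_private_shares))
    return medicaid_share > threshold

# Hypothetical state with five private hospitals' Medicaid shares:
shares = [0.10, 0.15, 0.20, 0.25, 0.30]
flag = is_safety_net("not-for-profit", 0.35, shares)  # -> True
```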

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are 65 years of age or older, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributable to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributable to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For mortality measures, only 1 hospitalization for a patient in a specific year is randomly selected if the patient has multiple hospitalizations in the year. For readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23

The derivation and validation of the risk‐standardized outcome measures have been previously reported.20, 21, 23-27 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital‐specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital‐specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30‐day rate.
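As a concrete sketch of this final calculation (the numbers are illustrative only; the actual predicted and expected counts come from the hierarchical models on Medicare claims):

```python
def risk_standardized_rate(predicted, expected, national_rate):
    """Risk-standardized rate as described in the text: the ratio of
    predicted outcomes (model with the hospital-specific effect) to
    expected outcomes (model with the average hospital effect),
    multiplied by the unadjusted overall 30-day rate."""
    return (predicted / expected) * national_rate

# Hypothetical hospital: 22 predicted vs 20 expected deaths, with a
# national unadjusted 30-day mortality rate of 15.7%.
rate = risk_standardized_rate(22, 20, 15.7)  # -> 17.27%
```

A hospital with more predicted than expected outcomes thus ends up with a rate above the national average, and vice versa.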

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.
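The subgroup comparison cited to Fisher29 can be sketched as follows. This is a minimal stand-alone version using the Fisher z transformation, not the authors' SAS code; the illustrative inputs are the AMI and HF readmission correlations by bed size from Table 3:

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Compare two Pearson correlations from independent subgroups using
    Fisher's z transformation; returns the z statistic and 2-tailed P."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    z = (z1 - z2) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # 2-tailed normal P value
    return z, p

# r = 0.67 in 156 hospitals with >600 beds vs r = 0.30 in 3505 hospitals
# with <300 beds (Table 3, AMI and HF readmission correlations).
z, p = fisher_z_test(0.67, 156, 0.30, 3505)  # z ≈ 6.1, P < 0.0001
```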

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. Most hospitals were small, nonteaching, and lacked advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Hospital Characteristics for Each Cohort
| Description | Mortality Cohort (N = 4559), N (%)* | Readmission Cohort (N = 4468), N (%)* |
|---|---|---|
| No. of beds | | |
| >600 | 157 (3.4) | 156 (3.5) |
| 300-600 | 628 (13.8) | 626 (14.0) |
| <300 | 3588 (78.7) | 3505 (78.5) |
| Unknown | 186 (4.08) | 181 (4.1) |
| Mean (SD) | 173.24 (189.52) | 175.23 (190.00) |
| Ownership | | |
| Not‐for‐profit | 2650 (58.1) | 2619 (58.6) |
| For‐profit | 672 (14.7) | 663 (14.8) |
| Government | 1051 (23.1) | 1005 (22.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Teaching status | | |
| COTH | 277 (6.1) | 276 (6.2) |
| Teaching | 505 (11.1) | 503 (11.3) |
| Nonteaching | 3591 (78.8) | 3508 (78.5) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Cardiac facility type | | |
| CABG | 1471 (32.3) | 1467 (32.8) |
| Cath lab | 578 (12.7) | 578 (12.9) |
| Neither | 2324 (51.0) | 2242 (50.2) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Core‐based statistical area | | |
| Division | 621 (13.6) | 618 (13.8) |
| Metro | 1850 (40.6) | 1835 (41.1) |
| Micro | 801 (17.6) | 788 (17.6) |
| Rural | 1101 (24.2) | 1046 (23.4) |
| Unknown | 186 (4.1) | 181 (4.1) |
| Safety net status | | |
| No | 2995 (65.7) | 2967 (66.4) |
| Yes | 1377 (30.2) | 1319 (29.5) |
| Unknown | 187 (4.1) | 182 (4.1) |

Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

For mortality measures, the smallest median number of cases per hospital was for AMI (median, 48; interquartile range [IQR], 13-171), and the greatest was for pneumonia (median, 178; IQR, 87-336). The same pattern held for readmission measures (AMI: median, 33; IQR, 9-150; pneumonia: median, 191; IQR, 95-352.5). With respect to mortality measures, AMI had the highest rate and HF the lowest rate; however, for readmission measures, HF had the highest rate and pneumonia the lowest rate (Table 2).

Hospital Volume and Risk‐Standardized Rates for Each Condition in the Mortality and Readmission Cohorts
| Description | AMI (Mortality) | HF (Mortality) | PN (Mortality) | AMI (Readmission) | HF (Readmission) | PN (Readmission) |
|---|---|---|---|---|---|---|
| Total discharges | 558,653 | 1,094,960 | 1,114,706 | 546,514 | 1,314,394 | 1,152,708 |
| Hospital volume: Mean (SD) | 122.54 (172.52) | 240.18 (271.35) | 244.51 (220.74) | 122.32 (201.78) | 294.18 (333.2) | 257.99 (228.5) |
| Hospital volume: Median (IQR) | 48 (13, 171) | 142 (56, 337) | 178 (87, 336) | 33 (9, 150) | 172.5 (68, 407) | 191 (95, 352.5) |
| Hospital volume: Range (min, max) | 1, 1379 | 1, 2814 | 1, 2241 | 1, 1611 | 1, 3410 | 2, 2359 |
| 30‐Day risk‐standardized rate*: Mean (SD) | 15.7 (1.8) | 10.9 (1.6) | 11.5 (1.9) | 19.9 (1.5) | 24.8 (2.1) | 18.5 (1.7) |
| 30‐Day risk‐standardized rate*: Median (IQR) | 15.7 (14.5, 16.8) | 10.8 (9.9, 11.9) | 11.3 (10.2, 12.6) | 19.9 (18.9, 20.8) | 24.7 (23.4, 26.1) | 18.4 (17.3, 19.5) |
| 30‐Day risk‐standardized rate*: Range (min, max) | 10.3, 24.6 | 6.6, 18.2 | 6.7, 20.9 | 15.2, 26.3 | 17.3, 32.4 | 13.6, 26.7 |

The mortality cohort includes 4559 hospitals; the readmission cohort includes 4468 hospitals. Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27-0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32-0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).

Correlations Between Risk‐Standardized Mortality Rates and Between Risk‐Standardized Readmission Rates for Subgroups of Hospitals
| Subgroup | Mortality N | AMI & HF (r) | AMI & PN (r) | HF & PN (r) | Readmission N | AMI & HF (r) | AMI & PN (r) | HF & PN (r) |
|---|---|---|---|---|---|---|---|---|
| All | 4559 | 0.30 | 0.27 | 0.41 | 4468 | 0.38 | 0.32 | 0.47 |
| Hospitals with ≥25 patients | 2872 | 0.33 | 0.30 | 0.44 | 2467 | 0.44 | 0.38 | 0.51 |
| No. of beds (P) | | 0.15 | 0.005 | 0.0009 | | <0.0001 | <0.0001 | <0.0001 |
| >600 | 157 | 0.38 | 0.43 | 0.51 | 156 | 0.67 | 0.50 | 0.66 |
| 300-600 | 628 | 0.29 | 0.30 | 0.49 | 626 | 0.54 | 0.45 | 0.58 |
| <300 | 3588 | 0.27 | 0.23 | 0.37 | 3505 | 0.30 | 0.26 | 0.44 |
| Ownership (P) | | 0.021 | 0.05 | 0.39 | | 0.0004 | 0.0004 | 0.003 |
| Not‐for‐profit | 2650 | 0.32 | 0.28 | 0.42 | 2619 | 0.43 | 0.36 | 0.50 |
| For‐profit | 672 | 0.30 | 0.23 | 0.40 | 663 | 0.29 | 0.22 | 0.40 |
| Government | 1051 | 0.24 | 0.22 | 0.39 | 1005 | 0.32 | 0.29 | 0.45 |
| Teaching status (P) | | 0.11 | 0.08 | 0.0012 | | <0.0001 | 0.0002 | 0.0003 |
| COTH | 277 | 0.31 | 0.34 | 0.54 | 276 | 0.54 | 0.47 | 0.59 |
| Teaching | 505 | 0.22 | 0.28 | 0.43 | 503 | 0.52 | 0.42 | 0.56 |
| Nonteaching | 3591 | 0.29 | 0.24 | 0.39 | 3508 | 0.32 | 0.26 | 0.44 |
| Cardiac facility type (P) | | 0.022 | 0.006 | <0.0001 | | <0.0001 | 0.0006 | 0.004 |
| CABG | 1471 | 0.33 | 0.29 | 0.47 | 1467 | 0.48 | 0.37 | 0.52 |
| Cath lab | 578 | 0.25 | 0.26 | 0.36 | 578 | 0.32 | 0.37 | 0.47 |
| Neither | 2324 | 0.26 | 0.21 | 0.36 | 2242 | 0.28 | 0.27 | 0.44 |
| Core‐based statistical area (P) | | 0.0001 | <0.0001 | 0.002 | | <0.0001 | <0.0001 | <0.0001 |
| Division | 621 | 0.38 | 0.34 | 0.41 | 618 | 0.46 | 0.40 | 0.56 |
| Metro | 1850 | 0.26 | 0.26 | 0.42 | 1835 | 0.38 | 0.30 | 0.40 |
| Micro | 801 | 0.23 | 0.22 | 0.34 | 788 | 0.32 | 0.30 | 0.47 |
| Rural | 1101 | 0.21 | 0.13 | 0.32 | 1046 | 0.22 | 0.21 | 0.44 |
| Safety net status (P) | | 0.001 | 0.027 | 0.68 | | 0.029 | 0.037 | 0.28 |
| No | 2995 | 0.33 | 0.28 | 0.41 | 2967 | 0.40 | 0.33 | 0.48 |
| Yes | 1377 | 0.23 | 0.21 | 0.40 | 1319 | 0.34 | 0.30 | 0.45 |

NOTE: The P value shown on each subgroup heading row is the minimum P value of pairwise comparisons of correlations within that subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia; r, Pearson correlation coefficient.

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each factor analysis, this single common factor accounted for more than half of the total variance based on the cumulative eigenvalue (55% for mortality measures and 60% for readmission measures). For the mortality measures, the factor loadings of the RSMRs for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) were all high (0.68 for MI, 0.78 for HF, and 0.76 for PN); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
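The mortality result can be reproduced approximately from the pairwise correlations alone. The sketch below applies the principal component method with the Kaiser criterion (eigenvalue >1) named in the Methods to a 3×3 correlation matrix assembled from the rounded Table 3 values, so the numbers are approximate rather than the authors' exact output:

```python
import numpy as np

# Correlation matrix for the three mortality measures (AMI, HF, PN),
# assembled from the pairwise correlations reported in Table 3.
R = np.array([[1.00, 0.30, 0.27],
              [0.30, 1.00, 0.41],
              [0.27, 0.41, 1.00]])

eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
n_factors = int((eigvals > 1).sum())       # Kaiser criterion: retain 1 factor
explained = eigvals[-1] / eigvals.sum()    # ~55% of total variance
loadings = np.sqrt(eigvals[-1]) * np.abs(eigvecs[:, -1])
# loadings land near the 0.68-0.78 range reported in the text
```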

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Less than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.

Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs
| Condition Pair | Same Quartile (Any) (%) | Same Quartile (Q1 or Q4) (%) | Q1 in One and Q4 in Another (%) | Weighted Kappa | Spearman Correlation |
|---|---|---|---|---|---|
| Mortality | | | | | |
| MI and HF | 34.8 | 20.2 | 7.9 | 0.19 | 0.25 |
| MI and PN | 32.7 | 18.8 | 8.2 | 0.16 | 0.22 |
| HF and PN | 35.9 | 21.8 | 5.0 | 0.26 | 0.36 |
| Readmission | | | | | |
| MI and HF | 36.6 | 21.0 | 7.5 | 0.22 | 0.28 |
| MI and PN | 34.0 | 19.6 | 8.1 | 0.19 | 0.24 |
| HF and PN | 37.1 | 22.6 | 5.4 | 0.27 | 0.37 |

Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.
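The weighted kappa underlying Table 4 can be sketched in a few lines. This is a generic linear-weighted kappa on quartile assignments (1-4) with hypothetical toy data, not the authors' SAS routine:

```python
import numpy as np

def weighted_kappa(a, b, k=4):
    """Linear weighted kappa for two sets of quartile assignments (1..k):
    agreement beyond chance, with partial credit for near-miss quartiles."""
    a = np.asarray(a) - 1
    b = np.asarray(b) - 1
    # observed and chance-expected contingency tables (as proportions)
    obs = np.zeros((k, k))
    for i, j in zip(a, b):
        obs[i, j] += 1
    obs /= len(a)
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # linear disagreement weights: 0 on the diagonal, 1 at maximum distance
    w = np.abs(np.subtract.outer(np.arange(k), np.arange(k))) / (k - 1)
    return 1 - (w * obs).sum() / (w * exp).sum()

q_ami = [1, 1, 2, 2, 3, 3, 4, 4]   # hypothetical quartiles for AMI
q_hf  = [1, 2, 2, 3, 3, 4, 4, 4]   # hypothetical quartiles for HF
kappa = weighted_kappa(q_ami, q_hf)  # -> 0.70 for these toy data
```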

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).

Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs
| Subgroup | AMI & HF: N | MC | RC | P | AMI & PN: N | MC | RC | P | HF & PN: N | MC | RC | P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| All | 4457 | 0.31 | 0.38 | <0.0001 | 4459 | 0.27 | 0.32 | 0.007 | 4731 | 0.41 | 0.46 | 0.0004 |
| Hospitals with ≥25 patients | 2472 | 0.33 | 0.44 | <0.001 | 2463 | 0.31 | 0.38 | 0.01 | 4104 | 0.42 | 0.47 | 0.001 |
| No. of beds: >600 | 156 | 0.38 | 0.67 | 0.0002 | 156 | 0.43 | 0.50 | 0.48 | 160 | 0.51 | 0.66 | 0.042 |
| No. of beds: 300-600 | 626 | 0.29 | 0.54 | <0.0001 | 626 | 0.31 | 0.45 | 0.003 | 630 | 0.49 | 0.58 | 0.033 |
| No. of beds: <300 | 3494 | 0.28 | 0.30 | 0.21 | 3496 | 0.23 | 0.26 | 0.17 | 3733 | 0.37 | 0.43 | 0.003 |
| Ownership: Not‐for‐profit | 2614 | 0.32 | 0.43 | <0.0001 | 2617 | 0.28 | 0.36 | 0.003 | 2697 | 0.42 | 0.50 | 0.0003 |
| Ownership: For‐profit | 662 | 0.30 | 0.29 | 0.90 | 661 | 0.23 | 0.22 | 0.75 | 699 | 0.40 | 0.40 | 0.99 |
| Ownership: Government | 1000 | 0.25 | 0.32 | 0.09 | 1000 | 0.22 | 0.29 | 0.09 | 1127 | 0.39 | 0.43 | 0.21 |
| Teaching status: COTH | 276 | 0.31 | 0.54 | 0.001 | 277 | 0.35 | 0.46 | 0.10 | 278 | 0.54 | 0.59 | 0.41 |
| Teaching status: Teaching | 504 | 0.22 | 0.52 | <0.0001 | 504 | 0.28 | 0.42 | 0.012 | 508 | 0.43 | 0.56 | 0.005 |
| Teaching status: Nonteaching | 3496 | 0.29 | 0.32 | 0.18 | 3497 | 0.24 | 0.26 | 0.46 | 3737 | 0.39 | 0.43 | 0.016 |
| Cardiac facility type: CABG | 1465 | 0.33 | 0.48 | <0.0001 | 1467 | 0.30 | 0.37 | 0.018 | 1483 | 0.47 | 0.51 | 0.103 |
| Cardiac facility type: Cath lab | 577 | 0.25 | 0.32 | 0.18 | 577 | 0.26 | 0.37 | 0.046 | 579 | 0.36 | 0.47 | 0.022 |
| Cardiac facility type: Neither | 2234 | 0.26 | 0.28 | 0.48 | 2234 | 0.21 | 0.27 | 0.037 | 2461 | 0.36 | 0.44 | 0.002 |
| Core‐based statistical area: Division | 618 | 0.38 | 0.46 | 0.09 | 620 | 0.34 | 0.40 | 0.18 | 630 | 0.41 | 0.56 | 0.001 |
| Core‐based statistical area: Metro | 1833 | 0.26 | 0.38 | <0.0001 | 1832 | 0.26 | 0.30 | 0.21 | 1896 | 0.42 | 0.40 | 0.63 |
| Core‐based statistical area: Micro | 787 | 0.24 | 0.32 | 0.08 | 787 | 0.22 | 0.30 | 0.11 | 820 | 0.34 | 0.46 | 0.003 |
| Core‐based statistical area: Rural | 1038 | 0.21 | 0.22 | 0.83 | 1039 | 0.13 | 0.21 | 0.056 | 1177 | 0.32 | 0.43 | 0.002 |
| Safety net status: No | 2961 | 0.33 | 0.40 | 0.001 | 2963 | 0.28 | 0.33 | 0.036 | 3062 | 0.41 | 0.48 | 0.001 |
| Safety net status: Yes | 1314 | 0.23 | 0.34 | 0.003 | 1314 | 0.22 | 0.30 | 0.015 | 1460 | 0.40 | 0.45 | 0.14 |

Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation; N, number of hospitals; PN, pneumonia; RC, readmission correlation.

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked closest together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates, in part, capture broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital‐wide environment, structure, and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that the correlations we found were higher.18, 19 There are several possible explanations for this difference. First, during the intervening 15-25 years since those studies were performed, care for these conditions has evolved substantially, such that there are now more standardized protocols available for all 3 of these diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing correlation of performance among conditions. In addition, there are now more technologies and systems available that span care for multiple conditions, such as electronic medical records and quality committees, than were available in previous generations. Second, one of these studies utilized less robust risk‐adjustment,18 and neither used the same methodology of risk standardization. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes than we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found in these medical conditions.16, 17

Accountable care organizations will be assessed using an all‐condition readmission measure,31 several states track all‐condition readmission rates,3234 and several countries measure all‐condition mortality.35 An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals might be more homogenous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger volumes, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, as is used for public reporting. It is also possible that the finding is real given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14, 36

The Centers for Medicare & Medicaid Services (CMS) publicly reports hospital‐specific, 30‐day risk‐standardized mortality and readmission rates for Medicare fee‐for‐service patients admitted with acute myocardial infarction (AMI), heart failure (HF), and pneumonia.1 These measures are intended to reflect hospital performance on quality of care provided to patients during and after hospitalization.2, 3

Quality‐of‐care measures for a given disease are often assumed to reflect the quality of care for that particular condition. However, studies have found limited association between condition‐specific process measures and either mortality or readmission rates for those conditions.4–6 Mortality and readmission rates may instead reflect broader hospital‐wide or specialty‐wide structure, culture, and practice. For example, studies have previously found that hospitals differ in mortality or readmission rates according to organizational structure,7 financial structure,8 culture,9, 10 information technology,11 patient volume,12–14 academic status,12 and other institution‐wide factors.12 There is now a strong policy push towards developing hospital‐wide (all‐condition) measures, beginning with readmission.15

It is not clear how much of the quality of care for a given condition is attributable to hospital‐wide influences that affect all conditions rather than disease‐specific factors. If readmission or mortality performance for a particular condition reflects, in large part, broader institutional characteristics, then improvement efforts might better be focused on hospital‐wide activities, such as team training or implementing electronic medical records. On the other hand, if the disease‐specific measures reflect quality strictly for those conditions, then improvement efforts would be better focused on disease‐specific care, such as early identification of the relevant patient population or standardizing disease‐specific care. As hospitals work to improve performance across an increasingly wide variety of conditions, it is becoming more important for hospitals to prioritize and focus their activities effectively and efficiently.

One means of determining the relative contribution of hospital versus disease factors is to explore whether outcome rates are consistent among different conditions cared for in the same hospital. If mortality (or readmission) rates across different conditions are highly correlated, it would suggest that hospital‐wide factors may play a substantive role in outcomes. Some studies have found that mortality for a particular surgical condition is a useful proxy for mortality for other surgical conditions,16, 17 while other studies have found little correlation among mortality rates for various medical conditions.18, 19 It is also possible that correlation varies according to hospital characteristics; for example, smaller or nonteaching hospitals might provide more homogeneous care than larger, more complex institutions. No studies have been performed using publicly reported estimates of risk‐standardized mortality or readmission rates. In this study we use the publicly reported measures of 30‐day mortality and 30‐day readmission for AMI, HF, and pneumonia to examine whether, and to what degree, mortality rates track together within US hospitals, and separately, to what degree readmission rates track together within US hospitals.

METHODS

Data Sources

CMS calculates risk‐standardized mortality and readmission rates, and patient volume, for all acute care nonfederal hospitals with one or more eligible cases of AMI, HF, and pneumonia annually based on fee‐for‐service (FFS) Medicare claims. CMS publicly releases the rates for the large subset of hospitals that participate in public reporting and have 25 or more cases for the conditions over the 3‐year period between July 2006 and June 2009. We estimated the rates for all hospitals included in the measure calculations, including those with fewer than 25 cases, using the CMS methodology and data obtained from CMS. The distribution of these rates has been previously reported.20, 21 In addition, we used the 2008 American Hospital Association (AHA) Survey to obtain data about hospital characteristics, including number of beds, hospital ownership (government, not‐for‐profit, for‐profit), teaching status (member of Council of Teaching Hospitals, other teaching hospital, nonteaching), presence of specialized cardiac capabilities (coronary artery bypass graft surgery, cardiac catheterization lab without cardiac surgery, neither), US Census Bureau core‐based statistical area (division [subarea of area with urban center >2.5 million people], metropolitan [urban center of at least 50,000 people], micropolitan [urban center of between 10,000 and 50,000 people], and rural [<10,000 people]), and safety net status22 (yes/no). Safety net status was defined as either public hospitals or private hospitals with a Medicaid caseload greater than one standard deviation above their respective state's mean private hospital Medicaid caseload using the 2007 AHA Annual Survey data.
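The safety net classification described above is a simple threshold rule. The following sketch illustrates it under assumed inputs: hypothetical per‐hospital records with `state`, `public`, and `medicaid_share` fields (the actual determination used 2007 AHA Annual Survey data):

```python
from statistics import mean, stdev

def flag_safety_net(hospitals):
    """Mark a hospital as safety net if it is public, or if it is private with a
    Medicaid caseload more than 1 SD above its state's mean private-hospital caseload.

    hospitals -- list of dicts with hypothetical keys:
                 'state', 'public' (bool), 'medicaid_share' (fraction 0..1)."""
    # State-level mean and SD of Medicaid caseload among private hospitals only.
    by_state = {}
    for h in hospitals:
        if not h['public']:
            by_state.setdefault(h['state'], []).append(h['medicaid_share'])
    cutoffs = {s: mean(v) + stdev(v) for s, v in by_state.items() if len(v) > 1}
    for h in hospitals:
        h['safety_net'] = h['public'] or (
            h['state'] in cutoffs and h['medicaid_share'] > cutoffs[h['state']]
        )
    return hospitals
```

A private hospital is compared only against other private hospitals in its own state, so the same Medicaid share can be flagged in one state and not in another.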

Study Sample

This study includes 2 hospital cohorts, 1 for mortality and 1 for readmission. Hospitals were eligible for the mortality cohort if the dataset included risk‐standardized mortality rates for all 3 conditions (AMI, HF, and pneumonia). Hospitals were eligible for the readmission cohort if the dataset included risk‐standardized readmission rates for all 3 of these conditions.

Risk‐Standardized Measures

The measures include all FFS Medicare patients who are 65 years of age or older, have been enrolled in FFS Medicare for the 12 months before the index hospitalization, are admitted with 1 of the 3 qualifying diagnoses, and do not leave the hospital against medical advice. The mortality measures include all deaths within 30 days of admission, and all deaths are attributable to the initial admitting hospital, even if the patient is then transferred to another acute care facility. Therefore, for a given hospital, transfers into the hospital are excluded from its rate, but transfers out are included. The readmission measures include all readmissions within 30 days of discharge, and all readmissions are attributable to the final discharging hospital, even if the patient was originally admitted to a different acute care facility. Therefore, for a given hospital, transfers in are included in its rate, but transfers out are excluded. For mortality measures, only 1 hospitalization for a patient in a specific year is randomly selected if the patient has multiple hospitalizations in the year. For readmission measures, admissions in which the patient died prior to discharge, and admissions within 30 days of an index admission, are not counted as index admissions.
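The index‐admission selection rules for the readmission measure can be sketched as follows. This is a simplified illustration for a single patient's admission history; the field names are hypothetical and the full CMS specification contains additional inclusion and exclusion criteria:

```python
from datetime import date

def select_index_admissions(admissions):
    """Pick index admissions per the rules described in the text: in-hospital
    deaths are not index admissions, and an admission within 30 days of a prior
    index admission is a potential readmission rather than a new index.

    admissions -- one patient's stays, each a dict with hypothetical keys
                  'admit' (date), 'discharge' (date), and 'died' (bool)."""
    index, last_discharge = [], None
    for a in sorted(admissions, key=lambda a: a['admit']):
        within_30 = last_discharge is not None and (a['admit'] - last_discharge).days <= 30
        if a['died'] or within_30:
            continue  # not an index admission
        index.append(a)
        last_discharge = a['discharge']
    return index
```

For example, an admission 14 days after discharge from an index stay would be skipped as a potential readmission, while an admission months later would start a new index episode.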

Outcomes for all measures are all‐cause; however, for the AMI readmission measure, planned admissions for cardiac procedures are not counted as readmissions. Patients in observation status or in non‐acute care facilities are not counted as readmissions. Detailed specifications for the outcomes measures are available at the National Quality Measures Clearinghouse.23

The derivation and validation of the risk‐standardized outcome measures have been previously reported.20, 21, 23–27 The measures are derived from hierarchical logistic regression models that include age, sex, clinical covariates, and a hospital‐specific random effect. The rates are calculated as the ratio of the number of predicted outcomes (obtained from a model applying the hospital‐specific effect) to the number of expected outcomes (obtained from a model applying the average effect among hospitals), multiplied by the unadjusted overall 30‐day rate.
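Once the predicted and expected counts are in hand, the standardization step reduces to a single expression. The sketch below uses made‐up counts for a hypothetical hospital; the real predicted and expected values come from the hierarchical models described above:

```python
def risk_standardized_rate(predicted, expected, national_rate):
    """Risk-standardized rate = (predicted / expected) * unadjusted national 30-day rate.

    predicted     -- model-based outcome count using the hospital-specific effect
    expected      -- model-based outcome count using the average effect among hospitals
    national_rate -- unadjusted overall 30-day rate (illustrative value below)
    """
    return (predicted / expected) * national_rate

# Hypothetical hospital: 20 predicted vs 25 expected deaths for its case mix.
# A predicted/expected ratio below 1 yields a rate below the national average.
rate = risk_standardized_rate(20, 25, 0.157)
print(round(rate * 100, 1))  # → 12.6 (percentage points)
```

The ratio form means a hospital is compared with what an average hospital would be expected to achieve with the same patient mix, rather than with a raw national rate.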

Statistical Analysis

We examined patterns and distributions of hospital volume, risk‐standardized mortality rates, and risk‐standardized readmission rates among included hospitals. To measure the degree of association among hospitals' risk‐standardized mortality rates for AMI, HF, and pneumonia, we calculated Pearson correlation coefficients, resulting in 3 correlations for the 3 pairs of conditions (AMI and HF, AMI and pneumonia, HF and pneumonia), and tested whether they were significantly different from 0. We also conducted a factor analysis using the principal component method with a minimum eigenvalue of 1 to retain factors to determine whether there was a single common factor underlying mortality performance for the 3 conditions.28 Finally, we divided hospitals into quartiles of performance for each outcome based on the point estimate of risk‐standardized rate, and compared quartile of performance between condition pairs for each outcome. For each condition pair, we assessed the percent of hospitals in the same quartile of performance in both conditions, the percent of hospitals in either the top quartile of performance or the bottom quartile of performance for both, and the percent of hospitals in the top quartile for one and the bottom quartile for the other. We calculated the weighted kappa for agreement on quartile of performance between condition pairs for each outcome and the Spearman correlation for quartiles of performance. Then, we examined Pearson correlation coefficients in different subgroups of hospitals, including by size, ownership, teaching status, cardiac procedure capability, statistical area, and safety net status. In order to determine whether these correlations differed by hospital characteristics, we tested if the Pearson correlation coefficients were different between any 2 subgroups using the method proposed by Fisher.29 We repeated all of these analyses separately for the risk‐standardized readmission rates.
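The correlation and subgroup‐comparison steps above can be illustrated with a small sketch. The helper names below are hypothetical, and the example comparison reuses summary numbers from Table 3 rather than recomputing them from hospital‐level data; the actual analysis was performed in SAS:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def fisher_z_test(r1, n1, r2, n2):
    """Fisher's test for the difference of two independent correlations.

    Each r is z-transformed (arctanh); the difference is divided by its
    standard error sqrt(1/(n1-3) + 1/(n2-3)). Returns the z statistic."""
    z1, z2 = math.atanh(r1), math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

# Illustrative subgroup comparison (values from Table 3): the HF-pneumonia
# readmission correlation in hospitals with >600 beds (r = 0.66, n = 156)
# versus hospitals with <300 beds (r = 0.44, n = 3505).
z = fisher_z_test(0.66, 156, 0.44, 3505)  # z is large and positive here
```

Because the z transformation makes the sampling distribution of r approximately normal, differences between subgroups can be tested with an ordinary z statistic.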

To determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair, we used the method recommended by Raghunathan et al.30 For these analyses, we included only hospitals reporting both mortality and readmission rates for the condition pairs. We used the same methods to determine whether correlations between mortality rates were significantly different than correlations between readmission rates for any given condition pair among subgroups of hospital characteristics.

All analyses and graphing were performed using the SAS statistical package version 9.2 (SAS Institute, Cary, NC). We considered a P‐value < 0.05 to be statistically significant, and all statistical tests were 2‐tailed.

RESULTS

The mortality cohort included 4559 hospitals, and the readmission cohort included 4468 hospitals. Most hospitals were small, were nonteaching, and did not have advanced cardiac capabilities such as cardiac surgery or cardiac catheterization (Table 1).

Hospital Characteristics for Each Cohort

Description                    Mortality Measures     Readmission Measures
                               (N = 4559), N (%)*     (N = 4468), N (%)*
No. of beds
  >600                         157 (3.4)              156 (3.5)
  300–600                      628 (13.8)             626 (14.0)
  <300                         3588 (78.7)            3505 (78.5)
  Unknown                      186 (4.08)             181 (4.1)
  Mean (SD)                    173.24 (189.52)        175.23 (190.00)
Ownership
  Not‐for‐profit               2650 (58.1)            2619 (58.6)
  For‐profit                   672 (14.7)             663 (14.8)
  Government                   1051 (23.1)            1005 (22.5)
  Unknown                      186 (4.1)              181 (4.1)
Teaching status
  COTH                         277 (6.1)              276 (6.2)
  Teaching                     505 (11.1)             503 (11.3)
  Nonteaching                  3591 (78.8)            3508 (78.5)
  Unknown                      186 (4.1)              181 (4.1)
Cardiac facility type
  CABG                         1471 (32.3)            1467 (32.8)
  Cath lab                     578 (12.7)             578 (12.9)
  Neither                      2324 (51.0)            2242 (50.2)
  Unknown                      186 (4.1)              181 (4.1)
Core‐based statistical area
  Division                     621 (13.6)             618 (13.8)
  Metro                        1850 (40.6)            1835 (41.1)
  Micro                        801 (17.6)             788 (17.6)
  Rural                        1101 (24.2)            1046 (23.4)
  Unknown                      186 (4.1)              181 (4.1)
Safety net status
  No                           2995 (65.7)            2967 (66.4)
  Yes                          1377 (30.2)            1319 (29.5)
  Unknown                      187 (4.1)              182 (4.1)

Abbreviations: CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; SD, standard deviation. *Unless otherwise specified.

For the mortality measures, the smallest median number of cases per hospital was for AMI (48; interquartile range [IQR], 13–171), and the greatest was for pneumonia (178; IQR, 87–336). The same pattern held for the readmission measures (AMI median, 33; IQR, 9–150; pneumonia median, 191; IQR, 95–352.5). With respect to mortality, AMI had the highest rate and HF the lowest; for readmission, HF had the highest rate and pneumonia the lowest (Table 2).

Hospital Volume and Risk‐Standardized Rates for Each Condition in the Mortality and Readmission Cohorts

                                Mortality Measures (N = 4559)                        Readmission Measures (N = 4468)
Description                     AMI               HF                PN               AMI               HF                PN
Total discharges                558,653           1,094,960         1,114,706        546,514           1,314,394         1,152,708
Hospital volume
  Mean (SD)                     122.54 (172.52)   240.18 (271.35)   244.51 (220.74)  122.32 (201.78)   294.18 (333.2)    257.99 (228.5)
  Median (IQR)                  48 (13, 171)      142 (56, 337)     178 (87, 336)    33 (9, 150)       172.5 (68, 407)   191 (95, 352.5)
  Range (min, max)              1, 1379           1, 2814           1, 2241          1, 1611           1, 3410           2, 2359
30‐Day risk‐standardized rate*
  Mean (SD)                     15.7 (1.8)        10.9 (1.6)        11.5 (1.9)       19.9 (1.5)        24.8 (2.1)        18.5 (1.7)
  Median (IQR)                  15.7 (14.5, 16.8) 10.8 (9.9, 11.9)  11.3 (10.2, 12.6) 19.9 (18.9, 20.8) 24.7 (23.4, 26.1) 18.4 (17.3, 19.5)
  Range (min, max)              10.3, 24.6        6.6, 18.2         6.7, 20.9        15.2, 26.3        17.3, 32.4        13.6, 26.7

Abbreviations: AMI, acute myocardial infarction; HF, heart failure; IQR, interquartile range; PN, pneumonia; SD, standard deviation. *Weighted by hospital volume.

Every mortality measure was significantly correlated with every other mortality measure (range of correlation coefficients, 0.27–0.41; P < 0.0001 for all 3 correlations). For example, the correlation between risk‐standardized mortality rates (RSMR) for HF and pneumonia was 0.41. Similarly, every readmission measure was significantly correlated with every other readmission measure (range of correlation coefficients, 0.32–0.47; P < 0.0001 for all 3 correlations). Overall, the lowest correlation was between risk‐standardized mortality rates for AMI and pneumonia (r = 0.27), and the highest correlation was between risk‐standardized readmission rates (RSRR) for HF and pneumonia (r = 0.47) (Table 3).

Correlations Between Risk‐Standardized Mortality Rates and Between Risk‐Standardized Readmission Rates for Subgroups of Hospitals

                                  Mortality Measures                       Readmission Measures
Description                       N      AMI–HF   AMI–PN   HF–PN           N      AMI–HF   AMI–PN   HF–PN
All                               4559   0.30     0.27     0.41            4468   0.38     0.32     0.47
Hospitals with ≥25 patients       2872   0.33     0.30     0.44            2467   0.44     0.38     0.51
No. of beds, P                           0.15     0.005    0.0009                 <0.0001  <0.0001  <0.0001
  >600                            157    0.38     0.43     0.51            156    0.67     0.50     0.66
  300–600                         628    0.29     0.30     0.49            626    0.54     0.45     0.58
  <300                            3588   0.27     0.23     0.37            3505   0.30     0.26     0.44
Ownership, P                             0.021    0.05     0.39                   0.0004   0.0004   0.003
  Not‐for‐profit                  2650   0.32     0.28     0.42            2619   0.43     0.36     0.50
  For‐profit                      672    0.30     0.23     0.40            663    0.29     0.22     0.40
  Government                      1051   0.24     0.22     0.39            1005   0.32     0.29     0.45
Teaching status, P                       0.11     0.08     0.0012                 <0.0001  0.0002   0.0003
  COTH                            277    0.31     0.34     0.54            276    0.54     0.47     0.59
  Teaching                        505    0.22     0.28     0.43            503    0.52     0.42     0.56
  Nonteaching                     3591   0.29     0.24     0.39            3508   0.32     0.26     0.44
Cardiac facility type, P                 0.022    0.006    <0.0001                <0.0001  0.0006   0.004
  CABG                            1471   0.33     0.29     0.47            1467   0.48     0.37     0.52
  Cath lab                        578    0.25     0.26     0.36            578    0.32     0.37     0.47
  Neither                         2324   0.26     0.21     0.36            2242   0.28     0.27     0.44
Core‐based statistical area, P           0.0001   <0.0001  0.002                  <0.0001  <0.0001  <0.0001
  Division                        621    0.38     0.34     0.41            618    0.46     0.40     0.56
  Metro                           1850   0.26     0.26     0.42            1835   0.38     0.30     0.40
  Micro                           801    0.23     0.22     0.34            788    0.32     0.30     0.47
  Rural                           1101   0.21     0.13     0.32            1046   0.22     0.21     0.44
Safety net status, P                     0.001    0.027    0.68                   0.029    0.037    0.28
  No                              2995   0.33     0.28     0.41            2967   0.40     0.33     0.48
  Yes                             1377   0.23     0.21     0.40            1319   0.34     0.30     0.45

NOTE: P value is the minimum P value of pairwise comparisons within each subgroup. Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; N, number of hospitals; PN, pneumonia; table entries are Pearson correlation coefficients (r).

Both the factor analysis for the mortality measures and the factor analysis for the readmission measures yielded only one factor with an eigenvalue >1. In each factor analysis, this single common factor explained more than half of the total variance (55% for the mortality measures and 60% for the readmission measures). For the mortality measures, the factor loadings of the RSMRs for myocardial infarction (MI), heart failure (HF), and pneumonia (PN) were high (0.68 for MI, 0.78 for HF, and 0.76 for PN); the same was true of the RSRR loadings for the readmission measures (0.72 for MI, 0.81 for HF, and 0.78 for PN).
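The variance share for the mortality measures can be approximately reproduced from the pairwise correlations alone. The sketch below runs power iteration on a correlation matrix assembled from the "All" row of Table 3 (this is an illustration from summary statistics, not the hospital‐level factor analysis itself):

```python
def largest_eigenpair(m, iters=500):
    """Power iteration: dominant eigenvalue and eigenvector of a small
    symmetric matrix (here, a 3x3 correlation matrix)."""
    n = len(m)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(n)) for i in range(n)]
        s = max(abs(x) for x in w)  # normalize to avoid overflow
        v = [x / s for x in w]
    # Estimate the eigenvalue from the first component of A*v
    lam = sum(m[0][j] * v[j] for j in range(n)) / v[0]
    return lam, v

# Off-diagonal entries from Table 3: AMI-HF 0.30, AMI-PN 0.27, HF-PN 0.41.
corr = [[1.00, 0.30, 0.27],
        [0.30, 1.00, 0.41],
        [0.27, 0.41, 1.00]]
lam, _ = largest_eigenpair(corr)
print(round(lam / 3, 2))  # → 0.55, close to the 55% reported for the mortality measures
```

The dominant eigenvalue divided by the number of measures is the proportion of total variance a single common factor can capture.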

For all condition pairs and both outcomes, a third or more of hospitals were in the same quartile of performance for both conditions of the pair (Table 4). Hospitals were more likely to be in the same quartile of performance if they were in the top or bottom quartile than if they were in the middle. Less than 10% of hospitals were in the top quartile for one condition in the mortality or readmission pair and in the bottom quartile for the other condition in the pair. Kappa scores for same quartile of performance between pairs of outcomes ranged from 0.16 to 0.27, and were highest for HF and pneumonia for both mortality and readmission rates.

Measures of Agreement for Quartiles of Performance in Mortality and Readmission Pairs

Condition Pair   Same Quartile (Any) (%)   Same Quartile (Q1 or Q4) (%)   Q1 in One and Q4 in the Other (%)   Weighted Kappa   Spearman Correlation
Mortality
  MI and HF      34.8                      20.2                           7.9                                 0.19             0.25
  MI and PN      32.7                      18.8                           8.2                                 0.16             0.22
  HF and PN      35.9                      21.8                           5.0                                 0.26             0.36
Readmission
  MI and HF      36.6                      21.0                           7.5                                 0.22             0.28
  MI and PN      34.0                      19.6                           8.1                                 0.19             0.24
  HF and PN      37.1                      22.6                           5.4                                 0.27             0.37

Abbreviations: HF, heart failure; MI, myocardial infarction; PN, pneumonia.

In subgroup analyses, the highest mortality correlation was between HF and pneumonia in hospitals with more than 600 beds (r = 0.51, P = 0.0009), and the highest readmission correlation was between AMI and HF in hospitals with more than 600 beds (r = 0.67, P < 0.0001). Across both measures and all 3 condition pairs, correlations between conditions increased with increasing hospital bed size, presence of cardiac surgery capability, and increasing population of the hospital's Census Bureau statistical area. Furthermore, for most measures and condition pairs, correlations between conditions were highest in not‐for‐profit hospitals, hospitals belonging to the Council of Teaching Hospitals, and non‐safety net hospitals (Table 3).

For all condition pairs, the correlation between readmission rates was significantly higher than the correlation between mortality rates (P < 0.01). In subgroup analyses, readmission correlations were also significantly higher than mortality correlations for all pairs of conditions among moderate‐sized hospitals, among nonprofit hospitals, among teaching hospitals that did not belong to the Council of Teaching Hospitals, and among non‐safety net hospitals (Table 5).

Comparison of Correlations Between Mortality Rates and Correlations Between Readmission Rates for Condition Pairs

                                  AMI and HF                    AMI and PN                    HF and PN
Description                       N     MC    RC    P           N     MC    RC    P           N     MC    RC    P
All                               4457  0.31  0.38  <0.0001     4459  0.27  0.32  0.007       4731  0.41  0.46  0.0004
Hospitals with ≥25 patients       2472  0.33  0.44  <0.001      2463  0.31  0.38  0.01        4104  0.42  0.47  0.001
No. of beds
  >600                            156   0.38  0.67  0.0002      156   0.43  0.50  0.48        160   0.51  0.66  0.042
  300–600                         626   0.29  0.54  <0.0001     626   0.31  0.45  0.003       630   0.49  0.58  0.033
  <300                            3494  0.28  0.30  0.21        3496  0.23  0.26  0.17        3733  0.37  0.43  0.003
Ownership
  Not‐for‐profit                  2614  0.32  0.43  <0.0001     2617  0.28  0.36  0.003       2697  0.42  0.50  0.0003
  For‐profit                      662   0.30  0.29  0.90        661   0.23  0.22  0.75        699   0.40  0.40  0.99
  Government                      1000  0.25  0.32  0.09        1000  0.22  0.29  0.09        1127  0.39  0.43  0.21
Teaching status
  COTH                            276   0.31  0.54  0.001       277   0.35  0.46  0.10        278   0.54  0.59  0.41
  Teaching                        504   0.22  0.52  <0.0001     504   0.28  0.42  0.012       508   0.43  0.56  0.005
  Nonteaching                     3496  0.29  0.32  0.18        3497  0.24  0.26  0.46        3737  0.39  0.43  0.016
Cardiac facility type
  CABG                            1465  0.33  0.48  <0.0001     1467  0.30  0.37  0.018       1483  0.47  0.51  0.103
  Cath lab                        577   0.25  0.32  0.18        577   0.26  0.37  0.046       579   0.36  0.47  0.022
  Neither                         2234  0.26  0.28  0.48        2234  0.21  0.27  0.037       2461  0.36  0.44  0.002
Core‐based statistical area
  Division                        618   0.38  0.46  0.09        620   0.34  0.40  0.18        630   0.41  0.56  0.001
  Metro                           1833  0.26  0.38  <0.0001     1832  0.26  0.30  0.21        1896  0.42  0.40  0.63
  Micro                           787   0.24  0.32  0.08        787   0.22  0.30  0.11        820   0.34  0.46  0.003
  Rural                           1038  0.21  0.22  0.83        1039  0.13  0.21  0.056       1177  0.32  0.43  0.002
Safety net status
  No                              2961  0.33  0.40  0.001       2963  0.28  0.33  0.036       3062  0.41  0.48  0.001
  Yes                             1314  0.23  0.34  0.003       1314  0.22  0.30  0.015       1460  0.40  0.45  0.14

Abbreviations: AMI, acute myocardial infarction; CABG, coronary artery bypass graft surgery capability; Cath lab, cardiac catheterization lab capability; COTH, Council of Teaching Hospitals member; HF, heart failure; MC, mortality correlation (Pearson r); PN, pneumonia; RC, readmission correlation (Pearson r).

DISCUSSION

In this study, we found that risk‐standardized mortality rates for 3 common medical conditions were moderately correlated within institutions, as were risk‐standardized readmission rates. Readmission rates were more strongly correlated than mortality rates, and all rates tracked closest together in large, urban, and/or teaching hospitals. Very few hospitals were in the top quartile of performance for one condition and in the bottom quartile for a different condition.

Our findings are consistent with the hypothesis that 30‐day risk‐standardized mortality and 30‐day risk‐standardized readmission rates, in part, capture broad aspects of hospital quality that transcend condition‐specific activities. In this study, readmission rates tracked better together than mortality rates for every pair of conditions, suggesting that there may be a greater contribution of hospital‐wide environment, structure, and processes to readmission rates than to mortality rates. This difference is plausible because services specific to readmission, such as discharge planning, care coordination, medication reconciliation, and discharge communication with patients and outpatient clinicians, are typically hospital‐wide processes.

Our study differs from earlier studies of medical conditions in that the correlations we found were higher.18, 19 There are several possible explanations for this difference. First, during the intervening 15–25 years since those studies were performed, care for these conditions has evolved substantially, such that there are now more standardized protocols available for all 3 of these diseases. Hospitals that are sufficiently organized or acculturated to systematically implement care protocols may have the infrastructure or culture to do so for all conditions, increasing correlation of performance among conditions. In addition, there are now more technologies and systems available that span care for multiple conditions, such as electronic medical records and quality committees, than were available in previous generations. Second, one of these studies utilized less robust risk‐adjustment,18 and neither used the same methodology of risk standardization. Nonetheless, it is interesting to note that Rosenthal and colleagues identified the same increase in correlation with higher volumes than we did.19 Studies investigating mortality correlations among surgical procedures, on the other hand, have generally found higher correlations than we found in these medical conditions.16, 17

Accountable care organizations will be assessed using an all‐condition readmission measure,31 several states track all‐condition readmission rates,32–34 and several countries measure all‐condition mortality.35 An all‐condition measure for quality assessment first requires that there be a hospital‐wide quality signal above and beyond disease‐specific care. This study suggests that a moderate signal exists for readmission and, to a slightly lesser extent, for mortality, across 3 common conditions. There are other considerations, however, in developing all‐condition measures. There must be adequate risk adjustment for the wide variety of conditions that are included, and there must be a means of accounting for the variation in types of conditions and procedures cared for by different hospitals. Our study does not address these challenges, which have been described to be substantial for mortality measures.35

We were surprised by the finding that risk‐standardized rates correlated more strongly within larger institutions than smaller ones, because one might assume that care within smaller hospitals might be more homogenous. It may be easier, however, to detect a quality signal in hospitals with higher volumes of patients for all 3 conditions, because estimates for these hospitals are more precise. Consequently, we have greater confidence in results for larger volumes, and suspect a similar quality signal may be present but more difficult to detect statistically in smaller hospitals. Overall correlations were higher when we restricted the sample to hospitals with at least 25 cases, as is used for public reporting. It is also possible that the finding is real given that large‐volume hospitals have been demonstrated to provide better care for these conditions and are more likely to adopt systems of care that affect multiple conditions, such as electronic medical records.14, 36

The kappa scores comparing quartile of national performance for pairs of conditions were only in the fair range. There are several possible explanations for this fact: 1) outcomes for these 3 conditions are not measuring the same constructs; 2) they are all measuring the same construct, but they are unreliable in doing so; and/or 3) hospitals have similar latent quality for all 3 conditions, but the national quality of performance differs by condition, yielding variable relative performance per hospital for each condition. Based solely on our findings, we cannot distinguish which, if any, of these explanations may be true.31

Our study has several limitations. First, all 3 conditions currently publicly reported by CMS are medical diagnoses, although AMI patients may be cared for in distinct cardiology units and often undergo procedures; therefore, we cannot determine the degree to which correlations reflect hospital‐wide quality versus medicine‐wide quality. An institution may have a weak medicine department but a strong surgical department or vice versa. Second, it is possible that the correlations among conditions for readmission and among conditions for mortality are attributable to patient characteristics that are not adequately adjusted for in the risk‐adjustment model, such as socioeconomic factors, or to hospital characteristics not related to quality, such as coding practices or inter‐hospital transfer rates. For this to be true, these unmeasured characteristics would have to be consistent across different conditions within each hospital and have a consistent influence on outcomes. Third, it is possible that public reporting may have prompted disease‐specific focus on these conditions. We do not have data from non‐publicly reported conditions to test this hypothesis. Fourth, there are many small‐volume hospitals in this study; their estimates for readmission and mortality are less reliable than for large‐volume hospitals, potentially limiting our ability to detect correlations in this group of hospitals.

This study lends credence to the hypothesis that 30‐day risk‐standardized mortality and readmission rates for individual conditions may reflect aspects of hospital‐wide quality or at least medicine‐wide quality, although the correlations are not large enough to conclude that hospital‐wide factors play a dominant role, and there are other possible explanations for the correlations. Further work is warranted to better understand the causes of the correlations, and to better specify the nature of hospital factors that contribute to correlations among outcomes.

Acknowledgements

Disclosures: Dr Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation of Aging Research through the Paul B. Beeson Career Development Award Program. Dr Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30 AG021342 NIH/NIA). Dr Krumholz is supported by grant U01 HL105270‐01 (Center for Cardiovascular Outcomes Research at Yale University) from the National Heart, Lung, and Blood Institute. Dr Krumholz chairs a cardiac scientific advisory board for UnitedHealth. Authors Drye, Krumholz, and Wang receive support from the Centers for Medicare & Medicaid Services (CMS) to develop and maintain performance measures that are used for public reporting. The analyses upon which this publication is based were performed under Contract Number HHSM‐500‐2008‐0025I Task Order T0001, entitled Measure & Instrument Development and Support (MIDS), Development and Re‐evaluation of the CMS Hospital Outcomes and Efficiency Measures, funded by the Centers for Medicare & Medicaid Services, an agency of the US Department of Health and Human Services. The Centers for Medicare & Medicaid Services reviewed and approved the use of its data for this work, and approved submission of the manuscript. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US Government. The authors assume full responsibility for the accuracy and completeness of the ideas presented.

References
  1. US Department of Health and Human Services. Hospital Compare. 2011. Available at: http://www.hospitalcompare.hhs.gov. Accessed March 5, 2011.
  2. Balla U, Malnick S, Schattner A. Early readmissions to the department of medicine as a screening tool for monitoring quality of care problems. Medicine (Baltimore). 2008;87(5):294-300.
  3. Dubois RW, Rogers WH, Moxley JH, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317(26):1674-1680.
  4. Werner RM, Bradlow ET. Relationship between Medicare's hospital compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  5. Jha AK, Orav EJ, Epstein AM. Public reporting of discharge planning and rates of readmissions. N Engl J Med. 2009;361(27):2637-2645.
  6. Patterson ME, Hernandez AF, Hammill BG, et al. Process of care performance measures and long-term outcomes in patients hospitalized with heart failure. Med Care. 2010;48(3):210-216.
  7. Chukmaitov AS, Bazzoli GJ, Harless DW, Hurley RE, Devers KJ, Zhao M. Variations in inpatient mortality among hospitals in different system types, 1995 to 2000. Med Care. 2009;47(4):466-473.
  8. Devereaux PJ, Choi PT, Lacchetti C, et al. A systematic review and meta-analysis of studies comparing mortality rates of private for-profit and private not-for-profit hospitals. Can Med Assoc J. 2002;166(11):1399-1406.
  9. Curry LA, Spatz E, Cherlin E, et al. What distinguishes top-performing hospitals in acute myocardial infarction mortality rates? A qualitative study. Ann Intern Med. 2011;154(6):384-390.
  10. Hansen LO, Williams MV, Singer SJ. Perceptions of hospital safety climate and incidence of readmission. Health Serv Res. 2011;46(2):596-616.
  11. Longhurst CA, Parast L, Sandborg CI, et al. Decrease in hospital-wide mortality rate after implementation of a commercially sold computerized physician order entry system. Pediatrics. 2010;126(1):14-21.
  12. Fink A, Yano EM, Brook RH. The condition of the literature on differences in hospital mortality. Med Care. 1989;27(4):315-336.
  13. Gandjour A, Bannenberg A, Lauterbach KW. Threshold volumes associated with higher survival in health care: a systematic review. Med Care. 2003;41(10):1129-1141.
  14. Ross JS, Normand SL, Wang Y, et al. Hospital volume and 30-day mortality for three common medical conditions. N Engl J Med. 2010;362(12):1110-1118.
  15. Patient Protection and Affordable Care Act. Pub. L. No. 111-148, 124 Stat, §3025. 2010. Available at: http://www.gpo.gov/fdsys/pkg/PLAW-111publ148/content-detail.html. Accessed July 26, 2012.
  16. Dimick JB, Staiger DO, Birkmeyer JD. Are mortality rates for different operations related? Implications for measuring the quality of noncardiac surgery. Med Care. 2006;44(8):774-778.
  17. Goodney PP, O'Connor GT, Wennberg DE, Birkmeyer JD. Do hospitals with low mortality rates in coronary artery bypass also perform well in valve replacement? Ann Thorac Surg. 2003;76(4):1131-1137.
  18. Chassin MR, Park RE, Lohr KN, Keesey J, Brook RH. Differences among hospitals in Medicare patient mortality. Health Serv Res. 1989;24(1):1-31.
  19. Rosenthal GE, Shah A, Way LE, Harper DL. Variations in standardized hospital mortality rates for six common medical diagnoses: implications for profiling hospital quality. Med Care. 1998;36(7):955-964.
  20. Lindenauer PK, Normand SL, Drye EE, et al. Development, validation, and results of a measure of 30-day readmission following hospitalization for pneumonia. J Hosp Med. 2011;6(3):142-150.
  21. Keenan PS, Normand SL, Lin Z, et al. An administrative claims measure suitable for profiling hospital performance on the basis of 30-day all-cause readmission rates among patients with heart failure. Circ Cardiovasc Qual Outcomes. 2008;1:29-37.
  22. Ross JS, Cha SS, Epstein AJ, et al. Quality of care for acute myocardial infarction at urban safety-net hospitals. Health Aff (Millwood). 2007;26(1):238-248.
  23. National Quality Measures Clearinghouse. 2011. Available at: http://www.qualitymeasures.ahrq.gov/. Accessed February 21, 2011.
  24. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  25. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30-day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  26. Bratzler DW, Normand SL, Wang Y, et al. An administrative claims model for profiling hospital 30-day mortality rates for pneumonia patients. PLoS One. 2011;6(4):e17401.
  27. Krumholz HM, Lin Z, Drye EE, et al. An administrative claims measure suitable for profiling hospital performance based on 30-day all-cause readmission rates among patients with acute myocardial infarction. Circ Cardiovasc Qual Outcomes. 2011;4(2):243-252.
  28. Kaiser HF. The application of electronic computers to factor analysis. Educ Psychol Meas. 1960;20:141-151.
  29. Fisher RA. On the 'probable error' of a coefficient of correlation deduced from a small sample. Metron. 1921;1:3-32.
  30. Raghunathan TE, Rosenthal R, Rubin DB. Comparing correlated but nonoverlapping correlations. Psychol Methods. 1996;1(2):178-183.
  31. Centers for Medicare and Medicaid Services. Medicare Shared Savings Program: Accountable Care Organizations, Final Rule. Fed Reg. 2011;76:67802-67990.
  32. Massachusetts Healthcare Quality and Cost Council. Potentially Preventable Readmissions. 2011. Available at: http://www.mass.gov/hqcc/the‐hcqcc‐council/data‐submission‐information/potentially‐preventable‐readmissions‐ppr.html. Accessed February 29, 2012.
  33. Texas Medicaid. Potentially Preventable Readmission (PPR). 2012. Available at: http://www.tmhp.com/Pages/Medicaid/Hospital_PPR.aspx. Accessed February 29, 2012.
  34. New York State. Potentially Preventable Readmissions. 2011. Available at: http://www.health.ny.gov/regulations/recently_adopted/docs/2011–02‐23_potentially_preventable_readmissions.pdf. Accessed February 29, 2012.
  35. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital-wide mortality rates. N Engl J Med. 2010;363(26):2530-2539.
  36. Jha AK, DesRoches CM, Campbell EG, et al. Use of electronic health records in U.S. hospitals. N Engl J Med. 2009;360(16):1628-1638.
Issue
Journal of Hospital Medicine - 7(9)
Page Number
690-696
Display Headline
Correlations among risk‐standardized mortality rates and among risk‐standardized readmission rates within hospitals
Article Source

Copyright © 2012 Society of Hospital Medicine

Correspondence Location
Section of General Internal Medicine, Department of Medicine, Yale University School of Medicine, PO Box 208093, New Haven, CT 06520‐8093