Predictors of warfarin‐associated adverse events in hospitalized patients: Opportunities to prevent patient harm

Yun Wang, PhD

Affiliations: Yale New Haven Health Services Corporation/The Center for Outcomes Research and Evaluation, New Haven, Connecticut; Section of Cardiovascular Medicine, Yale University School of Medicine, New Haven, Connecticut

Warfarin is 1 of the most common causes of adverse drug events, with hospitalized patients being particularly at risk compared to outpatients.[1] Despite the availability of new oral anticoagulants (NOACs), physicians commonly prescribe warfarin to hospitalized patients,[2] likely in part due to the greater difficulty in reversing NOACs compared to warfarin. Furthermore, uptake of the NOACs is likely to be slow in resource‐poor countries due to the lower cost of warfarin.[3] However, the narrow therapeutic index, frequent drug‐drug interactions, and patient variability in metabolism of warfarin make management challenging.[4] Thus, warfarin remains a significant cause of adverse events in hospitalized patients, occurring in approximately 3% to 8% of exposed patients, depending on underlying condition.[2, 5]

An elevated international normalized ratio (INR) is a strong predictor of drug‐associated adverse events (patient harm). In a study employing 21 different electronic triggers to identify potential adverse events, an elevated INR had the highest yield for events associated with harm (96% of INRs >5.0 were associated with harm).[6] Although pharmacist‐managed inpatient anticoagulation services have been shown to improve warfarin management,[7, 8] there are evidence gaps regarding the causes of warfarin‐related adverse events and practice changes that could decrease their frequency. Although overanticoagulation is a well‐known risk factor for warfarin‐related adverse events,[9, 10] there are few evidence‐based warfarin monitoring and dosing recommendations for hospitalized patients.[10] For example, the 2012 American College of Chest Physicians Antithrombotic Guidelines[11] provide a weak recommendation on initial dosing of warfarin, but no recommendations on how frequently to monitor the INR, or appropriate dosing responses to INR levels. Although many hospitals employ protocols that suggest daily INR monitoring until stable, there are no evidence‐based guidelines to support this practice.[12] Conversely, there are reports of flags to order an INR level that are not activated unless greater than 2[13] or 3 days[14] pass since the prior INR. Protocols from some major academic medical centers suggest that after a therapeutic INR is reached, INR levels can be measured intermittently, as infrequently as twice a week.[15, 16]

The 2015 Joint Commission anticoagulant‐focused National Patient Safety Goal[17] (initially issued in 2008) mandates the assessment of baseline coagulation status before starting warfarin, and warfarin dosing based on a current INR; however, "current" is not defined. Neither the extent to which the mandate for assessing baseline coagulation status is adhered to nor the relationship between this process of care and patient outcomes is known. The importance of adverse drug events associated with anticoagulants, including warfarin, was also recently highlighted in the 2014 federal National Action Plan for Adverse Drug Event Prevention. In this document, the prevention of adverse drug events associated with anticoagulants was 1 of the 3 areas selected for special national attention and action.[18]

The Medicare Patient Safety Monitoring System (MPSMS) is a national chart abstraction‐based system that includes 21 in‐hospital adverse event measures, including warfarin‐associated adverse drug events.[2] Because of the importance of warfarin‐associated bleeding in hospitalized patients, we analyzed MPSMS data to determine what factors related to INR monitoring practices place patients at risk for these events. We were particularly interested in determining if we could detect potentially modifiable predictors of overanticoagulation and warfarin‐associated adverse events.

METHODS

Study Sample

We combined 2009 to 2013 MPSMS all payer data from the Centers for Medicare & Medicaid Services Hospital Inpatient Quality Reporting program for 4 common medical conditions: (1) acute myocardial infarction, (2) heart failure, (3) pneumonia, and (4) major surgery (as defined by the national Surgical Care Improvement Project).[19] To increase the sample size for cardiac patients, we combined myocardial infarction patients and heart failure patients into 1 group: acute cardiovascular disease. Patients under 18 years of age are excluded from the MPSMS sample, and we excluded patients whose INR never exceeded 1.5 after the initiation of warfarin therapy.

Patient Characteristics

Patient characteristics included demographics (age, sex, race [white, black, and other race]) and comorbidities. Comorbidities abstracted from medical records included: histories at the time of hospital admission of heart failure, obesity, coronary artery disease, renal disease, cerebrovascular disease, chronic obstructive pulmonary disease, cancer, diabetes, and smoking. The use of anticoagulants other than warfarin was also captured.

INRs

The INR measurement period for each patient started from the initial date of warfarin administration and ended on the date the maximum INR occurred. If a patient had more than 1 INR value on any day, the higher INR value was selected. A day without an INR measurement was defined as no INR value documented for a calendar day within the INR measurement period, starting on the third day of warfarin and ending on the day of the maximum INR level.
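
The definition above lends itself to simple date arithmetic. The following Python sketch is illustrative only (the function, the input structure, and the example values are assumptions, not part of the MPSMS abstraction tooling); it counts calendar days without a documented INR between the third day of warfarin and the day of the maximum INR, keeping the highest value when more than 1 INR is recorded on a day.

```python
from datetime import date, timedelta

def days_without_inr(warfarin_start: date, inr_by_day: dict[date, list[float]]) -> int:
    """Count calendar days with no documented INR, per the definition above:
    the window starts on the third day of warfarin and ends on the day the
    maximum INR occurred; when a day has several INRs, the highest is kept."""
    daily_max = {d: max(vals) for d, vals in inr_by_day.items() if vals}
    if not daily_max:
        return 0
    peak = max(daily_max.values())
    # Day the maximum INR occurred (ties resolved to the earliest such day).
    max_inr_day = min(d for d, v in daily_max.items() if v == peak)
    window_start = warfarin_start + timedelta(days=2)  # third day of warfarin
    missing, d = 0, window_start
    while d <= max_inr_day:
        if d not in daily_max:
            missing += 1
        d += timedelta(days=1)
    return missing

# Hypothetical example: warfarin started June 1; no INR documented on June 4.
inrs = {date(2013, 6, 1): [1.1], date(2013, 6, 2): [1.4],
        date(2013, 6, 3): [1.9, 2.1], date(2013, 6, 5): [3.8]}
print(days_without_inr(date(2013, 6, 1), inrs))  # -> 1
```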

Outcomes

The study was performed to assess the association between the number of days on which a patient did not have an INR measured while receiving warfarin and the occurrence of (1) an INR ≥6.0[20, 21] (intermediate outcome) and (2) a warfarin‐associated adverse event. A description of the MPSMS measure of warfarin‐associated adverse events has been previously published.[2] Warfarin‐associated adverse events must have occurred within 48 hours of predefined triggers: an INR ≥4.0, cessation of warfarin therapy, administration of vitamin K or fresh frozen plasma, or transfusion of packed red blood cells other than in the setting of a surgical procedure. Warfarin‐associated adverse events were divided into minor and major events for this analysis. Minor events were defined as bleeding, drop in hematocrit of ≥3 points (occurring more than 48 hours after admission and not associated with surgery), or development of a hematoma. Major events were death, intracranial bleeding, or cardiac arrest. A patient who had both a major and a minor event was considered as having had a major event.
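
The major-over-minor precedence described above can be expressed as a simple classification rule. This is a hedged illustration (the event categories come from the text; the function and event labels are hypothetical):

```python
def classify_warfarin_event(events: set[str]) -> str:
    """Classify a patient's warfarin-associated adverse events per the rule above:
    a major event takes precedence when both major and minor events occurred."""
    major = {"death", "intracranial bleeding", "cardiac arrest"}
    minor = {"bleeding", "hematocrit drop >=3 points", "hematoma"}
    if events & major:
        return "major"
    if events & minor:
        return "minor"
    return "none"

print(classify_warfarin_event({"bleeding", "cardiac arrest"}))  # -> "major"
```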

To assess the relationship between a rapidly rising INR and a subsequent INR ≥5.0 or ≥6.0, we determined the increase in INR between the measurement done 2 days prior to the maximum INR and 1 day prior to the maximum INR. This analysis was performed only on patients whose INR was between 2.0 and 3.5 on the day prior to the maximum INR. In doing so, we sought to determine if the INR rise could predict the occurrence of a subsequent severely elevated INR in patients whose INR was within or near the therapeutic range.

Statistical Analysis

We conducted bivariate analysis to quantify the associations between lapses in measurement of the INR and subsequent warfarin‐associated adverse events, using the Mantel‐Haenszel χ2 test for categorical variables. We fitted a generalized linear model with a logit link function to estimate the association between the number of days on which an INR was not measured and the occurrence of the composite adverse event measure or of an INR ≥6.0, adjusting for baseline patient characteristics, the number of days on warfarin, and receipt of heparin and low‐molecular‐weight heparin (LMWH). To account for potential imbalances in baseline patient characteristics and warfarin use prior to admission, we conducted a second analysis using the stabilized inverse probability weights approach. Specifically, we weighted each patient by the patient's inverse propensity score of having only 1 day, at least 1 day, or at least 2 days without an INR measurement while receiving warfarin.[22, 23, 24, 25] To obtain the propensity scores, we fitted 3 logistic models with all variables included in the above primary mixed models except receipt of LMWH, heparin, and the number of days on warfarin as predictors, but with 3 different outcomes: only 1 day without an INR measurement, 1 or more days without an INR measurement, and 2 or more days without an INR measurement. Analyses were conducted using SAS version 9.2 (SAS Institute Inc., Cary, NC). All statistical testing was 2‐sided, at a significance level of 0.05. The institutional review board at Solutions IRB (Little Rock, AR) determined that the requirement for informed consent could be waived based on the nature of the study.
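
The stabilized inverse probability weighting step can be sketched in code. The Python example below is illustrative only (the authors used SAS); it shows a single exposure definition (1 or more days without an INR measurement) with hypothetical column names and a reduced covariate list, and is a minimal sketch of the weighting idea rather than the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical columns: baseline covariates, an exposure flag for >=1 day
# without an INR measurement, and the binary outcome (INR >= 6.0).
covariates = ["age", "female", "renal_disease", "heart_failure", "warfarin_prior"]

def stabilized_ipw_odds_ratio(df: pd.DataFrame, exposure: str, outcome: str) -> float:
    # 1. Propensity model: probability of the exposure given baseline covariates.
    X = sm.add_constant(df[covariates])
    ps = sm.GLM(df[exposure], X, family=sm.families.Binomial()).fit().predict(X)

    # 2. Stabilized weights: marginal exposure probability divided by the
    #    individual propensity of the exposure actually received.
    p_exposed = df[exposure].mean()
    weights = np.where(df[exposure] == 1, p_exposed / ps, (1 - p_exposed) / (1 - ps))

    # 3. Weighted outcome model with a logit link; exponentiate the exposure
    #    coefficient to obtain the IPW-adjusted odds ratio.
    fit = sm.GLM(df[outcome], sm.add_constant(df[[exposure]]),
                 family=sm.families.Binomial(), freq_weights=weights).fit()
    return float(np.exp(fit.params[exposure]))

# Usage with a hypothetical data set:
# df = pd.read_csv("mpsms_warfarin.csv")
# print(stabilized_ipw_odds_ratio(df, exposure="any_day_without_inr", outcome="inr_ge_6"))
```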

RESULTS

There were 130,828 patients included in the 2009 to 2013 MPSMS sample, of whom 19,445 (14.9%) received warfarin during their hospital stay and had at least 1 INR measurement. Among these patients, 5228 (26.9%) had no INR level above 1.5 and were excluded from further analysis, leaving 14,217 included patients. Of these patients, 1055 (7.4%) developed a warfarin‐associated adverse event. Table 1 demonstrates the baseline demographics and comorbidities of the included patients.

Table 1. Baseline Characteristics and Anticoagulant Exposure of Patients Who Received Warfarin During Their Hospital Stay and Had at Least One INR >1.5

Characteristics | Acute Cardiovascular Disease, No. (%), N = 6,394 | Pneumonia, No. (%), N = 3,668 | Major Surgery, No. (%), N = 4,155 | All, No. (%), N = 14,217
Age, mean [SD] | 75.3 [12.4] | 74.5 [13.3] | 69.4 [11.8] | 73.4 [12.7]
Sex, female | 3,175 (49.7) | 1,741 (47.5) | 2,639 (63.5) | 7,555 (53.1)
Race |  |  |  | 
  White | 5,388 (84.3) | 3,268 (89.1) | 3,760 (90.5) | 12,416 (87.3)
  Other | 1,006 (15.7) | 400 (10.9) | 395 (9.5) | 1,801 (12.7)
Comorbidities |  |  |  | 
  Cancer | 1,186 (18.6) | 939 (25.6) | 708 (17.0) | 2,833 (19.9)
  Diabetes | 3,043 (47.6) | 1,536 (41.9) | 1,080 (26.0) | 5,659 (39.8)
  Obesity | 1,938 (30.3) | 896 (24.4) | 1,260 (30.3) | 4,094 (28.8)
  Cerebrovascular disease | 1,664 (26.0) | 910 (24.8) | 498 (12.0) | 3,072 (21.6)
  Heart failure/pulmonary edema | 5,882 (92.0) | 2,052 (55.9) | 607 (14.6) | 8,541 (60.1)
  Chronic obstructive pulmonary disease | 2,636 (41.2) | 1,929 (52.6) | 672 (16.2) | 5,237 (36.8)
  Smoking | 895 (14.0) | 662 (18.1) | 623 (15.0) | 2,180 (15.3)
  Corticosteroids | 490 (7.7) | 568 (15.5) | 147 (3.5) | 1,205 (8.5)
  Coronary artery disease | 4,628 (72.4) | 1,875 (51.1) | 1,228 (29.6) | 7,731 (54.4)
  Renal disease | 3,000 (46.9) | 1,320 (36.0) | 565 (13.6) | 4,885 (34.4)
Warfarin prior to arrival | 5,074 (79.4) | 3,020 (82.3) | 898 (21.6) | 8,992 (63.3)
Heparin given during hospitalization | 850 (13.3) | 282 (7.7) | 314 (7.6) | 1,446 (10.7)
LMWH given during hospitalization | 1,591 (24.9) | 1,070 (29.2) | 1,431 (34.4) | 4,092 (28.8)

NOTE: Abbreviations: LMWH, low‐molecular‐weight heparin; SD, standard deviation.

Warfarin was started on hospital day 1 for 6825 (48.0%) of 14,217 patients. Among these patients, 6539 (95.8%) had an INR measured within 1 calendar day. We were unable to determine how many patients who started warfarin later in their hospital stay had a baseline INR, as we did not capture INRs performed prior to the day that warfarin was initiated.

Supporting Table 1 in the online version of this article demonstrates the association between an INR ≥6.0 and the occurrence of warfarin‐associated adverse events. A maximum INR ≥6.0 occurred in 469 (3.3%) of the patients included in the study, and among those patients, 133 (28.4%) experienced a warfarin‐associated adverse event compared to 922 (6.7%) adverse events in the 13,748 patients who did not develop an INR ≥6.0 (P < 0.001).

Among the 8529 patients who received warfarin for at least 3 days, 1549 (18.2%) had at least 1 day, beginning on the third day of warfarin, on which an INR was not measured. Table 2 demonstrates that patients who had 2 or more days on which the INR was not measured had higher rates of an INR ≥6.0 than patients for whom the INR was measured daily. A similar association was seen for warfarin‐associated adverse events (Table 2).

Table 2. Association Between Number of Days Without an INR Measurement and Maximum INR Among Patients Who Received Warfarin for Three Days or More, and Association Between Number of Days Without an INR Measurement and Warfarin‐Associated Adverse Events

 | No. of Patients, No. (%), N = 8,529 | Patients With INR on All Days, No. (%), N = 6,980 | Patients With 1 Day Without an INR, No. (%), N = 968 | Patients With 2 or More Days Without an INR, No. (%), N = 581 | P Value
Maximum INR |  |  |  |  | <0.01*
  1.51–5.99 | 8,183 | 6,748 (96.7) | 911 (94.1) | 524 (90.2) | 
  ≥6.0 | 346 | 232 (3.3) | 57 (5.9) | 57 (9.8) | 
Warfarin‐associated adverse events |  |  |  |  | <0.01*
  No adverse events | 7,689 (90.2) | 6,331 (90.7) | 872 (90.1) | 486 (83.6) | 
  Minor adverse events | 792 (9.3) | 617 (8.8) | 86 (8.9) | 89 (15.3) | 
  Major adverse events | 48 (0.6) | 32 (0.5) | 10 (1.0) | 6 (1.0) | 

NOTE: Abbreviations: INR, international normalized ratio. *Mantel‐Haenszel χ2. Adverse events that occurred greater than 1 calendar day prior to the maximum INR were excluded from this analysis. Because the INR values were only collected until the maximum INR was reached, this means that no adverse events included in this analysis occurred before the last day without an INR measurement.

Figure 1A demonstrates the association between the number of days without an INR measurement and the subsequent development of an INR ≥6.0 or a warfarin‐associated adverse event, adjusted for baseline patient characteristics, receipt of heparin and LMWH, and number of days on warfarin. Patients with 1 or more days without an INR measurement had higher risk‐adjusted odds ratios (ORs) of a subsequent INR ≥6.0, although the difference was not statistically significant for surgical patients. The analysis results based on inverse propensity scoring are shown in Figure 1B. Cardiac and surgical patients with 2 or more days without an INR measurement were at higher risk of having a warfarin‐associated adverse event, whereas cardiac and pneumonia patients with 1 or more days without an INR measurement were at higher risk of developing an INR ≥6.0.

Figure 1
(A) Association between number of days without an INR measurement and a subsequent INR ≥6.0 or warfarin‐associated adverse event, adjusted for baseline patient characteristics, receipt of heparin or low molecular weight heparin, and number of days receiving warfarin. (B) Stabilized inverse probability‐weighted propensity‐adjusted association between number of days without an INR measurement and a subsequent INR ≥6.0 or warfarin‐associated adverse event. Abbreviations: INR, international normalized ratio.

Supporting Table 2 in the online version of this article demonstrates the relationship between patient characteristics and the occurrence of an INR ≥6.0 or a warfarin‐related adverse event. The only characteristic associated with either of these outcomes for all 3 patient conditions was renal disease, which was positively associated with a warfarin‐associated adverse event. Warfarin use prior to arrival was associated with lower risks of both an INR ≥6.0 and a warfarin‐associated adverse event, except among surgical patients. Supporting Table 3 in the online version of this article demonstrates the differences in patient characteristics between patients who had daily INR measurement and those who had at least 1 day without an INR measurement.

Figure 2 illustrates the relationship of the maximum INR to the prior 1‐day change in INR in 4963 patients whose INR on the day prior to the maximum INR was 2.0 to 3.5. When the increase in INR was <0.9, the risk of the next day's INR being ≥6.0 was 0.7%, and if the increase was ≥0.9, the risk was 5.2%. The risk of developing an INR ≥5.0 was 1.9% if the preceding day's INR increase was <0.9 and 15.3% if the prior day's INR rise was ≥0.9. Overall, 51% of INRs ≥6.0 and 55% of INRs ≥5.0 were immediately preceded by an INR increase of ≥0.9. The positive likelihood ratio (LR) for a ≥0.9 rise in INR predicting an INR ≥6.0 was 4.2, and the positive LR was 4.9 for predicting an INR ≥5.0.
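
For reference, the positive likelihood ratio follows the standard definition (the formula is not stated in the article; the back‐calculation in the comment is an illustrative check using the reported figures, not a result from the paper):

```latex
\[
\mathrm{LR}^{+}
  = \frac{\text{sensitivity}}{1-\text{specificity}}
  = \frac{P(\Delta\mathrm{INR} \ge 0.9 \mid \mathrm{INR}_{\max} \ge 6.0)}
         {P(\Delta\mathrm{INR} \ge 0.9 \mid \mathrm{INR}_{\max} < 6.0)}
\]
% Illustrative check: with the reported sensitivity of 0.51 and LR+ of 4.2,
% the implied rate of a >=0.9 rise among patients who did not reach an
% INR >= 6.0 is roughly 0.51 / 4.2 = 0.12, broadly consistent with the 12.6%
% overall rate of a >=0.9 one-day rise shown in Figure 2.
```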

Figure 2
Relationship between prior day increase in INR and subsequent maximum INR level. Patients included in this analysis had an INR under 3.5 on the day prior to their maximum INR and a maximum INR ≥2.0. The prior INR increase represents the change in the INR from the previous day, on the day before the maximum INR was reached. Among 3250 patients, 408 (12.6%) had a 1‐day INR increase of ≥0.9. Abbreviations: INR, international normalized ratio.

There was no decline in the frequency of warfarin use among the patients in the MPSMS sample during the study period (16.7% in 2009 and 17.3% in 2013).

DISCUSSION

We studied warfarin‐associated adverse events in a nationally representative sample of patients who received warfarin while in an acute care hospital for a primary diagnosis of cardiac disease, pneumonia, or major surgery. Several findings resulted from our analysis. First, warfarin is still commonly prescribed to hospitalized patients and remains a frequent cause of adverse events; 7.4% of the 2009 to 2013 MPSMS population who received warfarin and had at least 1 INR >1.5 developed a warfarin‐associated adverse event.

Over 95% of patients who received warfarin on the day of hospital admission had an INR performed within 1 day. This is similar to the results from a 2006 single‐center study in which 95% of patients had an INR measured prior to their first dose of warfarin.[10] Since 2008, The Joint Commission's National Patient Safety Goal has required the assessment of coagulation status before starting warfarin.[17] The high level of adherence to this standard suggests that further attention to this process of care is unlikely to significantly improve patient safety.

We also found that the lack of daily INR measurements was associated with an increased risk of an INR ≥6.0 and warfarin‐associated adverse events in some patient populations. There is limited evidence addressing the appropriate frequency of INR measurement in hospitalized patients receiving warfarin. The Joint Commission National Patient Safety Goal requires use of a current INR to adjust this therapy, but provides no specifics.[17] Although some experts believe that INRs should be monitored daily in hospitalized patients, this does not appear to be uniformly accepted. In some reports, 2[13] or 3[14] consecutive days without the performance of an INR was required to activate a reminder. Protocols from some major teaching hospitals specify intermittent monitoring once the INR is therapeutic.[15, 16] Because our results suggest that lapses in INR measurement lead to overanticoagulation and warfarin‐related adverse events, it may be appropriate to measure INRs daily in most hospitalized patients receiving warfarin. This would be consistent with the many known causes of INR instability in patients admitted to the hospital, including drug‐drug interactions, hepatic dysfunction, and changes in volume of distribution, such that truly stable hospitalized patients are likely rare. Indeed, hospital admission is a well‐known predictor of instability of warfarin effect.[9] Although our results suggest that daily INR measurement is associated with a lower rate of overanticoagulation, future studies might better define lower‐risk patients for whom daily INR measurement would not be necessary.

A prior INR increase of ≥0.9 in 1 day was associated with an increased risk of subsequent overanticoagulation. Although a rapidly rising INR is known to predict overanticoagulation,[10, 14] we could find no evidence as to what specific rate of rise confers this risk. Our results suggest that use of a warfarin dosing protocol that considers both the absolute value of the INR and the rate of rise could reduce warfarin‐related adverse events.
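
As a hedged sketch of how such a protocol rule might be expressed (the ≥0.9 rise threshold comes from this study and the ≥4.0 level from the MPSMS trigger definition; the function itself is hypothetical and not a validated dosing algorithm):

```python
def flag_for_dose_review(current_inr: float, prior_inr: float) -> bool:
    """Flag a patient for warfarin dose review using both the absolute INR and
    the 1-day rate of rise, per the suggestion above (illustrative only)."""
    rapid_rise = (current_inr - prior_inr) >= 0.9  # rise threshold observed in this study
    elevated = current_inr >= 4.0                  # trigger level used in the MPSMS measure
    return rapid_rise or elevated

print(flag_for_dose_review(current_inr=3.2, prior_inr=2.2))  # -> True (rapid rise)
```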

There are important limitations of our study. We did not abstract warfarin dosages, which precluded study of the appropriateness of both initial warfarin dosing and adjustment of the warfarin dose based on INR results. MPSMS does not reliably capture antiplatelet agents or other agents, such as antibiotics, that have drug‐drug interactions with warfarin, so this factor could theoretically have confounded our results. Antibiotic use seems unlikely to be a major confounder, because patients with acute cardiovascular disease demonstrated a relationship between INR measurement and an INR ≥6.0 similar to that seen in pneumonia and surgical patients, despite the latter patients likely having greater antibiotic exposure. Furthermore, MPSMS does not capture indices of severity of illness, so other unmeasured confounders could have influenced our results. Although we have data only for patients admitted to the hospital for 4 conditions, these conditions represent approximately 22% of hospital admissions in the United States.[2] Strengths of our study include the nationally representative and randomly selected cases and the use of data obtained from chart abstraction rather than administrative data. Through the use of centralized data abstraction, we avoided the potential bias introduced when hospitals self‐report adverse events.

In summary, in a national sample of patients admitted to the hospital for 4 common conditions, warfarin‐associated adverse events were detected in 7.4% of patients who received warfarin. Lack of daily INR measurement was associated with an increased risk of overanticoagulation and warfarin‐associated adverse events in certain patient populations. A 1‐day increase in the INR of ≥0.9 predicted subsequent overanticoagulation. These results provide actionable opportunities to improve safety in some hospitalized patients receiving warfarin.

Acknowledgements

The authors express their appreciation to Dan Budnitz, MD, MPH, for his advice regarding study design and his review and comments on a draft of this manuscript.

Disclosures: This work was supported by contract HHSA290201200003C from the Agency for Healthcare Research and Quality, United States Department of Health and Human Services, Rockville, Maryland. Qualidigm was the contractor. The authors assume full responsibility for the accuracy and completeness of the ideas. Dr. Metersky has worked on various quality improvement and patient safety projects with Qualidigm, Centers for Medicare & Medicaid Services, and the Agency for Healthcare Research and Quality. His employer has received remuneration for this work. Dr. Krumholz works under contract with the Centers for Medicare & Medicaid Services to develop and maintain performance measures. Dr. Krumholz is the chair of a cardiac scientific advisory board for UnitedHealth and the recipient of a research grant from Medtronic, Inc. through Yale University. The other authors report no conflicts of interest.

References
  1. Nutescu EA, Wittkowsky AK, Burnett A, Merli GJ, Ansell JE, Garcia DA. Delivery of optimized inpatient anticoagulation therapy: consensus statement from the anticoagulation forum. Ann Pharmacother. 2013;47:714–724.
  2. Wang Y, Eldridge N, Metersky ML, et al. National trends in patient safety for four common conditions, 2005–2011. N Engl J Med. 2014;370:341–351.
  3. Eikelboom JW, Weitz JI. Update on antithrombotic therapy: new anticoagulants. Circulation. 2010;121:1523–1532.
  4. Voora D, McLeod HL, Eby C, Gage BF. The pharmacogenetics of coumarin therapy. Pharmacogenomics. 2005;6:503–513.
  5. Classen DC, Jaser L, Budnitz DS. Adverse drug events among hospitalized Medicare patients: epidemiology and national estimates from a new approach to surveillance. Jt Comm J Qual Patient Saf. 2010;36:12–21.
  6. Szekendi MK, Sullivan C, Bobb A, et al. Active surveillance using electronic triggers to detect adverse events in hospitalized patients. Qual Saf Health Care. 2006;15:184–190.
  7. Dawson NL, Porter IE, Klipa D, et al. Inpatient warfarin management: pharmacist management using a detailed dosing protocol. J Thromb Thrombolysis. 2012;33:178–184.
  8. Wong YM, Quek YN, Tay JC, Chadachan V, Lee HK. Efficacy and safety of a pharmacist‐managed inpatient anticoagulation service for warfarin initiation and titration. J Clin Pharm Ther. 2011;36:585–591.
  9. Palareti G, Leali N, Coccheri S, et al. Bleeding complications of oral anticoagulant treatment: an inception‐cohort, prospective collaborative study (ISCOAT). Italian Study on Complications of Oral Anticoagulant Therapy. Lancet. 1996;348:423–428.
  10. Dawson NL, Klipa D, O'Brien AK, Crook JE, Cucchi MW, Valentino AK. Oral anticoagulation in the hospital: analysis of patients at risk. J Thromb Thrombolysis. 2011;31:22–26.
  11. Holbrook A, Schulman S, Witt DM, et al. Evidence‐based management of anticoagulant therapy: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence‐Based Clinical Practice Guidelines. Chest. 2012;141:e152S–e184S.
  12. Agency for Healthcare Research and Quality. National Guideline Clearinghouse. Available at: http://www.guideline.gov. Accessed April 30, 2015.
  13. Lederer J, Best D. Reduction in anticoagulation‐related adverse drug events using a trigger‐based methodology. Jt Comm J Qual Patient Saf. 2005;31:313–318.
  14. Hartis CE, Gum MO, Lederer JW. Use of specific indicators to detect warfarin‐related adverse events. Am J Health Syst Pharm. 2005;62:1683–1688.
  15. University of Wisconsin Health. Warfarin management–adult–inpatient clinical practice guideline. Available at: http://www.uwhealth.org/files/uwhealth/docs/pdf3/Inpatient_Warfarin_Guideline.pdf. Accessed April 30, 2015.
  16. Anticoagulation Guidelines - LSU Health Shreveport. Available at: http://myhsc.lsuhscshreveport.edu/pharmacy/PT%20Policies/Anticoagulation_Safety.pdf. Accessed November 29, 2015.
  17. The Joint Commission. National patient safety goals effective January 1, 2015. Available at: http://www.jointcommission.org/assets/1/6/2015_NPSG_HAP.pdf. Accessed November 29, 2015.
  18. U.S. Department of Health and Human Services. Office of Disease Prevention and Health Promotion. Available at: http://health.gov/hcq/pdfs/ade-action-plan-508c.pdf. Accessed November 29, 2015.
  19. The Joint Commission. Surgical care improvement project. Available at: http://www.jointcommission.org/surgical_care_improvement_project. Accessed May 5, 2015.
  20. Dager WE, Branch JM, King JH, et al. Optimization of inpatient warfarin therapy: impact of daily consultation by a pharmacist‐managed anticoagulation service. Ann Pharmacother. 2000;34:567–572.
  21. Hammerquist RJ, Gulseth MP, Stewart DW. Effects of requiring a baseline International Normalized Ratio for inpatients treated with warfarin. Am J Health Syst Pharm. 2010;67:17–22.
  22. Freedman DA, Berk RA. Weighting regressions by propensity scores. Eval Rev. 2008;32:392–409.
  23. Austin PC. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivar Behav Res. 2011;46:399–424.
  24. D'Agostino RB. Propensity score methods for bias reduction in the comparison of a treatment to a non‐randomized control group. Stat Med. 1998;17:2265–2281.
  25. Rosenbaum P, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70:41–55.
Journal of Hospital Medicine - 11(4): 276-282

Warfarin is 1 of the most common causes of adverse drug events, with hospitalized patients being particularly at risk compared to outpatients.[1] Despite the availability of new oral anticoagulants (NOACs), physicians commonly prescribe warfarin to hospitalized patients,[2] likely in part due to the greater difficulty in reversing NOACs compared to warfarin. Furthermore, uptake of the NOACs is likely to be slow in resource‐poor countries due to the lower cost of warfarin.[3] However, the narrow therapeutic index, frequent drug‐drug interactions, and patient variability in metabolism of warfarin makes management challenging.[4] Thus, warfarin remains a significant cause of adverse events in hospitalized patients, occurring in approximately 3% to 8% of exposed patients, depending on underlying condition.[2, 5]

An elevated international normalized ratio (INR) is a strong predictor of drug‐associated adverse events (patient harm). In a study employing 21 different electronic triggers to identify potential adverse events, an elevated INR had the highest yield for events associated with harm (96% of INRs >5.0 associated with harm).[6] Although pharmacist‐managed inpatient anticoagulation services have been shown to improve warfarin management,[7, 8] there are evidence gaps regarding the causes of warfarin‐related adverse events and practice changes that could decrease their frequency. Although overanticoagulation is a well‐known risk factor for warfarin‐related adverse events,[9, 10] there are few evidence‐based warfarin monitoring and dosing recommendations for hospitalized patients.[10] For example, the 2012 American College of Chest Physicians Antithrombotic Guidelines[11] provide a weak recommendation on initial dosing of warfarin, but no recommendations on how frequently to monitor the INR, or appropriate dosing responses to INR levels. Although many hospitals employ protocols that suggest daily INR monitoring until stable, there are no evidence‐based guidelines to support this practice.[12] Conversely, there are reports of flags to order an INR level that are not activated unless greater than 2[13] or 3 days[14] pass since the prior INR. Protocols from some major academic medical centers suggest that after a therapeutic INR is reached, INR levels can be measured intermittently, as infrequently as twice a week.[15, 16]

The 2015 Joint Commission anticoagulant‐focused National Patient Safety Goal[17] (initially issued in 2008) mandates the assessment of baseline coagulation status before starting warfarin, and warfarin dosing based on a current INR; however, current is not defined. Neither the extent to which the mandate for assessing baseline coagulation status is adhered to nor the relationship between this process of care and patient outcomes is known. The importance of adverse drug events associated with anticoagulants, included warfarin, was also recently highlighted in the 2014 federal National Action Plan for Adverse Drug Event Prevention. In this document, the prevention of adverse drug events associated with anticoagulants was 1 of the 3 areas selected for special national attention and action.[18]

The Medicare Patient Safety Monitoring System (MPSMS) is a national chart abstraction‐based system that includes 21 in‐hospital adverse event measures, including warfarin‐associated adverse drug events.[2] Because of the importance of warfarin‐associated bleeding in hospitalized patients, we analyzed MPSMS data to determine what factors related to INR monitoring practices place patients at risk for these events. We were particularly interested in determining if we could detect potentially modifiable predictors of overanticoagulation and warfarin‐associated adverse events.

METHODS

Study Sample

We combined 2009 to 2013 MPSMS all payer data from the Centers for Medicare & Medicaid Services Hospital Inpatient Quality Reporting program for 4 common medical conditions: (1) acute myocardial infarction, (2) heart failure, (3) pneumonia, and (4) major surgery (as defined by the national Surgical Care Improvement Project).[19] To increase the sample size for cardiac patients, we combined myocardial infarction patients and heart failure patients into 1 group: acute cardiovascular disease. Patients under 18 years of age are excluded from the MPSMS sample, and we excluded patients whose INR never exceeded 1.5 after the initiation of warfarin therapy.

Patient Characteristics

Patient characteristics included demographics (age, sex, race [white, black, and other race]) and comorbidities. Comorbidities abstracted from medical records included: histories at the time of hospital admission of heart failure, obesity, coronary artery disease, renal disease, cerebrovascular disease, chronic obstructive pulmonary disease, cancer, diabetes, and smoking. The use of anticoagulants other than warfarin was also captured.

INRs

The INR measurement period for each patient started from the initial date of warfarin administration and ended on the date the maximum INR occurred. If a patient had more than 1 INR value on any day, the higher INR value was selected. A day without an INR measurement was defined as no INR value documented for a calendar day within the INR measurement period, starting on the third day of warfarin and ending on the day of the maximum INR level.

Outcomes

The study was performed to assess the association between the number of days on which a patient did not have an INR measured while receiving warfarin and the occurrence of (1) an INR 6.0[20, 21] (intermediate outcome) and (2) a warfarin‐associated adverse event. A description of the MPSMS measure of warfarin‐associated adverse events has been previously published.[2] Warfarin‐associated adverse events must have occurred within 48 hours of predefined triggers: an INR 4.0, cessation of warfarin therapy, administration of vitamin K or fresh frozen plasma, or transfusion of packed red blood cells other than in the setting of a surgical procedure. Warfarin‐associated adverse events were divided into minor and major events for this analysis. Minor events were defined as bleeding, drop in hematocrit of 3 points (occurring more than 48 hours after admission and not associated with surgery), or development of a hematoma. Major events were death, intracranial bleeding, or cardiac arrest. A patient who had both a major and a minor event was considered as having had a major event.

To assess the relationship between a rapidly rising INR and a subsequent INR 5.0 or 6.0, we determined the increase in INR between the measurement done 2 days prior to the maximum INR and 1 day prior to the maximum INR. This analysis was performed only on patients whose INR was 2.0 and 3.5 on the day prior to the maximum INR. In doing so, we sought to determine if the INR rise could predict the occurrence of a subsequent severely elevated INR in patients whose INR was within or near the therapeutic range.

Statistical Analysis

We conducted bivariate analysis to quantify the associations between lapses in measurement of the INR and subsequent warfarin‐associated adverse events, using the Mantel‐Haenszel 2 test for categorical variables. We fitted a generalized linear model with a logit link function to estimate the association of days on which an INR was not measured and the occurrence of the composite adverse event measure or the occurrence of an INR 6.0, adjusting for baseline patient characteristics, the number of days on warfarin, and receipt of heparin and low‐molecular‐weight heparin (LMWH). To account for potential imbalances in baseline patient characteristics and warfarin use prior to admission, we conducted a second analysis using the stabilized inverse probability weights approach. Specifically, we weighted each patient by the patient's inverse propensity scores of having only 1 day, at least 1 day, and at least 2 days without an INR measurement while receiving warfarin.[22, 23, 24, 25] To obtain the propensity scores, we fitted 3 logistic models with all variables included in the above primary mixed models except receipt of LMWH, heparin, and the number of days on warfarin as predictors, but 3 different outcomes, 1 day without an INR measurement, 1 or more days without an INR measurement, and 2 or more days without an INR measurement. Analyses were conducted using SAS version 9.2 (SAS Institute Inc., Cary, NC). All statistical testing was 2‐sided, at a significance level of 0.05. The institutional review board at Solutions IRB (Little Rock, AR) determined that the requirement for informed consent could be waived based on the nature of the study.

RESULTS

There were 130,828 patients included in the 2009 to 2013 MPSMS sample, of whom 19,445 (14.9%) received warfarin during their hospital stay and had at least 1 INR measurement. Among these patients, 5228 (26.9%) had no INR level above 1.5 and were excluded from further analysis, leaving 14,217 included patients. Of these patients, 1055 (7.4%) developed a warfarin‐associated adverse event. Table 1 demonstrates the baseline demographics and comorbidities of the included patients.

Baseline Characteristics and Anticoagulant Exposure of Patients Who Received Warfarin During Their Hospital Stay and Had at Least One INR >1.5
CharacteristicsAcute Cardiovascular Disease, No. (%), N = 6,394Pneumonia, No. (%), N = 3,668Major Surgery, No. (%), N = 4,155All, No. (%), N = 14,217
  • NOTE: Abbreviations: LMWH, low‐molecular‐weight heparin; SD, standard deviation.

Age, mean [SD]75.3 [12.4]74.5 [13.3]69.4 [11.8]73.4 [12.7]
Sex, female3,175 (49.7)1,741 (47.5)2,639 (63.5)7,555 (53.1)
Race    
White5,388 (84.3)3,268 (89.1)3,760 (90.5)12,416 (87.3)
Other1,006 (15.7)400 (10.9)395 (9.5)1,801 (12.7)
Comorbidities    
Cancer1,186 (18.6)939 (25.6)708 (17.0)2,833 (19.9)
Diabetes3,043 (47.6)1,536 (41.9)1,080 (26.0)5,659 (39.8)
Obesity1,938 (30.3)896 (24.4)1,260 (30.3)4,094 (28.8)
Cerebrovascular disease1,664 (26.0)910 (24.8)498 (12.0)3,072 (21.6)
Heart failure/pulmonary edema5,882 (92.0)2,052 (55.9)607 (14.6)8,541 (60.1)
Chronic obstructive pulmonary disease2,636 (41.2)1,929 (52.6)672 (16.2)5,237 (36.8)
Smoking895 (14.0)662 (18.1)623 (15.0)2,180 (15.3)
Corticosteroids490 (7.7)568 (15.5)147 (3.5)1,205 (8.5)
Coronary artery disease4,628 (72.4)1,875 (51.1)1,228 (29.6)7,731 (54.4)
Renal disease3,000 (46.9)1,320 (36.0)565 (13.6)4,885 (34.4)
Warfarin prior to arrival5,074 (79.4)3,020 (82.3)898 (21.6)8,992 (63.3)
Heparin given during hospitalization850 (13.3)282 (7.7)314 (7.6)1,446 (10.7)
LMWH given during hospitalization1,591 (24.9)1,070 (29.2)1,431 (34.4)4,092 (28.8)

Warfarin was started on hospital day 1 for 6825 (48.0%) of 14,217 patients. Among these patients, 6539 (95.8%) had an INR measured within 1 calendar day. We were unable to determine how many patients who started warfarin later in their hospital stay had a baseline INR, as we did not capture INRs performed prior to the day that warfarin was initiated.

Supporting Table 1 in the online version of this article demonstrates the association between an INR 6.0 and the occurrence of warfarin‐associated adverse events. A maximum INR 6.0 occurred in 469 (3.3%) of the patients included in the study, and among those patients, 133 (28.4%) experienced a warfarin‐associated adverse event compared to 922 (6.7%) adverse events in the 13,748 patients who did not develop an INR 6.0 (P < 0.001).

Among 8529 patients who received warfarin for at least 3 days, beginning on the third day of warfarin, 1549 patients (18.2%) did not have INR measured at least once each day that they received warfarin. Table 2 demonstrates that patients who had 2 or more days on which the INR was not measured had higher rates of INR 6.0 than patients for whom the INR was measured daily. A similar association was seen for warfarin‐associated adverse events (Table 2).

Association Between Number of Days Without an INR Measurement and Maximum INR Among Patients Who Received Warfarin for Three Days or More, and Association Between Number of Days Without an INR Measurement and Warfarin‐Associated Adverse Events
 No. of Patients, No. (%), N = 8,529Patients With INR on All Days, No. (%), N = 6,980Patients With 1 Day Without an INR, No. (%), N = 968Patients With 2 or More Days Without an INR, No. (%), N = 581P Value
  • NOTE: Abbreviations: INR, international normalized ratio. *Mantel‐Haenszel 2. Adverse events that occurred greater than 1 calendar day prior to the maximum INR were excluded from this analysis. Because the INR values were only collected until the maximum INR was reached, this means that no adverse events included in this analysis occurred before the last day without an INR measurement.

Maximum INR    <0.01*
1.515.998,1836,748 (96.7)911 (94.1)524 (90.2) 
6.0346232 (3.3)57 (5.9)57 (9.8) 
Warfarin‐associated adverse events    <0.01*
No adverse events7,689 (90.2)6,331 (90.7)872 (90.1)486 (83.6) 
Minor adverse events792 (9.3)617 (8.8)86 (8.9)89 (15.3) 
Major adverse events48 (0.6)32 (0.5)10 (1.0)6 (1.0) 

Figure 1A demonstrates the association between the number of days without an INR measurement and the subsequent development of an INR 6.0 or a warfarin‐associated adverse event, adjusted for baseline patient characteristics, receipt of heparin and LMWH, and number of days on warfarin. Patients with 1 or more days without an INR measurement had higher risk‐adjusted ORs of a subsequent INR 6.0, although the difference was not statistically significant for surgical patients. The analysis results based on inverse propensity scoring are seen in Figure 1B. Cardiac and surgical patients with 2 or more days without an INR measurement were at higher risk of having a warfarin‐associated adverse event, whereas cardiac and pneumonia patients with 1 or more days without an INR measurement were at higher risk of developing an INR 6.0.

Figure 1
(A) Association between number of days without an INR measurement and a subsequent INR ≥6.0 or warfarin‐associated adverse event, adjusted for baseline patient characteristics, receipt of heparin or low molecular weight heparin, and number of days receiving warfarin. (B) Stabilized inverse probability‐weighted propensity‐adjusted association between number of days without an INR measurement and a subsequent INR ≥6.0 or warfarin‐associated adverse event. Abbreviations: INR, international normalized ratio.

Supporting Table 2 in the online version of this article demonstrates the relationship between patient characteristics and the occurrence of an INR 6.0 or a warfarin‐related adverse event. The only characteristic that was associated with either of these outcomes for all 3 patient conditions was renal disease, which was positively associated with a warfarin‐associated adverse event. Warfarin use prior to arrival was associated with lower risks of both an INR 6.0 and a warfarin‐associated adverse event, except for among surgical patients. Supporting Table 3 in the online version of this article demonstrates the differences in patient characteristics between patients who had daily INR measurement and those who had at least 1 day without an INR measurement.

Figure 2 illustrates the relationship of the maximum INR to the prior 1‐day change in INR in 4963 patients whose INR on the day prior to the maximum INR was 2.0 to 3.5. When the increase in INR was <0.9, the risk of the next day's INR being 6.0 was 0.7%, and if the increase was 0.9, the risk was 5.2%. The risk of developing an INR 5.0 was 1.9% if the preceding day's INR increase was <0.9 and 15.3% if the prior day's INR rise was 0.9. Overall, 51% of INRs 6.0 and 55% of INRs 5.0 were immediately preceded by an INR increase of 0.9. The positive likelihood ratio (LR) for a 0.9 rise in INR predicting an INR of 6.0 was 4.2, and the positive LR was 4.9 for predicting an INR 5.0.

Figure 2
Relationship between prior day increase in INR and subsequent maximum INR level. Patients included in this analysis had an INR under 3.5 on the day prior to their maximum INR and a maximum INR ≥2.0. The prior INR increase represents the change in the INR from the previous day, on the day before the maximum INR was reached. Among 3250 patients, 408 (12.6%) had a 1‐day INR increase of ≥0.9. Abbreviations: INR, international normalized ratio.

There was no decline in the frequency of warfarin use among the patients in the MPSMS sample during the study period (16.7% in 2009 and 17.3% in 2013).

DISCUSSION

We studied warfarin‐associated adverse events in a nationally representative study of patients who received warfarin while in an acute care hospital for a primary diagnosis of cardiac disease, pneumonia, or major surgery. Several findings resulted from our analysis. First, warfarin is still commonly prescribed to hospitalized patients and remains a frequent cause of adverse events; 7.4% of the 2009 to 2013 MPSMS population who received warfarin and had at least 1 INR >1.5 developed a warfarin‐associated adverse event.

Over 95% of patients who received warfarin on the day of hospital admission had an INR performed within 1 day. This is similar to the results from a 2006 single center study in which 95% of patients had an INR measured prior to their first dose of warfarin.[10] Since 2008, The Joint Commission's National Patient Safety Goal has required the assessment of coagulation status before starting warfarin.[17] The high level of adherence to this standard suggests that further attention to this process of care is unlikely to significantly improve patient safety.

We also found that the lack of daily INR measurements was associated with an increased risk of an INR 6.0 and warfarin‐associated adverse events in some patient populations. There is limited evidence addressing the appropriate frequency of INR measurement in hospitalized patients receiving warfarin. The Joint Commission National Patient Safety Goal requires use of a current INR to adjust this therapy, but provides no specifics.[17] Although some experts believe that INRs should be monitored daily in hospitalized patients, this does not appear to be uniformly accepted. In some reports, 2[13] or 3[14] consecutive days without the performance of an INR was required to activate a reminder. Protocols from some major teaching hospitals specify intermittent monitoring once the INR is therapeutic.[15, 16] Because our results suggest that lapses in INR measurement lead to overanticoagulation and warfarin‐related adverse events, it may be appropriate to measure INRs daily in most hospitalized patients receiving warfarin. This would be consistent with the many known causes of INR instability in patients admitted to the hospital, including drug‐drug interactions, hepatic dysfunction, and changes in volume of distribution, such that truly stable hospitalized patients are likely rare. Indeed, hospital admission is a well‐known predictor of instability of warfarin effect. [9] Although our results suggest that daily INR measurement is associated with a lower rate of overanticoagulation, future studies might better define lower risk patients for whom daily INR measurement would not be necessary.

A prior INR increase 0.9 in 1 day was associated with an increased risk of subsequent overanticoagulation. Although a rapidly rising INR is known to predict overanticoagulation[10, 14] we could find no evidence as to what specific rate of rise confers this risk. Our results suggest that use of a warfarin dosing protocol that considers both the absolute value of the INR and the rate of rise could reduce warfarin‐related adverse events.

There are important limitations of our study. We did not abstract warfarin dosages, which precluded study of the appropriateness of both initial warfarin dosing and adjustment of the warfarin dose based on INR results. MPSMS does not reliably capture antiplatelet agents or other agents that result in drug‐drug interactions with warfarin, such as antibiotics, so this factor could theoretically have confounded our results. Antibiotic use seems unlikely to be a major confounder, because patients with acute cardiovascular disease demonstrated a similar relationship between INR measurement and an INR 6.0 to that seen with pneumonia and surgical patients, despite the latter patients likely having greater antibiotics exposure. Furthermore, MPSMS does not capture indices of severity of illness, so other unmeasured confounders could have influenced our results. Although we have data for patients admitted to the hospital for only 4 conditions, these are conditions that represent approximately 22% of hospital admissions in the United States.[2] Strengths of our study include the nationally representative and randomly selected cases and use of data that were obtained from chart abstraction as opposed to administrative data. Through the use of centralized data abstraction, we avoided the potential bias introduced when hospitals self‐report adverse events.

In summary, in a national sample of patients admitted to the hospital for 4 common conditions, warfarin‐associated adverse events were detected in 7.4% of patients who received warfarin. Lack of daily INR measurement was associated with an increased risk of overanticoagulation and warfarin‐associated adverse events in certain patient populations. A 1‐day increase in the INR of 0.9 predicted subsequent overanticoagulation. These results provide actionable opportunities to improve safety in some hospitalized patients receiving warfarin.

Acknowledgements

The authors express their appreciation to Dan Budnitz, MD, MPH, for his advice regarding study design and his review and comments on a draft of this manuscript.

Disclosures: This work was supported by contract HHSA290201200003C from the Agency for Healthcare Research and Quality, United States Department of Health and Human Services, Rockville, Maryland. Qualidigm was the contractor. The authors assume full responsibility for the accuracy and completeness of the ideas. Dr. Metersky has worked on various quality improvement and patient safety projects with Qualidigm, Centers for Medicare & Medicaid Services, and the Agency for Healthcare Research and Quality. His employer has received remuneration for this work. Dr. Krumholz works under contract with the Centers for Medicare & Medicaid Services to develop and maintain performance measures. Dr. Krumholz is the chair of a cardiac scientific advisory board for UnitedHealth and the recipient of a research grant from Medtronic, Inc. through Yale University. The other authors report no conflicts of interest.

Warfarin is 1 of the most common causes of adverse drug events, with hospitalized patients being particularly at risk compared to outpatients.[1] Despite the availability of new oral anticoagulants (NOACs), physicians commonly prescribe warfarin to hospitalized patients,[2] likely in part due to the greater difficulty in reversing NOACs compared to warfarin. Furthermore, uptake of the NOACs is likely to be slow in resource‐poor countries due to the lower cost of warfarin.[3] However, the narrow therapeutic index, frequent drug‐drug interactions, and patient variability in metabolism of warfarin makes management challenging.[4] Thus, warfarin remains a significant cause of adverse events in hospitalized patients, occurring in approximately 3% to 8% of exposed patients, depending on underlying condition.[2, 5]

An elevated international normalized ratio (INR) is a strong predictor of drug‐associated adverse events (patient harm). In a study employing 21 different electronic triggers to identify potential adverse events, an elevated INR had the highest yield for events associated with harm (96% of INRs >5.0 associated with harm).[6] Although pharmacist‐managed inpatient anticoagulation services have been shown to improve warfarin management,[7, 8] there are evidence gaps regarding the causes of warfarin‐related adverse events and practice changes that could decrease their frequency. Although overanticoagulation is a well‐known risk factor for warfarin‐related adverse events,[9, 10] there are few evidence‐based warfarin monitoring and dosing recommendations for hospitalized patients.[10] For example, the 2012 American College of Chest Physicians Antithrombotic Guidelines[11] provide a weak recommendation on initial dosing of warfarin, but no recommendations on how frequently to monitor the INR, or appropriate dosing responses to INR levels. Although many hospitals employ protocols that suggest daily INR monitoring until stable, there are no evidence‐based guidelines to support this practice.[12] Conversely, there are reports of flags to order an INR level that are not activated unless greater than 2[13] or 3 days[14] pass since the prior INR. Protocols from some major academic medical centers suggest that after a therapeutic INR is reached, INR levels can be measured intermittently, as infrequently as twice a week.[15, 16]

The 2015 Joint Commission anticoagulant‐focused National Patient Safety Goal[17] (initially issued in 2008) mandates the assessment of baseline coagulation status before starting warfarin, and warfarin dosing based on a current INR; however, current is not defined. Neither the extent to which the mandate for assessing baseline coagulation status is adhered to nor the relationship between this process of care and patient outcomes is known. The importance of adverse drug events associated with anticoagulants, included warfarin, was also recently highlighted in the 2014 federal National Action Plan for Adverse Drug Event Prevention. In this document, the prevention of adverse drug events associated with anticoagulants was 1 of the 3 areas selected for special national attention and action.[18]

The Medicare Patient Safety Monitoring System (MPSMS) is a national chart abstraction‐based system that includes 21 in‐hospital adverse event measures, including warfarin‐associated adverse drug events.[2] Because of the importance of warfarin‐associated bleeding in hospitalized patients, we analyzed MPSMS data to determine what factors related to INR monitoring practices place patients at risk for these events. We were particularly interested in determining if we could detect potentially modifiable predictors of overanticoagulation and warfarin‐associated adverse events.

METHODS

Study Sample

We combined 2009 to 2013 MPSMS all payer data from the Centers for Medicare & Medicaid Services Hospital Inpatient Quality Reporting program for 4 common medical conditions: (1) acute myocardial infarction, (2) heart failure, (3) pneumonia, and (4) major surgery (as defined by the national Surgical Care Improvement Project).[19] To increase the sample size for cardiac patients, we combined myocardial infarction patients and heart failure patients into 1 group: acute cardiovascular disease. Patients under 18 years of age are excluded from the MPSMS sample, and we excluded patients whose INR never exceeded 1.5 after the initiation of warfarin therapy.

Patient Characteristics

Patient characteristics included demographics (age, sex, race [white, black, and other race]) and comorbidities. Comorbidities abstracted from medical records included: histories at the time of hospital admission of heart failure, obesity, coronary artery disease, renal disease, cerebrovascular disease, chronic obstructive pulmonary disease, cancer, diabetes, and smoking. The use of anticoagulants other than warfarin was also captured.

INRs

The INR measurement period for each patient started from the initial date of warfarin administration and ended on the date the maximum INR occurred. If a patient had more than 1 INR value on any day, the higher INR value was selected. A day without an INR measurement was defined as no INR value documented for a calendar day within the INR measurement period, starting on the third day of warfarin and ending on the day of the maximum INR level.

Outcomes

The study was performed to assess the association between the number of days on which a patient did not have an INR measured while receiving warfarin and the occurrence of (1) an INR 6.0[20, 21] (intermediate outcome) and (2) a warfarin‐associated adverse event. A description of the MPSMS measure of warfarin‐associated adverse events has been previously published.[2] Warfarin‐associated adverse events must have occurred within 48 hours of predefined triggers: an INR 4.0, cessation of warfarin therapy, administration of vitamin K or fresh frozen plasma, or transfusion of packed red blood cells other than in the setting of a surgical procedure. Warfarin‐associated adverse events were divided into minor and major events for this analysis. Minor events were defined as bleeding, drop in hematocrit of 3 points (occurring more than 48 hours after admission and not associated with surgery), or development of a hematoma. Major events were death, intracranial bleeding, or cardiac arrest. A patient who had both a major and a minor event was considered as having had a major event.

To assess the relationship between a rapidly rising INR and a subsequent INR 5.0 or 6.0, we determined the increase in INR between the measurement done 2 days prior to the maximum INR and 1 day prior to the maximum INR. This analysis was performed only on patients whose INR was 2.0 and 3.5 on the day prior to the maximum INR. In doing so, we sought to determine if the INR rise could predict the occurrence of a subsequent severely elevated INR in patients whose INR was within or near the therapeutic range.

Statistical Analysis

We conducted bivariate analysis to quantify the associations between lapses in measurement of the INR and subsequent warfarin‐associated adverse events, using the Mantel‐Haenszel χ² test for categorical variables. We fitted a generalized linear model with a logit link function to estimate the association between the number of days on which an INR was not measured and the occurrence of the composite adverse event measure or of an INR ≥6.0, adjusting for baseline patient characteristics, the number of days on warfarin, and receipt of heparin and low‐molecular‐weight heparin (LMWH). To account for potential imbalances in baseline patient characteristics and warfarin use prior to admission, we conducted a second analysis using the stabilized inverse probability weights approach. Specifically, we weighted each patient by the patient's inverse propensity scores of having only 1 day, at least 1 day, and at least 2 days without an INR measurement while receiving warfarin.[22, 23, 24, 25] To obtain the propensity scores, we fitted 3 logistic models that used all variables included in the primary mixed models above (except receipt of LMWH, heparin, and the number of days on warfarin) as predictors, but with 3 different outcomes: 1 day without an INR measurement, 1 or more days without an INR measurement, and 2 or more days without an INR measurement. Analyses were conducted using SAS version 9.2 (SAS Institute Inc., Cary, NC). All statistical testing was 2‐sided, at a significance level of 0.05. The institutional review board at Solutions IRB (Little Rock, AR) determined that the requirement for informed consent could be waived based on the nature of the study.
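As an illustration of the stabilized inverse probability weighting step, a minimal sketch is shown below, assuming a pandas DataFrame with one row per patient, a binary exposure column (e.g., 1 or more days without an INR measurement), and baseline covariate columns; the column names and the use of scikit-learn are illustrative and do not reproduce the SAS implementation used in the analysis.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def stabilized_ipw(df: pd.DataFrame, exposure: str, covariates: list[str]) -> pd.Series:
    """Stabilized inverse probability weights for a binary exposure."""
    X = df[covariates].to_numpy()
    z = df[exposure].to_numpy()
    # Propensity score: probability of the exposure given baseline covariates.
    ps = LogisticRegression(max_iter=1000).fit(X, z).predict_proba(X)[:, 1]
    p_marginal = z.mean()
    # Numerator uses the marginal exposure probability to stabilize the weights.
    numerator = np.where(z == 1, p_marginal, 1.0 - p_marginal)
    denominator = np.where(z == 1, ps, 1.0 - ps)
    return pd.Series(numerator / denominator, index=df.index, name="sipw")

# The resulting weights would then be supplied to a weighted logistic outcome
# model (e.g., a statsmodels GLM with freq_weights) relating lapses in INR
# measurement to an INR >=6.0 or a warfarin-associated adverse event.
```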

RESULTS

There were 130,828 patients included in the 2009 to 2013 MPSMS sample, of whom 19,445 (14.9%) received warfarin during their hospital stay and had at least 1 INR measurement. Among these patients, 5228 (26.9%) had no INR level above 1.5 and were excluded from further analysis, leaving 14,217 included patients. Of these patients, 1055 (7.4%) developed a warfarin‐associated adverse event. Table 1 demonstrates the baseline demographics and comorbidities of the included patients.

Baseline Characteristics and Anticoagulant Exposure of Patients Who Received Warfarin During Their Hospital Stay and Had at Least One INR >1.5
| Characteristics | Acute Cardiovascular Disease, No. (%), N = 6,394 | Pneumonia, No. (%), N = 3,668 | Major Surgery, No. (%), N = 4,155 | All, No. (%), N = 14,217 |
|---|---|---|---|---|
| Age, mean [SD] | 75.3 [12.4] | 74.5 [13.3] | 69.4 [11.8] | 73.4 [12.7] |
| Sex, female | 3,175 (49.7) | 1,741 (47.5) | 2,639 (63.5) | 7,555 (53.1) |
| Race | | | | |
| White | 5,388 (84.3) | 3,268 (89.1) | 3,760 (90.5) | 12,416 (87.3) |
| Other | 1,006 (15.7) | 400 (10.9) | 395 (9.5) | 1,801 (12.7) |
| Comorbidities | | | | |
| Cancer | 1,186 (18.6) | 939 (25.6) | 708 (17.0) | 2,833 (19.9) |
| Diabetes | 3,043 (47.6) | 1,536 (41.9) | 1,080 (26.0) | 5,659 (39.8) |
| Obesity | 1,938 (30.3) | 896 (24.4) | 1,260 (30.3) | 4,094 (28.8) |
| Cerebrovascular disease | 1,664 (26.0) | 910 (24.8) | 498 (12.0) | 3,072 (21.6) |
| Heart failure/pulmonary edema | 5,882 (92.0) | 2,052 (55.9) | 607 (14.6) | 8,541 (60.1) |
| Chronic obstructive pulmonary disease | 2,636 (41.2) | 1,929 (52.6) | 672 (16.2) | 5,237 (36.8) |
| Smoking | 895 (14.0) | 662 (18.1) | 623 (15.0) | 2,180 (15.3) |
| Corticosteroids | 490 (7.7) | 568 (15.5) | 147 (3.5) | 1,205 (8.5) |
| Coronary artery disease | 4,628 (72.4) | 1,875 (51.1) | 1,228 (29.6) | 7,731 (54.4) |
| Renal disease | 3,000 (46.9) | 1,320 (36.0) | 565 (13.6) | 4,885 (34.4) |
| Warfarin prior to arrival | 5,074 (79.4) | 3,020 (82.3) | 898 (21.6) | 8,992 (63.3) |
| Heparin given during hospitalization | 850 (13.3) | 282 (7.7) | 314 (7.6) | 1,446 (10.7) |
| LMWH given during hospitalization | 1,591 (24.9) | 1,070 (29.2) | 1,431 (34.4) | 4,092 (28.8) |

NOTE: Abbreviations: LMWH, low‐molecular‐weight heparin; SD, standard deviation.

Warfarin was started on hospital day 1 for 6825 (48.0%) of 14,217 patients. Among these patients, 6539 (95.8%) had an INR measured within 1 calendar day. We were unable to determine how many patients who started warfarin later in their hospital stay had a baseline INR, as we did not capture INRs performed prior to the day that warfarin was initiated.

Supporting Table 1 in the online version of this article demonstrates the association between an INR ≥6.0 and the occurrence of warfarin‐associated adverse events. A maximum INR ≥6.0 occurred in 469 (3.3%) of the patients included in the study, and among those patients, 133 (28.4%) experienced a warfarin‐associated adverse event, compared to 922 (6.7%) adverse events in the 13,748 patients who did not develop an INR ≥6.0 (P < 0.001).

Among 8529 patients who received warfarin for at least 3 days, 1549 (18.2%) had at least 1 day, beginning with the third day of warfarin therapy, on which an INR was not measured. Table 2 demonstrates that patients who had 2 or more days on which the INR was not measured had higher rates of an INR ≥6.0 than patients for whom the INR was measured daily. A similar association was seen for warfarin‐associated adverse events (Table 2).

Association Between Number of Days Without an INR Measurement and Maximum INR Among Patients Who Received Warfarin for Three Days or More, and Association Between Number of Days Without an INR Measurement and Warfarin‐Associated Adverse Events
| | No. of Patients, No. (%), N = 8,529 | Patients With INR on All Days, No. (%), N = 6,980 | Patients With 1 Day Without an INR, No. (%), N = 968 | Patients With 2 or More Days Without an INR, No. (%), N = 581 | P Value |
|---|---|---|---|---|---|
| Maximum INR | | | | | <0.01* |
| 1.51–5.99 | 8,183 | 6,748 (96.7) | 911 (94.1) | 524 (90.2) | |
| ≥6.0 | 346 | 232 (3.3) | 57 (5.9) | 57 (9.8) | |
| Warfarin‐associated adverse events | | | | | <0.01* |
| No adverse events | 7,689 (90.2) | 6,331 (90.7) | 872 (90.1) | 486 (83.6) | |
| Minor adverse events | 792 (9.3) | 617 (8.8) | 86 (8.9) | 89 (15.3) | |
| Major adverse events | 48 (0.6) | 32 (0.5) | 10 (1.0) | 6 (1.0) | |

NOTE: Abbreviations: INR, international normalized ratio. *Mantel‐Haenszel χ². Adverse events that occurred greater than 1 calendar day prior to the maximum INR were excluded from this analysis. Because the INR values were only collected until the maximum INR was reached, this means that no adverse events included in this analysis occurred before the last day without an INR measurement.

Figure 1A demonstrates the association between the number of days without an INR measurement and the subsequent development of an INR ≥6.0 or a warfarin‐associated adverse event, adjusted for baseline patient characteristics, receipt of heparin and LMWH, and number of days on warfarin. Patients with 1 or more days without an INR measurement had higher risk‐adjusted odds ratios (ORs) of a subsequent INR ≥6.0, although the difference was not statistically significant for surgical patients. The analysis results based on inverse propensity scoring are seen in Figure 1B. Cardiac and surgical patients with 2 or more days without an INR measurement were at higher risk of having a warfarin‐associated adverse event, whereas cardiac and pneumonia patients with 1 or more days without an INR measurement were at higher risk of developing an INR ≥6.0.

Figure 1
(A) Association between number of days without an INR measurement and a subsequent INR ≥6.0 or warfarin‐associated adverse event, adjusted for baseline patient characteristics, receipt of heparin or low molecular weight heparin, and number of days receiving warfarin. (B) Stabilized inverse probability‐weighted propensity‐adjusted association between number of days without an INR measurement and a subsequent INR ≥6.0 or warfarin‐associated adverse event. Abbreviations: INR, international normalized ratio.

Supporting Table 2 in the online version of this article demonstrates the relationship between patient characteristics and the occurrence of an INR ≥6.0 or a warfarin‐related adverse event. The only characteristic that was associated with either of these outcomes for all 3 patient conditions was renal disease, which was positively associated with a warfarin‐associated adverse event. Warfarin use prior to arrival was associated with lower risks of both an INR ≥6.0 and a warfarin‐associated adverse event, except among surgical patients. Supporting Table 3 in the online version of this article demonstrates the differences in patient characteristics between patients who had daily INR measurement and those who had at least 1 day without an INR measurement.

Figure 2 illustrates the relationship of the maximum INR to the prior 1‐day change in INR in 4963 patients whose INR on the day prior to the maximum INR was 2.0 to 3.5. When the increase in INR was <0.9, the risk of the next day's INR being ≥6.0 was 0.7%, and if the increase was ≥0.9, the risk was 5.2%. The risk of developing an INR ≥5.0 was 1.9% if the preceding day's INR increase was <0.9 and 15.3% if the prior day's INR rise was ≥0.9. Overall, 51% of INRs ≥6.0 and 55% of INRs ≥5.0 were immediately preceded by an INR increase of ≥0.9. The positive likelihood ratio (LR) for a ≥0.9 rise in INR predicting an INR ≥6.0 was 4.2, and the positive LR was 4.9 for predicting an INR ≥5.0.

Figure 2
Relationship between prior day increase in INR and subsequent maximum INR level. Patients included in this analysis had an INR under 3.5 on the day prior to their maximum INR and a maximum INR ≥2.0. The prior INR increase represents the change in the INR from the previous day, on the day before the maximum INR was reached. Among 3250 patients, 408 (12.6%) had a 1‐day INR increase of ≥0.9. Abbreviations: INR, international normalized ratio.
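As a rough consistency check on the likelihood ratios reported above (treating the reported proportions as approximations, since patients reaching an INR ≥6.0 make up only a small share of the cohort), the positive LR follows from the usual definition, sensitivity divided by the false‐positive rate; the values below are rounded from the reported results and are illustrative only.

```python
# Illustrative arithmetic only; inputs are rounded from the reported results.
sensitivity = 0.51           # share of INRs >=6.0 preceded by a >=0.9 one-day rise
false_positive_rate = 0.12   # approximate share of remaining patients with such a rise
lr_positive = sensitivity / false_positive_rate
print(round(lr_positive, 1))  # ~4.2, in line with the reported positive LR
```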

There was no decline in the frequency of warfarin use among the patients in the MPSMS sample during the study period (16.7% in 2009 and 17.3% in 2013).

DISCUSSION

We studied warfarin‐associated adverse events in a nationally representative study of patients who received warfarin while in an acute care hospital for a primary diagnosis of cardiac disease, pneumonia, or major surgery. Several findings resulted from our analysis. First, warfarin is still commonly prescribed to hospitalized patients and remains a frequent cause of adverse events; 7.4% of the 2009 to 2013 MPSMS population who received warfarin and had at least 1 INR >1.5 developed a warfarin‐associated adverse event.

Over 95% of patients who received warfarin on the day of hospital admission had an INR performed within 1 day. This is similar to the results from a 2006 single center study in which 95% of patients had an INR measured prior to their first dose of warfarin.[10] Since 2008, The Joint Commission's National Patient Safety Goal has required the assessment of coagulation status before starting warfarin.[17] The high level of adherence to this standard suggests that further attention to this process of care is unlikely to significantly improve patient safety.

We also found that the lack of daily INR measurements was associated with an increased risk of an INR ≥6.0 and warfarin‐associated adverse events in some patient populations. There is limited evidence addressing the appropriate frequency of INR measurement in hospitalized patients receiving warfarin. The Joint Commission National Patient Safety Goal requires use of a current INR to adjust this therapy, but provides no specifics.[17] Although some experts believe that INRs should be monitored daily in hospitalized patients, this does not appear to be uniformly accepted. In some reports, 2[13] or 3[14] consecutive days without the performance of an INR was required to activate a reminder. Protocols from some major teaching hospitals specify intermittent monitoring once the INR is therapeutic.[15, 16] Because our results suggest that lapses in INR measurement lead to overanticoagulation and warfarin‐related adverse events, it may be appropriate to measure INRs daily in most hospitalized patients receiving warfarin. This would be consistent with the many known causes of INR instability in patients admitted to the hospital, including drug‐drug interactions, hepatic dysfunction, and changes in volume of distribution, such that truly stable hospitalized patients are likely rare. Indeed, hospital admission is a well‐known predictor of instability of warfarin effect.[9] Although our results suggest that daily INR measurement is associated with a lower rate of overanticoagulation, future studies might better define lower‐risk patients for whom daily INR measurement would not be necessary.

A prior INR increase ≥0.9 in 1 day was associated with an increased risk of subsequent overanticoagulation. Although a rapidly rising INR is known to predict overanticoagulation,[10, 14] we could find no evidence as to what specific rate of rise confers this risk. Our results suggest that use of a warfarin dosing protocol that considers both the absolute value of the INR and the rate of rise could reduce warfarin‐related adverse events.

There are important limitations of our study. We did not abstract warfarin dosages, which precluded study of the appropriateness of both initial warfarin dosing and adjustment of the warfarin dose based on INR results. MPSMS does not reliably capture antiplatelet agents or other agents that result in drug‐drug interactions with warfarin, such as antibiotics, so these factors could theoretically have confounded our results. Antibiotic use seems unlikely to be a major confounder, because patients with acute cardiovascular disease demonstrated a similar relationship between INR measurement and an INR ≥6.0 to that seen with pneumonia and surgical patients, despite the latter patients likely having greater antibiotic exposure. Furthermore, MPSMS does not capture indices of severity of illness, so other unmeasured confounders could have influenced our results. Although we have data for patients admitted to the hospital for only 4 conditions, these are conditions that represent approximately 22% of hospital admissions in the United States.[2] Strengths of our study include the nationally representative and randomly selected cases and use of data that were obtained from chart abstraction as opposed to administrative data. Through the use of centralized data abstraction, we avoided the potential bias introduced when hospitals self‐report adverse events.

In summary, in a national sample of patients admitted to the hospital for 4 common conditions, warfarin‐associated adverse events were detected in 7.4% of patients who received warfarin. Lack of daily INR measurement was associated with an increased risk of overanticoagulation and warfarin‐associated adverse events in certain patient populations. A 1‐day increase in the INR of ≥0.9 predicted subsequent overanticoagulation. These results provide actionable opportunities to improve safety in some hospitalized patients receiving warfarin.

Acknowledgements

The authors express their appreciation to Dan Budnitz, MD, MPH, for his advice regarding study design and his review and comments on a draft of this manuscript.

Disclosures: This work was supported by contract HHSA290201200003C from the Agency for Healthcare Research and Quality, United States Department of Health and Human Services, Rockville, Maryland. Qualidigm was the contractor. The authors assume full responsibility for the accuracy and completeness of the ideas. Dr. Metersky has worked on various quality improvement and patient safety projects with Qualidigm, Centers for Medicare & Medicaid Services, and the Agency for Healthcare Research and Quality. His employer has received remuneration for this work. Dr. Krumholz works under contract with the Centers for Medicare & Medicaid Services to develop and maintain performance measures. Dr. Krumholz is the chair of a cardiac scientific advisory board for UnitedHealth and the recipient of a research grant from Medtronic, Inc. through Yale University. The other authors report no conflicts of interest.

References
  1. Nutescu EA, Wittkowsky AK, Burnett A, Merli GJ, Ansell JE, Garcia DA. Delivery of optimized inpatient anticoagulation therapy: consensus statement from the anticoagulation forum. Ann Pharmacother. 2013;47:714–724.
  2. Wang Y, Eldridge N, Metersky ML, et al. National trends in patient safety for four common conditions, 2005–2011. N Engl J Med. 2014;370:341–351.
  3. Eikelboom JW, Weitz JI. Update on antithrombotic therapy: new anticoagulants. Circulation. 2010;121:1523–1532.
  4. Voora D, McLeod HL, Eby C, Gage BF. The pharmacogenetics of coumarin therapy. Pharmacogenomics. 2005;6:503–513.
  5. Classen DC, Jaser L, Budnitz DS. Adverse drug events among hospitalized Medicare patients: epidemiology and national estimates from a new approach to surveillance. Jt Comm J Qual Patient Saf. 2010;36:12–21.
  6. Szekendi MK, Sullivan C, Bobb A, et al. Active surveillance using electronic triggers to detect adverse events in hospitalized patients. Qual Saf Health Care. 2006;15:184–190.
  7. Dawson NL, Porter IE, Klipa D, et al. Inpatient warfarin management: pharmacist management using a detailed dosing protocol. J Thromb Thrombolysis. 2012;33:178–184.
  8. Wong YM, Quek YN, Tay JC, Chadachan V, Lee HK. Efficacy and safety of a pharmacist‐managed inpatient anticoagulation service for warfarin initiation and titration. J Clin Pharm Ther. 2011;36:585–591.
  9. Palareti G, Leali N, Coccheri S, et al. Bleeding complications of oral anticoagulant treatment: an inception‐cohort, prospective collaborative study (ISCOAT). Italian Study on Complications of Oral Anticoagulant Therapy. Lancet. 1996;348:423–428.
  10. Dawson NL, Klipa D, O'Brien AK, Crook JE, Cucchi MW, Valentino AK. Oral anticoagulation in the hospital: analysis of patients at risk. J Thromb Thrombolysis. 2011;31:22–26.
  11. Holbrook A, Schulman S, Witt DM, et al. Evidence‐based management of anticoagulant therapy: Antithrombotic Therapy and Prevention of Thrombosis, 9th ed: American College of Chest Physicians Evidence‐Based Clinical Practice Guidelines. Chest. 2012;141:e152S–e184S.
  12. Agency for Healthcare Research and Quality. National Guideline Clearinghouse. Available at: http://www.guideline.gov. Accessed April 30, 2015.
  13. Lederer J, Best D. Reduction in anticoagulation‐related adverse drug events using a trigger‐based methodology. Jt Comm J Qual Patient Saf. 2005;31:313–318.
  14. Hartis CE, Gum MO, Lederer JW. Use of specific indicators to detect warfarin‐related adverse events. Am J Health Syst Pharm. 2005;62:1683–1688.
  15. University of Wisconsin Health. Warfarin management–adult–inpatient clinical practice guideline. Available at: http://www.uwhealth.org/files/uwhealth/docs/pdf3/Inpatient_Warfarin_Guideline.pdf. Accessed April 30, 2015.
  16. Anticoagulation Guidelines ‐ LSU Health Shreveport. Available at: http://myhsc.lsuhscshreveport.edu/pharmacy/PT%20Policies/Anticoagulation_Safety.pdf. Accessed November 29, 2015.
  17. The Joint Commission. National patient safety goals effective January 1, 2015. Available at: http://www.jointcommission.org/assets/1/6/2015_NPSG_HAP.pdf. Accessed November 29, 2015.
  18. U.S. Department of Health and Human Services. Office of Disease Prevention and Health Promotion. Available at: http://health.gov/hcq/pdfs/ade-action-plan-508c.pdf. Accessed November 29, 2015.
  19. The Joint Commission. Surgical care improvement project. Available at: http://www.jointcommission.org/surgical_care_improvement_project. Accessed May 5, 2015.
  20. Dager WE, Branch JM, King JH, et al. Optimization of inpatient warfarin therapy: impact of daily consultation by a pharmacist‐managed anticoagulation service. Ann Pharmacother. 2000;34:567–572.
  21. Hammerquist RJ, Gulseth MP, Stewart DW. Effects of requiring a baseline International Normalized Ratio for inpatients treated with warfarin. Am J Health Syst Pharm. 2010;67:17–22.
  22. Freedman DA, Berk RA. Weighting regressions by propensity scores. Eval Rev. 2008;32:392–409.
  23. Austin PC. An introduction to propensity score methods for reducing the effects of confounding in observational studies. Multivar Behav Res. 2011;46:399–424.
  24. D'Agostino RB. Propensity score methods for bias reduction in the comparison of a treatment to a non‐randomized control group. Stat Med. 1998;17:2265–2281.
  25. Rosenbaum P, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika. 1983;70:41–55.
Issue
Journal of Hospital Medicine - 11(4)
Page Number
276-282
Article Source
© 2015 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Mark L. Metersky, MD, Division of Pulmonary and Critical Care Medicine, University of Connecticut School of Medicine, 263 Farmington Avenue, Farmington, CT 06030‐1321; Telephone: 860‐679‐3582; Fax: 860‐679‐1103; E‐mail: metersky@uchc.edu

Treatment and Outcomes of SPFD

Display Headline
Treatment of single peripheral pulmonary emboli: Patient outcomes and factors associated with decision to treat

Over the past decade, the use of chest computed tomography scans with pulmonary angiography (CTPA) for diagnosis of pulmonary embolism (PE) has soared due to the ease of acquisition, the desire for the additional information that CT scanning may provide, and heightened sensitivity to medical liability.[1, 2, 3, 4, 5, 6] In parallel with this shift, the incidence of PE has nearly doubled, increasing from 62 per 100,000 to 112 per 100,000 between 1993 and 2006, despite no recorded increase in the pretest probability of the disease.[6] One major explanation for this increase is that the improvement in CTPA resolution has enabled radiologists to identify more small peripheral (ie, segmental and subsegmental) filling defects. When confronted with the finding of a small peripheral filling defect on CTPA, clinicians often face a management quandary. Case series and retrospective series on outcomes of these patients do not support treatment, but they are limited by having small numbers of patients; the largest examined 93 patients and provided no insight into the treatment decision.[7] Uncertainty exists, furthermore, about the pathologic meaning of small peripheral filling defects.[8] Clinicians must weigh these arguments and the risk of anticoagulation against concerns about the consequences of untreated pulmonary thromboemboli. More information is needed, therefore, on the outcomes of patients with peripheral filling defects, and on variables impacting the treatment decision, in order to help clinicians manage these patients.[9]

In this study, we analyzed cases of patients with a single peripheral filling defect (SPFD). We chose to look at patients with a SPFD because they represent the starkest decision‐making treatment dilemma and are not infrequent. We assessed the 90‐day mortality and rate of postdischarge venous thromboembolism (VTE) of treated and untreated patients and identified characteristics of treated and untreated patients with a SPFD. We wished to determine the incidence of SPFD among patients evaluated with CTPA and to determine how often the defect is called a PE by the radiologist. We also aimed to determine what role secondary studies play in helping to clarify the diagnosis and management of SPFD and to identify other factors that may influence the decision to treat patients with this finding.

METHODS

Site

This retrospective cohort study was conducted at a community hospital in Norwalk, CT. The hospital is a 328‐bed, not‐for‐profit, acute‐care community teaching hospital that serves a population of 250,000 in lower Fairfield County, Connecticut, and is affiliated with the Yale School of Medicine.

Subjects

The reports of all CTPAs done over a 66‐month period from 2006 to 2010 were individually reviewed. Any study that had a filling defect reported in the body of the radiology report was selected for initial consideration. A second round of review was conducted, extracting only CTPAs with a SPFD for study inclusion. We then excluded from the primary analysis those studies in which the patient had a concurrently positive lower‐extremity ultrasound, the medical records could not be located, or the patient was <18 years of age. The study was approved by the institutional review board of the hospital.

Radiographic Methods

The CTPAs were performed using the SOMATOM Definition scanner, a 128‐slice CT scanner with 0.5‐cm collimation (Siemens, Erlangen, Germany). The CT‐scanner technology did not change over the 66 months of the study period.

Data Collection

Clinical data were abstracted from the physical charts and from the computerized practitioner order‐entry system (PowerChart electronic medical record system; Cerner Corp, Kansas City, MO). Three abstractors were trained in the process of chart abstraction using training sets of 10 records. The Fleiss κ was used to assess concordance. The Fleiss κ was 0.6 for the initial training set, and after 3 training sets it improved to 0.9. In‐hospital all‐cause mortality was determined using the hospital death records, and out‐of‐hospital mortality data were obtained from the online statewide death records.[10] Postdischarge VTE was assessed by interrogating the hospital radiology database for repeat ventilation‐perfusion scan, conventional pulmonary angiography, lower‐limb compression ultrasound (CUS) or CTPA studies that were positive within 90 days of the index event. Treatment was defined as either anticoagulation, ascertained from the medication list at discharge, or inferior vena cava (IVC) filter placement, documented at the index visit.
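For readers unfamiliar with the agreement statistic, the sketch below shows one way abstractor concordance of this kind can be checked using statsmodels; the toy ratings matrix is hypothetical and is not the study's training data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical audit: rows are charts in a training set, columns are the
# 3 abstractors, and entries are the code each abstractor assigned (0/1).
ratings = np.array([
    [1, 1, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
    [1, 0, 1],
])
table, _ = aggregate_raters(ratings)   # charts x categories count matrix
print(round(fleiss_kappa(table), 2))   # chance-corrected agreement across abstractors
```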

To better understand the variation in interpretation of SPFD, all CTPA studies that showed a SPFD were also over‐read by 2 radiologists who reached a consensus opinion regarding whether the finding was a PE. The radiologists who over‐read the studies were blinded to the final impression of the initial radiologist. Our study group comprised 3 radiologists; 1 read <20% of the initial studies and the other 2 had no input in the initial readings. One of the radiologists was an attending and the other 2 were fourth‐year radiology residents.

Baseline Variables and Outcome Measures

A peripheral filling defect was defined as a single filling defect located in either a segmental or subsegmental pulmonary artery. The primary variables of interest were patient demographics (age, sex, and race), insurance status, the presence of pulmonary input in the management of the patient, history of comorbid conditions (prior VTE, congestive heart failure, chronic lung disease, pulmonary hypertension, coronary artery disease, surgery within the last 6 months, active malignancy, and acute pulmonary edema or syncope at presentation), and risk class as assessed by the Pulmonary Embolism Severity Index (PESI) score.[11] The PESI scoring system is a risk‐stratification tool for patients with acute PE. It uses 11 prognostic variables to predict in‐hospital and all‐cause mortality: age, sex, heart rate ≥110 bpm, systolic blood pressure <90 mm Hg, congestive heart failure, presence of malignancy, chronic lung disease, respiratory rate ≥30/minute, temperature <36°C, altered mental status, and oxygen saturation <90%. Additional variables of interest were the proportion of patients in the treated and untreated arms who had a pulmonary consultation at the index visit and the role, if any, of a second test for VTE at the index visit. The primary outcomes investigated were all‐cause 90‐day mortality and 90‐day incidence of postdischarge VTE from the index visit in the treated and untreated groups. Those patients whose studies had a SPFD that was concluded by the initial radiologist to be a PE on the final impression of the report were analyzed as a subgroup.
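A minimal sketch of how a PESI total and risk class might be computed is shown below. The point values are those commonly cited from the original PESI derivation, not abstracted from this study, and both the points and the thresholds should be verified against reference 11 before any reuse.

```python
def pesi_class(age, male, cancer, heart_failure, chronic_lung_disease,
               pulse_ge_110, low_systolic_bp, rr_ge_30, temp_lt_36,
               altered_mental_status, o2_sat_lt_90) -> str:
    """Illustrative PESI scoring; point values as commonly published, not
    taken from this article; verify against the cited source before use."""
    score = age                                   # age in years contributes its own value
    score += 10 if male else 0
    score += 30 if cancer else 0
    score += 10 if heart_failure else 0
    score += 10 if chronic_lung_disease else 0
    score += 20 if pulse_ge_110 else 0
    score += 30 if low_systolic_bp else 0         # systolic blood pressure below threshold
    score += 20 if rr_ge_30 else 0
    score += 20 if temp_lt_36 else 0
    score += 60 if altered_mental_status else 0
    score += 20 if o2_sat_lt_90 else 0
    for upper, label in ((65, "I"), (85, "II"), (105, "III"), (125, "IV")):
        if score <= upper:
            return label
    return "V"

# Example (illustrative only): a 70-year-old woman with heart failure and an
# oxygen saturation <90% scores 70 + 10 + 20 = 100, placing her in class III.
```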

Statistical Analysis

Bivariate analysis was conducted to compare patient baseline characteristics between treated and untreated groups. The χ² test was used for comparing binary or categorical variables and the Student t test was used for comparing continuous variables. A logistic regression model utilizing the Markov chain Monte Carlo (MCMC) method was employed for assessing the differences in 90‐day mortality and 90‐day postdischarge VTE between the treated group and untreated group, adjusting for patient baseline characteristics. This model was also used for identifying factors associated with the decision to treat. We reported the odds ratio (OR) and its corresponding 95% confidence interval (CI) for each estimate identified from the model. All analyses were conducted using SAS version 9.3 64‐bit software (SAS Institute Inc, Cary, NC).
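The analysis itself was run in SAS; purely as an illustration of an MCMC‐fitted logistic model of the kind described, a small sketch using PyMC with simulated data is shown below. The variable names, priors, and sampler settings are assumptions, not the authors' specification.

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n, p = 134, 4
X = rng.normal(size=(n, p))            # standardized baseline covariates (simulated)
treated = rng.integers(0, 2, size=n)   # 0/1 exposure: any treatment of the SPFD
died_90d = rng.integers(0, 2, size=n)  # 0/1 outcome: 90-day all-cause mortality

with pm.Model():
    alpha = pm.Normal("alpha", mu=0, sigma=2.5)
    beta = pm.Normal("beta", mu=0, sigma=2.5, shape=p)
    theta = pm.Normal("theta", mu=0, sigma=2.5)      # adjusted log-odds ratio for treatment
    logit_p = alpha + pm.math.dot(X, beta) + theta * treated
    pm.Bernoulli("y", logit_p=logit_p, observed=died_90d)
    idata = pm.sample(2000, tune=1000, chains=2, random_seed=0)  # MCMC posterior draws

# The posterior of exp(theta) plays the role of the adjusted odds ratio reported
# in the tables, with a 95% credible interval in place of the 95% CI.
```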

RESULTS

A total of 4906 CTPAs were screened during the 66 months reviewed, identifying 518 (10.6%) with any filling defect and 153 (3.1%) with a SPFD. Thirteen patients were excluded from the primary analysis because their records could not be located, and another 6 were excluded because they had a concurrently positive CUS. The primary analysis was performed, therefore, with 134 patients. The inpatient service ordered 78% of the CTPAs. The initial radiologist stated in the impression section of the report that a PE was present in 99 of 134 (73.9%) studies. On over‐read of the 134 studies, 100 of these were considered to be positive for a PE. There was modest agreement between the initial impression and the consensus impression at over‐read (κ = 0.69).

Association of Treatment With Mortality and Recurrence

In the primary‐analysis group, 61 (45.5%) patients were treated: 50 patients had warfarin alone, 10 patients had an IVC filter alone, and 1 patient had both warfarin and an IVC filter. No patient was treated solely with low‐molecular‐weight heparin long‐term. Whenever low‐molecular‐weight heparin was used, it was as a bridge to warfarin. The characteristics of the patients in the treatment groups were similar (Table 1). Four of the treated patients had a CTPA with SPFD that was not called a PE in the initial reading. Ten patients died, 5 each in the treated and untreated groups, yielding an overall mortality rate at 90 days of 7.4% (Table 2). Analysis of the 134 patients showed no difference in adjusted 90‐day mortality between treated and untreated groups (OR: 1.0, 95% CI: 0.25‐3.98). The number of patients with postdischarge VTE within 90 days was 5 of 134 (3.7%) patients, 3 treated and 2 untreated, and too few to show a treatment effect. Among the 99 cases considered by the initial radiologist to be definite for a PE, 59 (59.6%) were treated and 40 (40.4%) untreated. In this subgroup, no mortality benefit was observed with treatment (OR: 1.42, 95% CI: 0.28‐8.05).

Baseline Characteristics of Treated and Untreated Patients With Single Peripheral Filling Defects
| Characteristic | Treated, n=61 | Untreated, n=73 | P Value |
|---|---|---|---|
| Age, y, mean (SD) | 67 (20) | 62 (21) | 0.056 |
| Sex, M | 29 (48) | 34 (47) | 0.831 |
| Race/ethnicity | | | 0.426 |
| White | 43 (70) | 57 (78) | |
| Black | 12 (20) | 8 (11) | |
| Hispanic | 6 (10) | 7 (10) | |
| Other | 0 | 1 (2) | |
| Primary insurance | | | 0.231 |
| Medicare | 30 (50) | 29 (40) | |
| Medicaid | 2 (3) | 8 (11) | |
| Commercial | 27 (44) | 30 (41) | |
| Self‐pay | 2 (3) | 6 (8) | |
| Pulmonary consultation | 29 (48) | 28 (38) | 0.482 |
| Comorbid illnesses | | | 0.119 |
| Cancer^a | 13 (21) | 17 (23) | |
| Surgery/trauma^b | 16 (26) | 2 (3) | |
| Chronic lung disease | 17 (28) | 15 (21) | |
| CHF | 12 (20) | 9 (12) | |
| Ischemic heart disease | 12 (20) | 7 (10) | |
| Pulmonary hypertension | 0 | 1 (1) | |
| Collagen vascular disease | 1 (2) | 2 (3) | |
| PESI class^c | | | 0.840 |
| I | 15 (25) | 24 (33) | |
| II | 13 (21) | 16 (22) | |
| III | 12 (20) | 13 (18) | |
| IV | 9 (15) | 8 (11) | |
| V | 12 (20) | 12 (16) | |

NOTE: Data are presented as n (%) unless otherwise specified. Abbreviations: CHF, congestive heart failure; M, male; PESI, Pulmonary Embolism Severity Index; SD, standard deviation.
a. Patients who were being actively treated for a malignancy.
b. Patients who had documented major surgery or were involved in a major trauma and hospitalized for this within 3 months prior to identification of filling defect.
c. The PESI class scoring system is a risk‐stratification tool for patients with acute pulmonary embolism. It uses 11 prognostic variables to predict in‐hospital and all‐cause mortality.[11]
Mortality and Recurrence of Treated and Untreated Patients With Single Peripheral Filling Defects
| Treatment | Death or Recurrent VTE, n (% All Patients) | Adjusted OR for Combined Outcome (95% CI)^a | 90‐Day All‐Cause Mortality, n (% All Patients) | Adjusted OR (95% CI)^a | 90‐Day Recurrence, n (% All Patients) | Adjusted OR (95% CI)^a |
|---|---|---|---|---|---|---|
| Any treatment, n=61 | 8 (6.0) | 1.50 (0.43–5.20) | 5 (3.7) | 1.00 (0.25–3.98) | 3 (2.2) | 1.10 (0.12–9.92) |
| Warfarin, n=51 | 5 (3.7) | 0.75 (0.20–2.85) | 2 (1.5) | 0.26 (0.04–1.51) | 3 (2.2) | 2.04 (0.23–18.04) |
| IVC filter, n=10 | 3 (2.2) | 5.77 (1.22–27.36) | 3 (2.2) | 10.60 (2.10–53.56) | 0 | NA |
| None, n=73 | 7 (5.2) | Referent | 5 (3.7) | Referent | 2 (1.5) | Referent |

NOTE: Abbreviations: CI, confidence interval; IVC, inferior vena cava; NA, not applicable; OR, odds ratio; PESI, Pulmonary Embolism Severity Index; VTE, venous thromboembolism.
a. Adjusted for PESI and patient age and sex. Models were fitted separately for any treatment vs no treatment, for warfarin vs no treatment, and for IVC filter vs no treatment.

Use of Secondary Diagnostic Tests

A CUS was performed on 42 of the 153 patients (27%) with studies noting a SPFD. Six CUSs were positive, with 5 of the patients receiving anticoagulation and the sixth an IVC filter. A second lung‐imaging study was done in 10 (7%) of the 134 patients in the primary‐analysis group: 1 conventional pulmonary angiogram that was normal and 9 ventilation‐perfusion scans, among which 4 were normal, 2 were intermediate probability for PE, 2 were low probability for PE, and 1 was very low probability for PE. The 2 patients whose scans were read as intermediate probability and 1 patient whose scan was read as low probability were treated, and none of the patients with normal scans received treatment. None of these 10 patients died or had a postdischarge VTE during the 90‐day follow‐up period.

Factors Associated With Treatment

In the risk‐adjusted model, patient characteristics associated with treatment were immobility, previous VTE, and acute mental‐status change (Table 3). When the radiologist concluded that the SPFD was a PE, there was a highly increased likelihood of being treated. These factors were selected based on the MCMC simulation, and the final model had a goodness‐of‐fit P value of 0.69, indicating good fit. Vital‐sign abnormalities, comorbid illnesses, history of cancer, ethnicity, insurance status, and the presence of pulmonary consultation were not associated with the decision to treat. The 3 patient factors (immobility, previous VTE, and absence of mental‐status change), combined with the initial impression of the radiologist, were strongly predictive of the decision to treat (C statistic: 0.87). None of the subset of patients who had a negative CUS and a normal or very low probability ventilation‐perfusion scan received treatment. Eighty of the 134 (60%) patients had an active malignancy, chronic lung disease, heart failure, or evidence of ischemic heart disease; all 10 patients who died were from this subset of patients.

Factors Associated With the Decision to Treat
| Factors | Adjusted OR | 95% CI | Probability of Being Statistically Associated With the Decision to Treat |
|---|---|---|---|
| Immobility | 3.9 | 1.45–10.6 | 0.78 |
| Acute mental‐status change | 0.14 | 0.02–0.84 | 0.64 |
| Initial impression of radiologist | 24.68 | 5.4–112.89 | 0.86 |
| Prior VTE | 3.72 | 1.18–11.67 | 0.70 |

NOTE: Abbreviations: CI, confidence interval; OR, odds ratio; VTE, venous thromboembolism.

DISCUSSION

This retrospective study, the largest to date on this question, examines treatment and outcomes in patients with a SPFD. We found that SPFDs were common, appearing in approximately 3% of all the CTPAs performed. Among the studies that were deemed positive for PE, SPFDs comprised nearly one‐third. Treatment of a SPFD, whether concluded to be a PE or not, was not associated with a mortality benefit or a difference in postdischarge VTE within 90 days. Our results add to the weight of smaller case‐control and retrospective series that also found no benefit from treating small PEs.[7, 12, 13, 14, 15]

Given these data, why might physicians choose to treat? Physicians may feel compelled to anticoagulate due to extrapolation of data from the early studies showing a fatality rate of up to 30% in untreated PE.[2] Also, physicians may harbor the concern that, though small emboli may pose no immediate danger, they serve as a marker of hypercoagulability and as such are a harbinger of subsequent large clots. A reflexive treatment response to the radiologist's conclusion that the filling defect is a PE may also play a part. Balancing this concern is the recognition that the treatment for acute PE is not benign. The age‐adjusted incidence of major bleeding (eg, gastrointestinal or intracranial) with warfarin has increased by 71%, from 3.1 to 5.3 per 100,000, since the introduction of CTPA.[6] Also, as seen in this study, a substantial percentage of patients will incur the morbidity and cost of IVC‐filter placement.

When physicians face management uncertainty, they consider risk factors for the condition investigated, consult experts, employ additional studies, and weigh patient preference. In this study, history of immobility and VTE were, indeed, positively associated with treatment, but change in mental status was negatively so. Given that the PESI score is higher with change in mental status, this finding is superficially paradoxical but unsurprising. Mental‐status change is unlikely to stem from a SPFD, and its presence heightens the risks of anticoagulation, hence dissuading treatment. Pulmonary consultations were documented in less than half of the cases and did not clearly sway the treatment decision. Determining whether more patients would have been treated if pulmonologists were not involved would require a prospective study.

The most important association with treatment was how the radiologist interpreted the SPFD. Even then, the influence of the radiologist's interpretation was far from complete: 40% of the cases in which PE was called went untreated, and 4 cases received treatment despite PE not being called. The value of the radiologist's interpretation is further undercut by the modest interobserver agreement found on over‐read, which is in line with previous reports and reflective of the lack of a gold standard for diagnosing isolated peripheral PE.[3, 12, 16]

Even if radiologists could agree upon what they are seeing, the question remains about the pathological importance. Unrecognized PEs incidental to the cause of death are commonly found at autopsy. Autopsy studies reveal that 52% to 64% of patients have PE; and, if multiple blocks of lung tissue are studied, the prevalence increases up to 90%.[17, 18] In the series by Freiman et al., 59% of the identified thrombi were small enough not to be recognized on routine gross examination.[17] Furthermore, an unknown percentage of small clots, especially in the upper lobes, are in situ thrombi rather than emboli.[18] In the case of small dot‐like clots, Suh and colleagues have speculated that they represent normal embolic activity from the lower limbs, which are cleared routinely by the lung serving in its role as a filter.[19] Although our study only examined SPFD, the accumulation of small emboli could have pathologic consequences. In their review, Galiè and Kim reported that 12% of patients with chronic thromboembolic pulmonary hypertension who underwent pulmonary endarterectomy had disease confined to the distal segmental and subsegmental arteries.[13]

Use of secondary studies could mitigate some of the diagnostic and management uncertainty, but they were obtained in only about a quarter of the cases. The use of a second lung‐imaging study following CTPA is not recommended in guidelines or diagnostic algorithms, but in our institution a significant minority of physicians were employing these tests to clarify the nature of the filling defects.[20] Tapson, speaking to the treatment dilemma that small PEs present, has suggested that prospective trials on this topic employ tests that investigate risk for poor outcome if untreated, including cardiopulmonary reserve, D‐dimer, and the presence of lower‐limb thrombus.[21] Indeed, an ongoing study is examining the 90‐day outcomes of patients with single or multiple subsegmental emboli and a negative CUS.[22]

Ten of the 134 patients (7.4%) with peripheral filling defects died within 90 days. It is difficult to establish whether these deaths were PE‐specific mortalities because there was a high degree of comorbid illness in this cohort. Five of the 134 (3.7%) had recurrent VTE, which is comparable to the outcomes in other studies.[23]

There are limitations to this study. This study is the first to limit analysis of the filling defects to single defects in the segmental or subsegmental pulmonary arteries. This subset of patients includes those with the least clot burden, therefore representing the starkest decision‐making treatment dilemma, and the incidence of these clots is not insignificant. As a retrospective study, we could not fully capture all of the considerations that may have factored into the clinicians' decision‐making regarding treatment, including patient preference. Because of inadequate documentation, especially in the emergency department notes, we were unable to calculate pretest probability. Also, we cannot exclude that subclinical VTEs were occurring that would later harm the patients. We did not analyze the role of D‐dimer testing because that test is validated to guide the decision to obtain lung‐imaging studies and not to inform the treatment decision. In our cohort, 89 (66%) of our 134 patients were already hospitalized for other diagnoses prior to PE being queried. Moreover, many of these patients had active malignancy or were being treated for pneumonia, which would decrease the positive predictive value of the D‐dimer test. D‐dimer performs poorly when used for prognosis.[24] This is a single‐center study; therefore, the comparability of our findings to other centers may be an issue, although our findings generally accord with those from other single‐center studies.[7, 12, 24, 25] We determined the recurrence rate from the hospital records and could have missed cases diagnosed elsewhere. However, our hospital is the only one in the city and serves the vast majority of patients in the area, and 88% of our cohort had a repeat visit to our hospital subsequently. In addition, the radiology service is the only one in the area that provides outpatient CUS, CTPA, and ventilation‐perfusion scan studies. Our study is the largest to date on this issue. However, our sample size is somewhat modest, and consequently the factors associated with treatment have large confidence intervals. We are therefore constrained in recommending empiric application of our findings. Nonetheless, our results in terms of no difference in mortality and recurrence between treated and untreated patients are in keeping with other studies on this topic. Also, our simulation analysis did reveal factors that were highly associated with the decision to treat. These findings as a whole strongly point to the need for a larger study on this issue, because, as we and other authors have argued, the consequences of treatment are not benign.[6]

In conclusion, this study shows that SPFDs are common and that there was no difference in 90‐day mortality between treated and untreated patients, regardless of whether the defects were interpreted as PE or not. Physicians appear to rely heavily on the radiologist's interpretation for their treatment decision, but they will also treat when the interpretation is not PE and not infrequently abstain when it is. Treatment remains common despite the modest agreement among radiologists as to whether the peripheral filling defect even represents PE. When secondary imaging studies are obtained and negative, physicians forgo treatment. Larger studies are needed to help clarify our findings and should include decision‐making algorithms that include secondary imaging studies, because these studies may provide enough reassurance when negative to sway physicians against treatment.

Disclosure

Nothing to report.

References
  1. Calder KK, Herbert M, Henderson SO. The mortality of untreated pulmonary embolism in emergency department patients. Ann Emerg Med. 2005;45:302–310.
  2. Dalen J. Pulmonary embolism: what have we learned since Virchow? Natural history, pathophysiology, and diagnosis. Chest. 2002;122:1400–1456.
  3. Schoepf JU, Holzknecht N, Helmberger TK, et al. Subsegmental pulmonary emboli: improved detection with thin‐collimation multi‐detector row spiral CT. Radiology. 2002;222:483–490.
  4. Stein PD, Kayali F, Olson RE. Trends in the use of diagnostic imaging in patients hospitalized with acute pulmonary embolism. Am J Cardiol. 2004;93:1316–1317.
  5. Trowbridge RL, Araoz PA, Gotway MB, Bailey RA, Auerbach AD. The effect of helical computed tomography on diagnostic and treatment strategies in patients with suspected pulmonary embolism. Am J Med. 2004;116:84–90.
  6. Wiener RS, Schwartz LM, Woloshin S. Time trends in pulmonary embolism in the United States: evidence of overdiagnosis. Arch Intern Med. 2011;171:831–837.
  7. Donato AA, Khoche S, Santora J, Wagner B. Clinical outcomes in patients with isolated subsegmental pulmonary emboli diagnosed by multidetector CT pulmonary angiography. Thromb Res. 2010;126:e266–e270.
  8. Torbicki A, Perrier A, Konstantinides S, et al. Guidelines on the diagnosis and management of acute pulmonary embolism: the Task Force for the Diagnosis and Management of Acute Pulmonary Embolism of the European Society of Cardiology (ESC). Eur Heart J. 2008;29:2276–2315.
  9. Stein PD, Goodman LR, Hull RD, Dalen JE, Matta F. Diagnosis and management of isolated subsegmental pulmonary embolism: review and assessment of the options. Clin Appl Thromb Hemost. 2012;18:20–26.
  10. Intelius. Available at: http://www.intelius.com. Accessed September 30, 2010.
  11. Chan CM, Woods C, Shorr AF. The validation and reproducibility of the pulmonary embolism severity index. J Thromb Haemost. 2010;8:1509–1514.
  12. Eyer BA, Goodman LR, Washington L. Clinicians' response to radiologists' reports of isolated subsegmental pulmonary embolism or inconclusive interpretation of pulmonary embolism using MDCT. AJR Am J Roentgenol. 2005;184:623–628.
  13. Galiè N, Kim NH. Pulmonary microvascular disease in chronic thromboembolic pulmonary hypertension. Proc Am Thorac Soc. 2006;3:571–576.
  14. Goodman L. Small pulmonary emboli: what do we know? Radiology. 2005;234:654–658.
  15. Stein PD, Henry JW, Gottschalk A. Reassessment of pulmonary angiography for the diagnosis of pulmonary embolism: relation of interpreter agreement to the order of the involved pulmonary arterial branch. Radiology. 1999;210:689–691.
  16. Patel S, Kazerooni EA. Helical CT for the evaluation of acute pulmonary embolism. AJR Am J Roentgenol. 2005;185:135–149.
  17. Freiman DG, Suyemoto J, Wessler S. Frequency of pulmonary thromboembolism in man. N Engl J Med. 1965;272:1278–1280.
  18. Wagenvoort CA. Pathology of pulmonary thromboembolism. Chest. 1995;107(1 suppl):10S–17S.
  19. Suh JM, Cronan JJ, Healey TT. Dots are not clots: the over‐diagnosis and over‐treatment of PE. Emerg Radiol. 2010;17:347–352.
  20. Moores LK, King CS, Holley AB. Current approach to the diagnosis of acute nonmassive pulmonary embolism. Chest. 2011;140:509–518.
  21. Tapson VF. Acute pulmonary embolism: comment on "time trends in pulmonary embolism in the United States." Arch Intern Med. 2011;171:837–839.
  22. National Institutes of Health, ClinicalTrials.gov; Carrier M. A study to evaluate the safety of withholding anticoagulation in patients with subsegmental PE who have a negative serial bilateral lower extremity ultrasound (SSPE). ClinicalTrials.gov identifier: NCT01455818.
  23. Stein PD, Henry JW, Relyea B. Untreated patients with pulmonary embolism: outcome, clinical, and laboratory assessment. Chest. 1995;107:931–935.
  24. Stein PD, Janjua M, Matta F, Alrifai A, Jaweesh F, Chughtai HL. Prognostic value of D‐dimer in stable patients with pulmonary embolism. Clin Appl Thromb Hemost. 2011;17:E183–E185.
  25. Le Gal G, Righini M, Parent F, van Strijen M, Couturaud F. Diagnosis and management of subsegmental pulmonary embolism. J Thromb Haemost. 2006;4:724–731.
Issue
Journal of Hospital Medicine - 9(1)
Page Number
42-47

Over the past decade, the use of chest computed tomography scans with pulmonary angiography (CTPA) for diagnosis of pulmonary embolism (PE) has soared due to the ease of acquisition, the desire for the additional information that CT scanning may provide, and heightened sensitivity to medical liability.[1, 2, 3, 4, 5, 6] In parallel with this shift, the incidence of PE has nearly doubled, despite no recorded increase in the pretest probability of the disease, increasing from 62 per 100,000 to 112 per 100,000 during the period of 1993 to 2006.6 One major explanation for this increase is that the improvement in CTPA resolution has enabled radiologists to identify more small peripheral (ie, segmental and subsegmental) filling defects. When confronted with the finding of a small peripheral filling defect on CTPA, clinicians often face a management quandary. Case series and retrospective series on outcomes of these patients do not support treatment, but they are limited by having small numbers of patients; the largest examined 93 patients and provided no insight into the treatment decision.[7] Uncertainty exists, furthermore, about the pathologic meaning of small peripheral filling defects.[8] Clinicians must weigh these arguments and the risk of anticoagulation against concerns about the consequences of untreated pulmonary thromboemboli. More information is needed, therefore, on the outcomes of patients with peripheral filling defects, and on variables impacting the treatment decision, in order to help clinicians manage these patients.[9]

In this study, we analyzed cases of patients with a single peripheral filling defect (SPFD). We choose to look at patients with a SPFD because they represent the starkest decision‐making treatment dilemma and are not infrequent. We assessed the 90‐day mortality and rate of postdischarge venous thromboembolism (VTE) of treated and untreated patients and identified characteristics of treated and untreated patients with a SPFD. We wished to determine the incidence of SPFD among patients evaluated with CTPA and to determine how often the defect is called a PE by the radiologist. We also aimed to determine what role secondary studies play in helping to clarify the diagnosis and management of SPFD and to identify other factors that may influence the decision to treat patients with this finding.

METHODS

Site

This retrospective cohort study was conducted at a community hospital in Norwalk, CT. The hospital is a 328‐bed, not‐for‐profit, acute‐care community teaching hospital that serves a population of 250,000 in lower Fairfield County, Connecticut, and is affiliated with the Yale School of Medicine.

Subjects

The reports of all CTPAs done over a 66‐month period from 2006 to 2010 were individually reviewed. Any study that had a filling defect reported in the body of the radiology report was selected for initial consideration. A second round of review was conducted, extracting only CTPAs with a SPFD for study inclusion. We then excluded from the primary analysis those studies in which the patient had a concurrently positive lower‐extremity ultrasound, the medical records could not be located, and the patient age was <18 years. The study was approved by the investigational review board of the hospital.

Radiographic Methods

The CTPAs were performed using the SOMATOM Definition scanner, a 128‐slice CT scanner with 0.5‐cm collimation (Siemens, Erlangen, Germany). The CT‐scanner technology did not change over the 66 months of the study period.

Data Collection

Clinical data were abstracted from the physical charts and from the computerized practitioner order‐entry system (PowerChart electronic medical record system; Cerner Corp, Kansas City, MO). Three abstractors were trained in the process of chart abstraction using training sets of 10 records. The Fleiss was used to assess concordance. The Fleiss was 0.6 at the initial training set, and after 3 training sets it improved to 0.9. In‐hospital all‐cause mortality was determined using the hospital death records, and out‐of‐hospital mortality data were obtained from the online statewide death records.[10] Postdischarge VTE was assessed by interrogating the hospital radiology database for repeat ventilation perfusion scan, conventional pulmonary angiography, lower‐limb compression ultrasound (CUS) or CTPA studies that were positive within 90 days of the index event. Treatment was defined as either anticoagulation, ascertained from medication list at discharge, or inferior vena cava (IVC) filter placement, documented at the index visit.

To better understand the variation in interpretation of SPFD, all CTPA studies that showed a SPFD were also over‐read by 2 radiologists who reached a consensus opinion regarding whether the finding was a PE. The radiologists who over‐read the studies were blinded to the final impression of the initial radiologist. Our study group comprised 3 radiologists; 1 read <20% of the initial studies and the other 2 had no input in the initial readings. One of the radiologists was an attending and the other 2 were fourth‐year radiology residents.

Baseline Variables and Outcome Measures

A peripheral filling defect was defined as a single filling defect located in either the segmental or subsegmental pulmonary artery. The primary variables of interest were patient demographics (age, sex, and race), insurance status, the presence of pulmonary input in the management of the patient, history of comorbid conditions (prior VTE, congestive heart failure, chronic lung disease, pulmonary hypertension, coronary artery disease, surgery within the last 6 months, active malignancy, and acute pulmonary edema or syncope at presentation), and risk class as assessed by the Pulmonary Embolism Severity Index (PESI) score.[11] The PESI scoring system is a risk‐stratification tool for patients with acute PE. It uses 11 prognostic variables to predict in‐hospital and all‐cause mortality: age, sex, heart rate ≥110 bpm, systolic blood pressure <100 mm Hg, congestive heart failure, presence of malignancy, chronic lung disease, respiratory rate ≥30/minute, temperature <36°C, altered mental status, and oxygen saturation <90%. Additional variables of interest were the proportion of patients in the treated and untreated arms who had a pulmonary consultation at the index visit and the role, if any, of a second test for VTE at the index visit. The primary outcomes investigated were all‐cause 90‐day mortality and 90‐day incidence of postdischarge VTE from the index visit in the treated and untreated groups. Those patients whose studies had a SPFD that was concluded by the initial radiologist to be a PE on the final impression of the report were analyzed as a subgroup.
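To make the PESI classification concrete, the sketch below shows how a class could be assigned from the 11 variables. It is not taken from the study; the point values are those commonly published for the original PESI derivation and should be verified against reference 11 before any use.

```python
# Illustrative PESI scoring sketch. Point weights follow the commonly published
# original PESI; they are assumptions for illustration, not part of this study.
def pesi_class(age, male, cancer, heart_failure, chronic_lung_disease,
               pulse_ge_110, sbp_lt_100, rr_ge_30, temp_lt_36,
               altered_mental_status, o2_sat_lt_90):
    score = age                                   # age contributes 1 point per year
    score += 10 if male else 0
    score += 30 if cancer else 0
    score += 10 if heart_failure else 0
    score += 10 if chronic_lung_disease else 0
    score += 20 if pulse_ge_110 else 0
    score += 30 if sbp_lt_100 else 0
    score += 20 if rr_ge_30 else 0
    score += 20 if temp_lt_36 else 0
    score += 60 if altered_mental_status else 0
    score += 20 if o2_sat_lt_90 else 0
    if score <= 65:
        return score, "I"
    if score <= 85:
        return score, "II"
    if score <= 105:
        return score, "III"
    if score <= 125:
        return score, "IV"
    return score, "V"

# Example: a 72-year-old woman with chronic lung disease and oxygen saturation
# <90% scores 72 + 10 + 20 = 102 points, placing her in PESI class III.
print(pesi_class(72, False, False, False, True, False, False, False, False, False, True))
```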

Statistical Analysis

Bivariate analysis was conducted to compare patient baseline characteristics between treated and untreated groups. The χ2 test was used for comparing binary or categorical variables, and the Student t test was used for comparing continuous variables. A logistic regression model utilizing the Markov chain Monte Carlo (MCMC) method was employed for assessing the differences in 90‐day mortality and 90‐day postdischarge VTE between the treated group and untreated group, adjusting for patient baseline characteristics. This model was also used for identifying factors associated with the decision to treat. We reported the odds ratio (OR) and its corresponding 95% confidence interval (CI) for each estimate identified from the model. All analyses were conducted using SAS version 9.3 64‐bit software (SAS Institute Inc, Cary, NC).
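For readers unfamiliar with how an adjusted OR and its 95% CI are extracted from such a model, the sketch below fits a logistic regression on simulated data. It is not the study's SAS/MCMC code; the authors used a Bayesian MCMC fit, whereas this maximum-likelihood version, with invented variable names and coefficients, is shown only to illustrate the mechanics.

```python
# Minimal sketch: adjusted OR (95% CI) for treatment from a logistic regression.
# All data are simulated and all variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),        # 1 = anticoagulation or IVC filter
    "age": rng.normal(65, 20, n),
    "male": rng.integers(0, 2, n),
    "pesi_class": rng.integers(1, 6, n),     # PESI class I-V coded 1-5
})
# Simulate a 90-day mortality outcome that depends on age and PESI class only.
logit_p = -3.5 + 0.02 * (df["age"] - 65) + 0.4 * df["pesi_class"]
df["died_90d"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

fit = smf.logit("died_90d ~ treated + age + male + pesi_class", data=df).fit(disp=False)
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "CI_lower": np.exp(fit.conf_int()[0]),
    "CI_upper": np.exp(fit.conf_int()[1]),
})
print(or_table.loc[["treated"]])             # adjusted OR and 95% CI for treatment
```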

RESULTS

A total of 4906 CTPAs were screened during the 66 months reviewed, identifying 518 (10.6%) with any filling defect and 153 (3.1%) with a SPFD. Thirteen patients were excluded from the primary analysis because their records could not be located, and another 6 were excluded because they had a concurrently positive CUS. The primary analysis was performed, therefore, with 134 patients. The inpatient service ordered 78% of the CTPAs. The initial radiologist stated in the impression section of the report that a PE was present in 99 of 134 (73.9%) studies. On over‐read of the 134 studies, 100 of these were considered to be positive for a PE. There was modest agreement between the initial impression and the consensus impression at over‐read (κ=0.69).
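The agreement statistic reported here is a two-rater (initial impression versus consensus over-read) kappa. The short sketch below, with hypothetical labels standing in for the 134 studies, shows how such a value can be computed; it is illustrative only and does not reproduce the study's result.

```python
# Illustrative sketch: Cohen's kappa for agreement between the initial radiology
# impression and the blinded consensus over-read (labels below are invented).
from sklearn.metrics import cohen_kappa_score

initial_read   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]   # 1 = called PE, 0 = not called PE
consensus_read = [1, 1, 0, 0, 0, 1, 1, 1, 1, 1]

print(f"kappa = {cohen_kappa_score(initial_read, consensus_read):.2f}")
```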

Association of Treatment With Mortality and Recurrence

In the primary‐analysis group, 61 (45.5%) patients were treated: 50 patients had warfarin alone, 10 patients had an IVC filter alone, and 1 patient had both warfarin and an IVC filter. No patient was treated solely with low‐molecular‐weight heparin long‐term. Whenever low‐molecular‐weight heparin was used, it was as a bridge to warfarin. The characteristics of the patients in the treatment groups were similar (Table 1). Four of the treated patients had a CTPA with SPFD that was not called a PE in the initial reading. Ten patients died, 5 each in the treated and untreated groups, yielding an overall mortality rate at 90 days of 7.4% (Table 2). Analysis of the 134 patients showed no difference in adjusted 90‐day mortality between treated and untreated groups (OR: 1.0, 95% CI: 0.25‐3.98). The number of patients with postdischarge VTE within 90 days was 5 of 134 (3.7%) patients, 3 treated and 2 untreated, and too few to show a treatment effect. Among the 99 cases considered by the initial radiologist to be definite for a PE, 59 (59.6%) were treated and 40 (40.4%) untreated. In this subgroup, no mortality benefit was observed with treatment (OR: 1.42, 95% CI: 0.28‐8.05).

Baseline Characteristics of Treated and Untreated Patients With Single Peripheral Filling Defects
CharacteristicTreated, n=61Untreated, n=73P Value
  • NOTE: Data are presented as n (%) unless otherwise specified. Abbreviations: CHF, congestive heart failure; M, male; PESI, Pulmonary Embolism Severity Index; SD, standard deviation.

  • Patients who were being actively treated for a malignancy.

  • Patients who had documented major surgery or were involved in a major trauma and hospitalized for this within 3 months prior to identification of filling defect.

  • The PESI class scoring system is a risk‐stratification tool for patients with acute pulmonary embolism. It uses 11 prognostic variables to predict in hospital and all‐cause mortality.[11]

Age, y, mean (SD)67 (20)62 (21)0.056
Sex, M29 (48)34 (47)0.831
Race/ethnicity  0.426
White43 (70)57 (78) 
Black12 (20)8 (11) 
Hispanic6 (10)7 (10) 
Other01 (2) 
Primary insurance  0.231
Medicare30 (50)29 (40) 
Medicaid2 (3)8 (11) 
Commercial27 (44)30 (41) 
Self‐pay2 (3)6 (8) 
Pulmonary consultation29 (48)28 (38)0.482
Comorbid illnesses  0.119
Cancera13 (21)17 (23) 
Surgery/traumab16 (26)2 (3) 
Chronic lung disease17 (28)15 (21) 
CHF12 (20)9 (12) 
Ischemic heart disease12 (20)7 (10) 
Pulmonary hypertension01 (1) 
Collagen vascular disease1 (2)2 (3) 
PESI classc 0.840
I15 (25)24 (33) 
II13 (21)16 (22) 
III12 (20)13 (18) 
IV9 (15)8 (11) 
V12 (20)12 (16) 
Table 2. Mortality and Recurrence of Treated and Untreated Patients With Single Peripheral Filling Defects

Treatment | Death or Recurrent VTE, n (% All Patients) | Adjusted OR for Combined Outcome (95% CI) (a) | 90-Day All-Cause Mortality, n (% All Patients) | Adjusted OR (95% CI) (a) | 90-Day Recurrence, n (% All Patients) | Adjusted OR (95% CI) (a)
Any treatment, n=61 | 8 (6.0) | 1.50 (0.43-5.20) | 5 (3.7) | 1.00 (0.25-3.98) | 3 (2.2) | 1.10 (0.12-9.92)
Warfarin, n=51 | 5 (3.7) | 0.75 (0.20-2.85) | 2 (1.5) | 0.26 (0.04-1.51) | 3 (2.2) | 2.04 (0.23-18.04)
IVC filter, n=10 | 3 (2.2) | 5.77 (1.22-27.36) | 3 (2.2) | 10.60 (2.10-53.56) | 0 | NA
None, n=73 | 7 (5.2) | Referent | 5 (3.7) | Referent | 2 (1.5) | Referent

NOTE: Abbreviations: CI, confidence interval; IVC, inferior vena cava; NA, not applicable; OR, odds ratio; PESI, Pulmonary Embolism Severity Index; VTE, venous thromboembolism.
(a) Adjusted for PESI and patient age and sex. Models were fitted separately for any treatment vs no treatment, for warfarin vs no treatment, and for IVC filter vs no treatment.

Use of Secondary Diagnostic Tests

A CUS was performed on 42 of the 153 patients (27%) with studies noting a SPFD. Six CUSs were positive, with 5 of the patients receiving anticoagulation and the sixth an IVC filter. A second lung‐imaging study was done in 10 (7%) of the 134 patients in the primary‐analysis group: 1 conventional pulmonary angiogram that was normal and 9 ventilation‐perfusion scans, among which 4 were normal, 2 were intermediate probability for PE, 2 were low probability for PE, and 1 was very low probability for PE. The 2 patients whose scans were read as intermediate probability and 1 of the patients whose scans were read as low probability were treated, and none of the patients with normal scans received treatment. None of these 10 patients died or had a postdischarge VTE during the 90‐day follow‐up period.

Factors Associated With Treatment

In the risk‐adjusted model, the patient characteristics associated with the treatment decision were immobility, previous VTE, and acute mental‐status change, the last of which was negatively associated (Table 3). When the radiologist concluded that the SPFD was a PE, the likelihood of being treated was markedly increased. These factors were selected based on the MCMC simulation, and the final model had a goodness‐of‐fit P value of 0.69, indicating that the model fit the data well. Vital‐sign abnormalities, comorbid illnesses, history of cancer, ethnicity, insurance status, and the presence of a pulmonary consultation were not associated with the decision to treat. The 3 patient factors (immobility, previous VTE, and absence of mental‐status change), combined with the initial impression of the radiologist, were strongly predictive of the decision to treat (C statistic: 0.87). None of the subset of patients who had a negative CUS and a normal or very low probability ventilation‐perfusion scan received treatment. Eighty of the 134 (60%) patients had an active malignancy, chronic lung disease, heart failure, or evidence of ischemic heart disease; all 10 patients who died were from this subset.
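The C statistic reported above is the area under the receiver operating characteristic curve for the treatment-decision model's predicted probabilities. The brief sketch below, using invented stand-in values rather than the study's model output, shows how that quantity is computed.

```python
# Illustrative sketch: computing a C statistic (ROC AUC) from observed treatment
# decisions and model-predicted probabilities. Values below are hypothetical.
from sklearn.metrics import roc_auc_score

treated   = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]                        # observed decision to treat
pred_prob = [0.9, 0.2, 0.7, 0.35, 0.4, 0.1, 0.6, 0.65, 0.85, 0.15]  # model-predicted probability
print(f"C statistic = {roc_auc_score(treated, pred_prob):.2f}")
```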

Table 3. Factors Associated With the Decision to Treat

Factors | Adjusted OR | 95% CI | Probability of Being Statistically Associated With the Decision to Treat
Immobility | 3.9 | 1.45-10.6 | 0.78
Acute mental-status change | 0.14 | 0.02-0.84 | 0.64
Initial impression of radiologist | 24.68 | 5.4-112.89 | 0.86
Prior VTE | 3.72 | 1.18-11.67 | 0.70

NOTE: Abbreviations: CI, confidence interval; OR, odds ratio; VTE, venous thromboembolism.

DISCUSSION

This retrospective study, the largest to date on this topic, examines treatment and outcomes in patients with a SPFD. We found that SPFDs were common, appearing in approximately 3% of all the CTPAs performed. Among the studies that were deemed positive for PE, SPFDs comprised nearly one‐third. Treatment of a SPFD, whether or not it was concluded to be a PE, was not associated with a mortality benefit or a difference in postdischarge VTE within 90 days. Our results add to the weight of smaller case‐control and retrospective series that also found no benefit from treating small PE.[7, 12, 13, 14, 15]

Given these data, why might physicians choose to treat? Physicians may feel compelled to anticoagulate due to extrapolation of data from early studies showing a fatality rate of up to 30% in untreated PE.[2] Also, physicians may harbor the concern that, though small emboli may pose no immediate danger, they serve as a marker of hypercoagulability and as such are a harbinger of subsequent large clots. A reflexive treatment response to the radiologist's conclusion that the filling defect is a PE may also play a part. Balancing this concern is the recognition that the treatment for acute PE is not benign. The age‐adjusted incidence of major bleeding (eg, gastrointestinal or intracranial) with warfarin has increased by 71%, from 3.1 to 5.3 per 100,000, since the introduction of CTPA.[6] Also, as seen in this study, a substantial percentage of patients will incur the morbidity and cost of IVC‐filter placement.

When physicians face management uncertainty, they consider risk factors for the condition investigated, consult experts, employ additional studies, and weigh patient preference. In this study, history of immobility and prior VTE were, indeed, positively associated with treatment, but change in mental status was negatively so. Given that the PESI score is higher with a change in mental status, this finding is superficially paradoxical but unsurprising: a mental‐status change is unlikely to stem from a SPFD, and its presence heightens the risks of anticoagulation, both of which would dissuade treatment. Pulmonary consultations were documented in less than half of the cases and did not clearly sway the treatment decision. Determining whether more patients would have been treated had pulmonologists not been involved would require a prospective study.

The most important association with treatment was how the radiologist interpreted the SPFD. Even then, the influence of the radiologist's interpretation was far from complete: 40% of the cases in which PE was called went untreated, and 4 cases received treatment despite PE not being called. The value of the radiologist's interpretation is further undercut by the modest interobserver agreement found on over‐read, which is in line with previous reports and reflective of the lack of a gold standard for diagnosing isolated peripheral PE.[3, 12, 16]

Even if radiologists could agree upon what they are seeing, the question of pathological importance remains. Unrecognized PEs incidental to the cause of death are commonly found at autopsy. Autopsy studies reveal that 52% to 64% of patients have PE; and, if multiple blocks of lung tissue are studied, the prevalence increases to as high as 90%.[17, 18] In the series by Freiman et al., 59% of the identified thrombi were small enough not to be recognized on routine gross examination.[17] Furthermore, an unknown percentage of small clots, especially in the upper lobes, are in situ thrombi rather than emboli.[18] In the case of small dot‐like clots, Suh and colleagues have speculated that they represent normal embolic activity from the lower limbs, which is cleared routinely by the lung serving in its role as a filter.[19] Although our study only examined SPFD, the accumulation of small emboli could have pathologic consequences. In their review, Galiè and Kim reported that 12% of patients with chronic thromboembolic pulmonary hypertension who underwent pulmonary endarterectomy had disease confined to the distal segmental and subsegmental arteries.[13]

Use of secondary studies could mitigate some of the diagnostic and management uncertainty, but they were obtained in only about a quarter of the cases. The use of a second lung‐imaging study following CTPA is not recommended in guidelines or diagnostic algorithms, but in our institution a significant minority of physicians were employing these tests to clarify the nature of the filling defects.[20] Tapson, speaking to the treatment dilemma that small PEs present, has suggested that prospective trials on this topic employ tests that assess the risk of a poor outcome if untreated, including cardiopulmonary reserve, D‐dimer, and the presence of lower‐limb thrombus.[21] Indeed, a study examining the 90‐day outcomes of patients with single or multiple subsegmental emboli and a negative CUS is ongoing.[22]

Ten of the 134 patients (7.4%) with peripheral filling defects died within 90 days. It is difficult to establish whether these deaths were PE‐specific mortalities because there was a high degree of comorbid illness in this cohort. Five of the 134 (3.7%) had recurrent VTE, which is comparable to the outcomes in other studies.[23]

There are limitations to this study. This study is the first to limit analysis of filling defects to single defects in the segmental or subsegmental pulmonary arteries; this subset of patients has the least clot burden, therefore representing the starkest decision‐making treatment dilemma, and the incidence of these clots is not negligible. As a retrospective study, we could not fully capture all of the considerations that may have factored into the clinicians' decision‐making regarding treatment, including patient preference. Because of inadequate documentation, especially in the emergency department notes, we were unable to calculate pretest probability. Also, we cannot exclude that subclinical VTEs were occurring that would later harm the patients. We did not analyze the role of D‐dimer testing because that test is validated to guide the decision to obtain lung‐imaging studies and not to inform the treatment decision. In our cohort, 89 of the 134 patients (66%) were already hospitalized for other diagnoses before PE was queried. Moreover, many of these patients had active malignancy or were being treated for pneumonia, which would decrease the positive predictive value of the D‐dimer test, and D‐dimer performs poorly when used for prognosis.[24] This is a single‐center study, so our findings may not generalize to other centers, although they generally accord with those from other single‐center studies.[7, 12, 24, 25] We determined the recurrence rate from the hospital records and could have missed cases diagnosed elsewhere. However, our hospital is the only one in the city and serves the vast majority of patients in the area, 88% of our cohort had a subsequent repeat visit to our hospital, and the radiology service is the only one in the area that provides outpatient CUS, CTPA, and ventilation‐perfusion scan studies. Our study is the largest to date on this issue; nonetheless, our sample size is somewhat modest, and consequently the factors associated with treatment have large confidence intervals. We are therefore constrained in recommending empiric application of our findings. Even so, our finding of no difference in mortality and recurrence between treated and untreated patients is in keeping with other studies on this topic, and our simulation analysis did reveal factors that were highly associated with the decision to treat. These findings as a whole strongly point to the need for a larger study on this issue, because, as we and other authors have argued, the consequences of treatment are not benign.[6]

In conclusion, this study shows that SPFDs are common and that there was no difference in 90‐day mortality between treated and untreated patients, regardless of whether the defects were interpreted as PE or not. Physicians appear to rely heavily on the radiologist's interpretation for their treatment decision, but they will also treat when the interpretation is not PE and not infrequently abstain when it is. Treatment remains common despite the modest agreement among radiologists as to whether the peripheral filling defect even represents a PE. When secondary imaging studies were obtained and were negative, physicians forwent treatment. Larger studies are needed to clarify our findings and should include decision‐making algorithms that incorporate secondary imaging studies, because these studies may provide enough reassurance, when negative, to sway physicians against treatment.

Disclosure

Nothing to report.

References
  1. Calder KK, Herbert M, Henderson SO. The mortality of untreated pulmonary embolism in emergency department patients. Ann Emerg Med. 2005;45:302-310.
  2. Dalen J. Pulmonary embolism: what have we learned since Virchow? Natural history, pathophysiology, and diagnosis. Chest. 2002;122:1400-1456.
  3. Schoepf JU, Holzknecht N, Helmberger TK, et al. Subsegmental pulmonary emboli: improved detection with thin‐collimation multi‐detector row spiral CT. Radiology. 2002;222:483-490.
  4. Stein PD, Kayali F, Olson RE. Trends in the use of diagnostic imaging in patients hospitalized with acute pulmonary embolism. Am J Cardiol. 2004;93:1316-1317.
  5. Trowbridge RL, Araoz PA, Gotway MB, Bailey RA, Auerbach AD. The effect of helical computed tomography on diagnostic and treatment strategies in patients with suspected pulmonary embolism. Am J Med. 2004;116:84-90.
  6. Wiener RS, Schwartz LM, Woloshin S. Time trends in pulmonary embolism in the United States: evidence of overdiagnosis. Arch Intern Med. 2011;171:831-837.
  7. Donato AA, Khoche S, Santora J, Wagner B. Clinical outcomes in patients with isolated subsegmental pulmonary emboli diagnosed by multidetector CT pulmonary angiography. Thromb Res. 2010;126:e266-e270.
  8. Torbicki A, Perrier A, Konstantinides S, et al. Guidelines on the diagnosis and management of acute pulmonary embolism: the Task Force for the Diagnosis and Management of Acute Pulmonary Embolism of the European Society of Cardiology (ESC). Eur Heart J. 2008;29:2276-2315.
  9. Stein PD, Goodman LR, Hull RD, Dalen JE, Matta F. Diagnosis and management of isolated subsegmental pulmonary embolism: review and assessment of the options. Clin Appl Thromb Hemost. 2012;18:20-26.
  10. Intelius. Available at: http://www.intelius.com. Accessed September 30, 2010.
  11. Chan CM, Woods C, Shorr AF. The validation and reproducibility of the pulmonary embolism severity index. J Thromb Haemost. 2010;8:1509-1514.
  12. Eyer BA, Goodman LR, Washington L. Clinicians' response to radiologists' reports of isolated subsegmental pulmonary embolism or inconclusive interpretation of pulmonary embolism using MDCT. AJR Am J Roentgenol. 2005;184:623-628.
  13. Galiè N, Kim NH. Pulmonary microvascular disease in chronic thromboembolic pulmonary hypertension. Proc Am Thorac Soc. 2006;3:571-576.
  14. Goodman L. Small pulmonary emboli: what do we know? Radiology. 2005;234:654-658.
  15. Stein PD, Henry JW, Gottschalk A. Reassessment of pulmonary angiography for the diagnosis of pulmonary embolism: relation of interpreter agreement to the order of the involved pulmonary arterial branch. Radiology. 1999;210:689-691.
  16. Patel S, Kazerooni EA. Helical CT for the evaluation of acute pulmonary embolism. AJR Am J Roentgenol. 2005;185:135-149.
  17. Freiman DG, Suyemoto J, Wessler S. Frequency of pulmonary thromboembolism in man. N Engl J Med. 1965;272:1278-1280.
  18. Wagenvoort CA. Pathology of pulmonary thromboembolism. Chest. 1995;107(1 suppl):10S-17S.
  19. Suh JM, Cronan JJ, Healey TT. Dots are not clots: the over‐diagnosis and over‐treatment of PE. Emerg Radiol. 2010;17:347-352.
  20. Moores LK, King CS, Holley AB. Current approach to the diagnosis of acute nonmassive pulmonary embolism. Chest. 2011;140:509-518.
  21. Tapson VF. Acute pulmonary embolism: comment on "time trends in pulmonary embolism in the United States." Arch Intern Med. 2011;171:837-839.
  22. National Institutes of Health, ClinicalTrials.gov; Carrier M. A study to evaluate the safety of withholding anticoagulation in patients with subsegmental PE who have a negative serial bilateral lower extremity ultrasound (SSPE). ClinicalTrials.gov identifier: NCT01455818.
  23. Stein PD, Henry JW, Relyea B. Untreated patients with pulmonary embolism: outcome, clinical, and laboratory assessment. Chest. 1995;107:931-935.
  24. Stein PD, Janjua M, Matta F, Alrifai A, Jaweesh F, Chughtai HL. Prognostic value of D‐dimer in stable patients with pulmonary embolism. Clin Appl Thromb Hemost. 2011;17:E183-E185.
  25. Le Gal G, Righini M, Parent F, van Strijen M, Couturaud F. Diagnosis and management of subsegmental pulmonary embolism. J Thromb Haemost. 2006;4:724-731.
Issue
Journal of Hospital Medicine - 9(1)
Page Number
42-47
Display Headline
Treatment of single peripheral pulmonary emboli: Patient outcomes and factors associated with decision to treat
Article Source

© 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: O'Neil Green, MBBS, Pulmonary and Critical Care Section, Department of Internal Medicine, Yale New Haven Hospital, 300 Cedar St, New Haven, CT 06520; Telephone: (860) 459‐8719; Fax: (860) 496‐9132; E‐mail: oneil.green@yale.edu

Hospital Mortality Measure for COPD

Article Type
Changed
Sun, 05/21/2017 - 18:06
Display Headline
Development, validation, and results of a risk‐standardized measure of hospital 30‐day mortality for patients with exacerbation of chronic obstructive pulmonary disease

Chronic obstructive pulmonary disease (COPD) affects as many as 24 million individuals in the United States, is responsible for more than 700,000 annual hospital admissions, and is currently the nation's third leading cause of death; it accounted for nearly $49.9 billion in medical spending in 2010.[1, 2] Reported in‐hospital mortality rates for patients hospitalized for exacerbations of COPD range from 2% to 5%.[3, 4, 5, 6, 7] Information about 30‐day mortality rates following hospitalization for COPD is more limited; however, international studies suggest that rates range from 3% to 9%,[8, 9] and 90‐day mortality rates exceed 15%.[10]

Despite this significant clinical and economic impact, there have been no large‐scale, sustained efforts to measure the quality or outcomes of hospital care for patients with COPD in the United States. What little is known about the treatment of patients with COPD suggests widespread opportunities to increase adherence to guideline‐recommended therapies, to reduce the use of ineffective treatments and tests, and to address variation in care across institutions.[5, 11, 12]

Public reporting of hospital performance is a key strategy for improving the quality and safety of hospital care, both in the United States and internationally.[13] Since 2007, the Centers for Medicare and Medicaid Services (CMS) has reported hospital mortality rates on the Hospital Compare Web site, and COPD is 1 of the conditions highlighted in the Affordable Care Act for future consideration.[14] Such initiatives rely on validated, risk‐adjusted performance measures for comparisons across institutions and to enable outcomes to be tracked over time. We present the development, validation, and results of a model intended for public reporting of risk‐standardized mortality rates for patients hospitalized with exacerbations of COPD that has been endorsed by the National Quality Forum.[15]

METHODS

Approach to Measure Development

We developed this measure in accordance with guidelines described by the National Quality Forum,[16] CMS' Measure Management System,[17] and the American Heart Association scientific statement, Standards for Statistical Models Used for Public Reporting of Health Outcomes.[18] Throughout the process we obtained expert clinical and stakeholder input through meetings with a clinical advisory group and a national technical expert panel (see Acknowledgments). Last, we presented the proposed measure specifications and a summary of the technical expert panel discussions online and made a widely distributed call for public comments. We took the comments into consideration during the final stages of measure development (available at https://www.cms.gov/MMS/17_CallforPublicComment.asp).

Data Sources

We used claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files from 2008 to develop and validate the model, and examined model reliability using data from 2007 and 2009. The Medicare enrollment database was used to determine Medicare Fee‐for‐Service enrollment and mortality.

Study Cohort

Admissions were considered eligible for inclusion if the patient was 65 years or older, was admitted to a nonfederal acute care hospital in the United States, and had a principal diagnosis of COPD or a principal diagnosis of acute respiratory failure or respiratory arrest when paired with a secondary diagnosis of COPD with exacerbation (Table 1).

Table 1. ICD-9-CM Codes Used to Define the Measure Cohort

ICD-9-CM | Description
491.21 | Obstructive chronic bronchitis; with (acute) exacerbation; acute exacerbation of COPD, decompensated COPD, decompensated COPD with exacerbation
491.22 | Obstructive chronic bronchitis; with acute bronchitis
491.8 | Other chronic bronchitis; chronic: tracheitis, tracheobronchitis
491.9 | Unspecified chronic bronchitis
492.8 | Other emphysema; emphysema (lung or pulmonary): NOS, centriacinar, centrilobular, obstructive, panacinar, panlobular, unilateral, vesicular; MacLeod's syndrome; Swyer-James syndrome; unilateral hyperlucent lung
493.20 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, unspecified
493.21 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with status asthmaticus
493.22 | Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with (acute) exacerbation
496 | Chronic: nonspecific lung disease, obstructive lung disease, obstructive pulmonary disease (COPD) NOS. (Note: This code is not to be used with any code from categories 491-493.)
518.81 (a) | Other diseases of lung; acute respiratory failure; respiratory failure NOS
518.82 (a) | Other diseases of lung; acute respiratory failure; other pulmonary insufficiency, acute respiratory distress
518.84 (a) | Other diseases of lung; acute respiratory failure; acute and chronic respiratory failure
799.1 (a) | Other ill-defined and unknown causes of morbidity and mortality; respiratory arrest, cardiorespiratory failure

NOTE: Abbreviations: COPD, chronic obstructive pulmonary disease; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; NOS, not otherwise specified.
(a) Principal diagnosis when combined with a secondary diagnosis of acute exacerbation of COPD (491.21, 491.22, 493.21, or 493.22).

If a patient was discharged and readmitted to a second hospital on the same or the next day, we combined the 2 acute care admissions into a single episode of care and assigned the mortality outcome to the first admitting hospital. We excluded admissions for patients who were enrolled in Medicare Hospice in the 12 months prior to or on the first day of the index hospitalization. An index admission was any eligible admission assessed in the measure for the outcome. We also excluded admissions for patients who were discharged against medical advice, those for whom vital status at 30 days was unknown or recorded inconsistently, and patients with unreliable data (eg, age >115 years). For patients with multiple hospitalizations during a single year, we randomly selected 1 admission per patient to avoid survival bias. Finally, to assure adequate risk adjustment we limited the analysis to patients who had continuous enrollment in Medicare Fee‐for‐Service Parts A and B for the 12 months prior to their index admission so that we could identify comorbid conditions coded during all prior encounters.
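The cohort rules above can be made concrete with a short sketch. The example below is not the measure's actual code: it applies the same logic (combining same-day or next-day transfers, applying exclusions, and randomly selecting one admission per patient) to a small, invented claims extract, and every column name and value is hypothetical.

```python
# Minimal sketch of the cohort-construction rules on invented data.
import pandas as pd

adm = pd.DataFrame({
    "patient_id":   [1, 1, 2, 3, 3],
    "hospital_id":  ["A", "B", "A", "C", "C"],
    "admit_date":   pd.to_datetime(["2008-01-05", "2008-01-10", "2008-03-02",
                                    "2008-02-01", "2008-06-15"]),
    "discharge_date": pd.to_datetime(["2008-01-09", "2008-01-14", "2008-03-06",
                                      "2008-02-05", "2008-06-20"]),
    "age": [78, 78, 66, 120, 120],
    "hospice_prior": [False, False, False, False, False],
    "discharged_ama": [False, False, False, False, False],
    "vital_status_30d_known": [True, True, True, True, True],
})

# 1) Combine admissions beginning on the same or next day after a prior
#    discharge (transfers) into one episode attributed to the first hospital.
adm = adm.sort_values(["patient_id", "admit_date"])
prev_dc = adm.groupby("patient_id")["discharge_date"].shift()
transfer_in = (adm["admit_date"] - prev_dc).dt.days <= 1
adm = adm[~transfer_in.fillna(False)]

# 2) Apply exclusions: hospice, discharge against medical advice, unknown
#    30-day vital status, and unreliable data (eg, age > 115 years).
adm = adm[~adm["hospice_prior"] & ~adm["discharged_ama"]
          & adm["vital_status_30d_known"] & adm["age"].between(65, 115)]

# 3) Randomly select one index admission per patient (avoids survival bias).
index_adm = (adm.groupby("patient_id", group_keys=False)
                .apply(lambda g: g.sample(n=1, random_state=0)))
print(index_adm[["patient_id", "hospital_id", "admit_date"]])
```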

Outcomes

The outcome of 30‐day mortality was defined as death from any cause within 30 days of the admission date for the index hospitalization. Mortality was assessed at 30 days to standardize the period of outcome ascertainment,[19] and because 30 days is a clinically meaningful time frame, during which differences in the quality of hospital care may be revealed.
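A minimal sketch of this outcome definition, assuming a death_date column derived from the Medicare enrollment database (the names are hypothetical and the code is illustrative only):

import pandas as pd

def add_30day_mortality(index_adm: pd.DataFrame) -> pd.DataFrame:
    # Death from any cause within 30 days of the index admission date.
    out = index_adm.copy()
    days_to_death = (out["death_date"] - out["admit_date"]).dt.days
    out["death_30d"] = days_to_death.between(0, 30).astype(int)  # missing death date (alive) maps to 0
    return out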

Risk‐Adjustment Variables

We randomly selected half of all COPD admissions in 2008 that met the inclusion and exclusion criteria to create a model development sample. Candidate variables for inclusion in the risk‐standardized model were selected by a clinician team from diagnostic groups included in the Hierarchical Condition Category clinical classification system[20] and included age and comorbid conditions. Sleep apnea (International Classification of Diseases, 9th Revision, Clinical Modification [ICD‐9‐CM] condition codes 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, and 780.57) and mechanical ventilation (ICD‐9‐CM procedure codes 93.90, 96.70, 96.71, and 96.72) were also included as candidate variables.

We defined a condition as present for a given patient if it was coded in the inpatient, outpatient, or physician claims data sources in the preceding 12 months, including the index admission. Because a subset of the condition category variables can represent a complication of care, we did not consider them to be risk factors if they appeared only as secondary diagnosis codes for the index admission and not in claims submitted during the prior year.
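This lookback logic can be sketched as follows, assuming a long‐format diagnosis table in which each row carries the beneficiary, the mapped condition category, and a source label distinguishing prior‐year claims from index‐admission principal and secondary codes; this is an illustration, not the actual measure code.

import pandas as pd

def comorbidity_flags(index_adm: pd.DataFrame, dx: pd.DataFrame, ccs: list) -> pd.DataFrame:
    # dx columns (hypothetical): bene_id, cc, and source in
    # {"prior_year", "index_principal", "index_secondary"}.
    flags = index_adm[["bene_id"]].copy()
    for cc in ccs:
        rows = dx[dx["cc"] == cc]
        # A condition counts as a risk factor if it is coded during the prior 12 months
        # or as the index principal diagnosis; a code appearing only as an index-admission
        # secondary diagnosis may represent a complication of care and is not counted.
        counted = set(rows.loc[rows["source"] != "index_secondary", "bene_id"])
        flags[cc] = flags["bene_id"].isin(counted).astype(int)
    return flags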

We selected final variables for inclusion in the risk‐standardized model based on clinical considerations and a modified approach to stepwise logistic regression. The final patient‐level risk‐adjustment model included 42 variables (Table 2).
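The exact selection algorithm is not detailed here; as a hedged illustration of the general idea, the sketch below runs a simple forward selection while always retaining the clinically mandated (forced‐in) variables marked in Table 2. The entry threshold and the procedure itself are assumptions for this example, not a reproduction of the measure's method.

import statsmodels.api as sm

def forward_select(y, X, forced, candidates, p_enter=0.01):
    # y: binary outcome; X: DataFrame of candidate covariates.
    selected = list(forced)                      # clinically mandated variables stay in
    remaining = [v for v in candidates if v not in selected]
    improved = True
    while improved and remaining:
        improved = False
        pvals = {}
        for v in remaining:
            fit = sm.Logit(y, sm.add_constant(X[selected + [v]])).fit(disp=0)
            pvals[v] = fit.pvalues[v]
        best = min(pvals, key=pvals.get)
        if pvals[best] < p_enter:                # add the most significant remaining candidate
            selected.append(best)
            remaining.remove(best)
            improved = True
    return selected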

Adjusted OR for Model Risk Factors and Mortality in Development Sample (Hierarchical Logistic Regression Model)
Variable | Development Sample (150,035 Admissions at 4537 Hospitals): Frequency %, OR (95% CI) | Validation Sample (149,646 Admissions at 4535 Hospitals): Frequency %, OR (95% CI)
  • NOTE: Abbreviations: CI, confidence interval; DM, diabetes mellitus; ICD‐9‐CM, International Classification of Diseases, 9th Revision, Clinical Modification; OR, odds ratio; CC, condition category.

  • a: Indicates variable forced into the model.

Demographics
Age minus 65 years (continuous) | 1.03 (1.03‐1.04) | 1.03 (1.03‐1.04)
Cardiovascular/respiratory
Sleep apnea (ICD‐9‐CM: 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, 780.57) (a) | 9.57, 0.87 (0.81‐0.94) | 9.72, 0.84 (0.78‐0.90)
History of mechanical ventilation (ICD‐9‐CM: 93.90, 96.70, 96.71, 96.72) (a) | 6.00, 1.19 (1.11‐1.27) | 6.00, 1.15 (1.08‐1.24)
Respirator dependence/respiratory failure (CC 77‐78) (a) | 1.15, 0.89 (0.77‐1.02) | 1.20, 0.78 (0.68‐0.91)
Cardiorespiratory failure and shock (CC 79) | 26.35, 1.60 (1.53‐1.68) | 26.34, 1.59 (1.52‐1.66)
Congestive heart failure (CC 80) | 41.50, 1.34 (1.28‐1.39) | 41.39, 1.31 (1.25‐1.36)
Chronic atherosclerosis (CC 83‐84) (a) | 50.44, 0.87 (0.83‐0.90) | 50.12, 0.91 (0.87‐0.94)
Arrhythmias (CC 92‐93) | 37.15, 1.17 (1.12‐1.22) | 37.06, 1.15 (1.10‐1.20)
Vascular or circulatory disease (CC 104‐106) | 38.20, 1.09 (1.05‐1.14) | 38.09, 1.02 (0.98‐1.06)
Fibrosis of lung and other chronic lung disorder (CC 109) | 16.96, 1.08 (1.03‐1.13) | 17.08, 1.11 (1.06‐1.17)
Asthma (CC 110) | 17.05, 0.67 (0.63‐0.70) | 16.90, 0.67 (0.63‐0.70)
Pneumonia (CC 111‐113) | 49.46, 1.29 (1.24‐1.35) | 49.41, 1.27 (1.22‐1.33)
Pleural effusion/pneumothorax (CC 114) | 11.78, 1.17 (1.11‐1.23) | 11.54, 1.18 (1.12‐1.25)
Other lung disorders (CC 115) | 53.07, 0.80 (0.77‐0.83) | 53.17, 0.83 (0.80‐0.87)
Other comorbid conditions
Metastatic cancer and acute leukemia (CC 7) | 2.76, 2.34 (2.14‐2.56) | 2.79, 2.15 (1.97‐2.35)
Lung, upper digestive tract, and other severe cancers (CC 8) (a) | 5.98, 1.80 (1.68‐1.92) | 6.02, 1.98 (1.85‐2.11)
Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal and other cancers and tumors; other respiratory and heart neoplasms (CC 9‐11) | 14.13, 1.03 (0.97‐1.08) | 14.19, 1.01 (0.95‐1.06)
Other digestive and urinary neoplasms (CC 12) | 6.91, 0.91 (0.84‐0.98) | 7.05, 0.85 (0.79‐0.92)
Diabetes and DM complications (CC 15‐20, 119‐120) | 38.31, 0.91 (0.87‐0.94) | 38.29, 0.91 (0.87‐0.94)
Protein‐calorie malnutrition (CC 21) | 7.40, 2.18 (2.07‐2.30) | 7.44, 2.09 (1.98‐2.20)
Disorders of fluid/electrolyte/acid‐base (CC 22‐23) | 32.05, 1.13 (1.08‐1.18) | 32.16, 1.24 (1.19‐1.30)
Other endocrine/metabolic/nutritional disorders (CC 24) | 67.99, 0.75 (0.72‐0.78) | 67.88, 0.76 (0.73‐0.79)
Other gastrointestinal disorders (CC 36) | 56.21, 0.81 (0.78‐0.84) | 56.18, 0.78 (0.75‐0.81)
Osteoarthritis of hip or knee (CC 40) | 9.32, 0.74 (0.69‐0.79) | 9.33, 0.80 (0.74‐0.85)
Other musculoskeletal and connective tissue disorders (CC 43) | 64.14, 0.83 (0.80‐0.86) | 64.20, 0.83 (0.80‐0.87)
Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 40.80, 1.08 (1.04‐1.12) | 40.72, 1.08 (1.04‐1.13)
Dementia and senility (CC 49‐50) | 17.06, 1.09 (1.04‐1.14) | 16.97, 1.09 (1.04‐1.15)
Drug/alcohol abuse, without dependence (CC 53) (a) | 23.51, 0.78 (0.75‐0.82) | 23.38, 0.76 (0.72‐0.80)
Other psychiatric disorders (CC 60) (a) | 16.49, 1.12 (1.07‐1.18) | 16.43, 1.12 (1.06‐1.17)
Quadriplegia, paraplegia, functional disability (CC 67‐69, 100‐102, 177‐178) | 4.92, 1.03 (0.95‐1.12) | 4.92, 1.08 (0.99‐1.17)
Mononeuropathy, other neurological conditions/injuries (CC 76) | 11.35, 0.85 (0.80‐0.91) | 11.28, 0.88 (0.83‐0.93)
Hypertension and hypertensive disease (CC 90‐91) | 80.40, 0.78 (0.75‐0.82) | 80.35, 0.79 (0.75‐0.83)
Stroke (CC 95‐96) (a) | 6.77, 1.00 (0.93‐1.08) | 6.73, 0.98 (0.91‐1.05)
Retinal disorders, except detachment and vascular retinopathies (CC 121) | 10.79, 0.87 (0.82‐0.93) | 10.69, 0.90 (0.85‐0.96)
Other eye disorders (CC 124) (a) | 19.05, 0.90 (0.86‐0.95) | 19.13, 0.98 (0.85‐0.93)
Other ear, nose, throat, and mouth disorders (CC 127) | 35.21, 0.83 (0.80‐0.87) | 35.02, 0.80 (0.77‐0.83)
Renal failure (CC 131) (a) | 17.92, 1.12 (1.07‐1.18) | 18.16, 1.13 (1.08‐1.19)
Decubitus ulcer or chronic skin ulcer (CC 148‐149) | 7.42, 1.27 (1.19‐1.35) | 7.42, 1.33 (1.25‐1.42)
Other dermatological disorders (CC 153) | 28.46, 0.90 (0.87‐0.94) | 28.32, 0.89 (0.86‐0.93)
Trauma (CC 154‐156, 158‐161) | 9.04, 1.09 (1.03‐1.16) | 8.99, 1.15 (1.08‐1.22)
Vertebral fractures (CC 157) | 5.01, 1.33 (1.24‐1.44) | 4.97, 1.29 (1.20‐1.39)
Major complications of medical care and trauma (CC 164) | 5.47, 0.81 (0.75‐0.88) | 5.55, 0.82 (0.76‐0.89)

Model Derivation

We used hierarchical logistic regression models to model the log‐odds of mortality as a function of patient‐level clinical characteristics and a random hospital‐level intercept. At the patient level, each model adjusts the log‐odds of mortality for age and the selected clinical covariates. The second level models the hospital‐specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of mortality, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.
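In compact notation (ours, not the article's), the two‐level model just described is

\[
\operatorname{logit}\Pr(Y_{ij}=1 \mid x_{ij}, \alpha_j) = \alpha_j + x_{ij}^{\top}\beta,
\qquad \alpha_j \sim N(\mu, \tau^{2}),
\]

where Y_ij indicates 30‐day death for patient i at hospital j, x_ij collects the 42 patient‐level covariates, alpha_j is the hospital‐specific intercept, and tau^2 is the between‐hospital variance reported in Table 3.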

Estimation of Hospital Risk‐Standardized Mortality Rate

We calculated a risk‐standardized mortality rate, defined as the ratio of predicted to expected deaths (similar to observed‐to‐expected), multiplied by the national unadjusted mortality rate.[21] The expected number of deaths for each hospital was estimated by applying the estimated regression coefficients to the characteristics of each hospital's patients, adding the average of the hospital‐specific intercepts, transforming the data by using an inverse logit function, and summing the data from all patients in the hospital to obtain the count. The predicted number of deaths was calculated in the same way, substituting the hospital‐specific intercept for the average hospital‐specific intercept.
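Using the same assumed notation, the calculation described above corresponds to

\[
\mathrm{RSMR}_{j} = \frac{\sum_{i=1}^{n_j}\operatorname{logit}^{-1}\!\left(\hat{\alpha}_{j} + x_{ij}^{\top}\hat{\beta}\right)}
{\sum_{i=1}^{n_j}\operatorname{logit}^{-1}\!\left(\bar{\alpha} + x_{ij}^{\top}\hat{\beta}\right)} \times \bar{y},
\]

where n_j is the number of index admissions at hospital j, alpha‐hat_j is that hospital's estimated intercept, alpha‐bar is the average of the hospital‐specific intercepts, and y‐bar is the national unadjusted 30‐day mortality rate; the numerator is the "predicted" and the denominator the "expected" number of deaths.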

Model Performance, Validation, and Reliability Testing

We used the remaining admissions in 2008 as the model validation sample. We computed several summary statistics to assess the patient‐level model performance in both the development and validation samples,[22] including over‐fitting indices, predictive ability, area under the receiver operating characteristic (ROC) curve, distribution of residuals, and model χ2. In addition, we assessed face validity through a survey of members of the technical expert panel. To assess reliability of the model across data years, we repeated the modeling process using qualifying COPD admissions in both 2007 and 2009. Finally, to assess generalizability, we evaluated the model's performance in an all‐payer sample of data from patients admitted to California hospitals in 2006.
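Two of these summaries, the C statistic and the over‐fitting (calibration) indices, can be computed from the development model's linear predictor evaluated for validation‐sample patients. The Python sketch below is illustrative, with hypothetical inputs; it is not the analysis code used for the measure.

import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

def validation_summaries(y_val: np.ndarray, linear_predictor: np.ndarray) -> dict:
    # linear_predictor: Z = Xb from the development model, evaluated on validation patients.
    p_hat = 1.0 / (1.0 + np.exp(-linear_predictor))
    c_statistic = roc_auc_score(y_val, p_hat)

    # Calibration/over-fitting indices: fit Logit(P(Y=1|Z)) = gamma0 + gamma1*Z;
    # gamma0 near 0 and gamma1 near 1 suggest little evidence of over-fitting.
    calib = sm.Logit(y_val, sm.add_constant(linear_predictor)).fit(disp=0)
    gamma0, gamma1 = calib.params
    return {"c_statistic": c_statistic, "gamma0": gamma0, "gamma1": gamma1}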

Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc., Cary, NC). We estimated the hierarchical models using the GLIMMIX procedure in SAS.

The Human Investigation Committee at the Yale University School of Medicine/Yale New Haven Hospital approved an exemption (HIC#0903004927) for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation

After exclusions were applied, the development sample included 150,035 admissions in 2008 at 4537 US hospitals (Figure 1). Factors that were most strongly associated with the risk of mortality included metastatic cancer (odds ratio [OR] 2.34), protein‐calorie malnutrition (OR 2.18), nonmetastatic cancers of the lung and upper digestive tract (OR 1.80), cardiorespiratory failure and shock (OR 1.60), and congestive heart failure (OR 1.34) (Table 2).

Figure 1
Model development and validation samples. Abbreviations: COPD, chronic obstructive pulmonary disease; FFS, Fee‐for‐Service. Exclusion categories are not mutually exclusive.

Model Performance, Validation, and Reliability

The model had a C statistic of 0.72, indicating good discrimination, and predicted mortality in the development sample ranged from 1.52% in the lowest decile to 23.74% in the highest. The model validation sample, using the remaining cases from 2008, included 149,646 admissions from 4535 hospitals. Variable frequencies and ORs were similar in both samples (Table 2). Model performance was also similar in the validation samples, with good model discrimination and fit (Table 3). Ten of 12 technical expert panel members responded to the survey, of whom 90% at least somewhat agreed with the statement, "The COPD mortality measure provides an accurate reflection of quality." When the model was applied to patients age 18 years and older in the 2006 California Patient Discharge Data, overall discrimination was good (C statistic, 0.74), including in those age 18 to 64 years (C statistic, 0.75) and those age 65 years and older (C statistic, 0.70).

Model Performance in Development and Validation Samples
Indices | Development Sample, 2008 | Validation Sample, 2008 | Data Year 2007 | Data Year 2009
  • NOTE: Abbreviations: ROC, receiver operating characteristic; SD, standard deviation. Over‐fitting indices (γ0, γ1) provide evidence of over‐fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities are p̂ = 1/(1 + exp{-Xb}), and Z = Xb (ie, the linear predictor, a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample: Logit(P(Y = 1|Z)) = γ0 + γ1Z. Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over‐fitting.

Number of admissions | 150,035 | 149,646 | 259,911 | 279,377
Number of hospitals | 4537 | 4535 | 4636 | 4571
Mean risk‐standardized mortality rate, % (SD) | 8.62 (0.94) | 8.64 (1.07) | 8.97 (1.12) | 8.08 (1.09)
Calibration (γ0, γ1) | 0.034, 0.985 | 0.009, 1.004 | 0.095, 1.022 | 0.120, 0.981
Discrimination, predictive ability (lowest decile %, highest decile %) | 1.52, 23.74 | 1.60, 23.78 | 1.54, 24.64 | 1.42, 22.36
Discrimination, area under the ROC curve (C statistic) | 0.720 | 0.723 | 0.728 | 0.722
Residuals lack of fit (Pearson residuals), % of admissions:
  Less than -2 | 0 | 0 | 0 | 0
  -2 to 0 | 91.14 | 91.4 | 91.08 | 91.93
  0 to 2 | 1.66 | 1.7 | 1.96 | 1.42
  2 or greater | 6.93 | 6.91 | 6.96 | 6.65
Model Wald χ2 (number of covariates) | 6982.11 (42) | 7051.50 (42) | 13042.35 (42) | 12542.15 (42)
P value | <0.0001 | <0.0001 | <0.0001 | <0.0001
Between‐hospital variance (standard error) | 0.067 (0.008) | 0.078 (0.009) | 0.067 (0.006) | 0.072 (0.006)

Reliability testing demonstrated consistent performance over several years. The frequency and ORs of the variables included in the model showed only minor changes over time. The area under the ROC curve (C statistic) was 0.73 for the model in the 2007 sample and 0.72 for the model using 2009 data (Table 3).

Hospital Risk‐Standardized Mortality Rates

The mean unadjusted hospital 30‐day mortality rate was 8.6% and ranged from 0% to 100% (Figure 2a). Risk‐standardized mortality rates varied across hospitals (Figure 2b). The mean risk‐standardized mortality rate was 8.6% and ranged from 5.9% to 13.5%. The odds of mortality at a hospital 1 standard deviation above average were 1.20 times those at a hospital 1 standard deviation below average.

Figure 2
(a) Distribution of hospital‐level 30‐day mortality rates and (b) hospital‐level 30‐day risk‐standardized mortality rates (2008 development sample; n = 150,035 admissions from 4537 hospitals). Abbreviations: COPD, chronic obstructive pulmonary disease.

DISCUSSION

We present a hospital‐level risk‐standardized mortality measure for patients admitted with COPD, based on administrative claims data, that is intended for public reporting and that has achieved endorsement by the National Quality Forum, a voluntary consensus standards‐setting organization. Across more than 4500 US hospitals, the mean 30‐day risk‐standardized mortality rate in 2008 was 8.6%, and we observed considerable variation across institutions, despite adjustment for case mix, suggesting that improvement by lower‐performing institutions may be an achievable goal.

Although improving the delivery of evidence‐based care processes and outcomes of patients with acute myocardial infarction, heart failure, and pneumonia has been the focus of national quality improvement efforts for more than a decade, COPD has largely been overlooked.[23] Within this context, this analysis represents the first attempt to systematically measure, at the hospital level, 30‐day all‐cause mortality for patients admitted to US hospitals for exacerbation of COPD. The model we have developed and validated is intended to be used to compare the performance of hospitals while controlling for differences in the pretreatment risk of mortality of patients and accounting for the clustering of patients within hospitals, and will facilitate surveillance of hospital‐level risk‐adjusted outcomes over time.

In contrast to process‐based measures of quality, such as the percentage of patients with pneumonia who receive appropriate antibiotic therapy, performance measures based on patient outcomes provide a more comprehensive view of care and are more consistent with patients' goals.[24] Additionally, it is well established that hospital performance on individual and composite process measures explains only a small amount of the observed variation in patient outcomes between institutions.[25] In this regard, outcome measures incorporate important, but difficult to measure aspects of care, such as diagnostic accuracy and timing, communication and teamwork, the recognition and response to complications, care coordination at the time of transfers between levels of care, and care settings. Nevertheless, when used for making inferences about the quality of hospital care, individual measures such as the risk‐standardized hospital mortality rate should be interpreted in the context of other performance measures, including readmission, patient experience, and costs of care.

A number of prior investigators have described the outcomes of care for patients hospitalized with exacerbations of COPD, including identifying risk factors for mortality. Patil et al. carried out an analysis of the 1996 Nationwide Inpatient Sample and described an overall in‐hospital mortality rate of 2.5% among patients with COPD, and reported that a multivariable model containing patient sociodemographic characteristics and comorbidities had an area under the ROC curve of 0.70.[3] In contrast, this hospital‐level measure includes patients with a principal diagnosis of respiratory failure and focuses on 30‐day rather than inpatient mortality, accounting for the nearly 3‐fold higher mortality rate we observed. In a more recent study that used clinical data from a large multistate database, Tabak et al. developed a prediction model for inpatient mortality for patients with COPD that contained only 4 factors: age, blood urea nitrogen, mental status, and pulse, and achieved an area under the ROC curve of 0.72.[4] The simplicity of such a model and its reliance on clinical measurements makes it particularly well suited for bedside application by clinicians, but less valuable for large‐scale public reporting programs that rely on administrative data. In the only other study identified that focused on the assessment of hospital mortality rates, Agabiti et al. analyzed the outcomes of 12,756 patients hospitalized for exacerbations of COPD, using ICD‐9‐CM diagnostic criteria similar to those in this study, at 21 hospitals in Rome, Italy.[26] They reported an average crude 30‐day mortality rate of 3.8% among a group of 5 benchmark hospitals and an average mortality of 7.5% (range, 5.2%‐17.2%) among the remaining institutions.

To put the variation we observed in mortality rates into a broader context, the relative difference in the risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure, whereas rates varied 30% for COPD, from 7.6% to 9.9%.[27] Model discrimination in COPD (C statistic, 0.72) was also similar to that reported for models used for public reporting of hospital mortality in acute myocardial infarction (C statistic, 0.71) and pneumonia (C statistic, 0.72).

This study has a number of important strengths. First, the model was developed from a large sample of recent Medicare claims, achieved good discrimination, and was validated in samples not limited to Medicare beneficiaries. Second, by including patients with a principal diagnosis of COPD, as well as those with a principal diagnosis of acute respiratory failure when accompanied by a secondary diagnosis of COPD with acute exacerbation, this model can be used to assess hospital performance across the full spectrum of disease severity. This broad set of ICD‐9‐CM codes used to define the cohort also ensures that efforts to measure hospital performance will be less influenced by differences in documentation and coding practices across hospitals relating to the diagnosis or sequencing of acute respiratory failure diagnoses. Moreover, the inclusion of patients with respiratory failure is important because these patients have the greatest risk of mortality, and are those in whom efforts to improve the quality and safety of care may have the greatest impact. Third, rather than relying solely on information documented during the index admission, we used ambulatory and inpatient claims from the full year prior to the index admission to identify comorbidities and to distinguish them from potential complications of care. Finally, we did not include factors such as hospital characteristics (eg, number of beds, teaching status) in the model. Although they might have improved overall predictive ability, the goal of the hospital mortality measure is to enable comparisons of mortality rates among hospitals while controlling for differences in patient characteristics. To the extent that factors such as size or teaching status might be independently associated with hospital outcomes, it would be inappropriate to adjust away their effects, because mortality risk should not be influenced by hospital characteristics other than through their effects on quality.

These results should be viewed in light of several limitations. First, we used ICD‐9‐CM codes derived from claims files to define the patient populations included in the measure rather than collecting clinical or physiologic information, such as the forced expiratory volume in 1 second or whether the patient required long‐term oxygen therapy, prospectively or through manual review of medical records. Nevertheless, we included a broad set of potential diagnosis codes to capture the full spectrum of COPD exacerbations and to minimize differences in coding across hospitals. Second, because the risk adjustment included diagnoses coded in the year prior to the index admission, it is potentially subject to bias due to regional differences in medical care utilization that are not driven by underlying differences in patient illness.[28] Third, using administrative claims data, we observed some paradoxical associations in the model that are difficult to explain on clinical grounds, such as a protective effect of substance and alcohol abuse or prior episodes of respiratory failure. Fourth, although we excluded patients from the analysis who were enrolled in hospice prior to, or on the day of, the index admission, we did not exclude those who chose to withdraw support, transitioned to comfort measures only, or enrolled in hospice care during a hospitalization. We do not seek to penalize hospitals for being sensitive to the preferences of patients at the end of life. At the same time, it is equally important that the measure is capable of detecting the outcomes of suboptimal care that may in some instances lead a patient or their family to withdraw support or choose hospice. Finally, we did not have the opportunity to validate the model against a clinical registry of patients with COPD, because such data do not currently exist. Nevertheless, the use of claims as a surrogate for chart data for risk adjustment has been validated for several conditions, including acute myocardial infarction, heart failure, and pneumonia.[29, 30]

CONCLUSIONS

Risk‐standardized 30‐day mortality rates for Medicare beneficiaries with COPD vary across hospitals in the US. Calculating and reporting hospital outcomes using validated performance measures may catalyze quality improvement activities and lead to better outcomes. Additional research would be helpful to confirm that hospitals with lower mortality rates deliver care that better meets the goals of patients and their families than do hospitals with higher mortality rates.

Acknowledgment

The authors thank the following members of the technical expert panel: Darlene Bainbridge, RN, MS, NHA, CPHQ, CPHRM, President/CEO, Darlene D. Bainbridge & Associates, Inc.; Robert A. Balk, MD, Director of Pulmonary and Critical Care Medicine, Rush University Medical Center; Dale Bratzler, DO, MPH, President and CEO, Oklahoma Foundation for Medical Quality; Scott Cerreta, RRT, Director of Education, COPD Foundation; Gerard J. Criner, MD, Director of Temple Lung Center and Divisions of Pulmonary and Critical Care Medicine, Temple University; Guy D'Andrea, MBA, President, Discern Consulting; Jonathan Fine, MD, Director of Pulmonary Fellowship, Research and Medical Education, Norwalk Hospital; David Hopkins, MS, PhD, Senior Advisor, Pacific Business Group on Health; Fred Martin Jacobs, MD, JD, FACP, FCCP, FCLM, Executive Vice President and Director, Saint Barnabas Quality Institute; Natalie Napolitano, MPH, RRT‐NPS, Respiratory Therapist, Inova Fairfax Hospital; Russell Robbins, MD, MBA, Principal and Senior Clinical Consultant, Mercer. In addition, the authors acknowledge and thank Angela Merrill, Sandi Nelson, Marian Wrobel, and Eric Schone from Mathematica Policy Research, Inc., Sharon‐Lise T. Normand from Harvard Medical School, and Lein Han and Michael Rapp at The Centers for Medicare & Medicaid Services for their contributions to this work.

Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number: HHSM‐5002008‐0025I/HHSM‐500‐T0001, Modification No. 000007, Option Year 2 Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views expressed in this manuscript are those of the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

References
  1. FASTSTATS—chronic lower respiratory disease. Available at: http://www.cdc.gov/nchs/fastats/copd.htm. Accessed September 18, 2010.
  2. National Heart, Lung and Blood Institute. Morbidity and mortality chartbook. Available at: http://www.nhlbi.nih.gov/resources/docs/cht‐book.htm. Accessed April 27, 2010.
  3. Patil SP, Krishnan JA, Lechtzin N, Diette GB. In‐hospital mortality following acute exacerbations of chronic obstructive pulmonary disease. Arch Intern Med. 2003;163(10):1180‐1186.
  4. Tabak YP, Sun X, Johannes RS, Gupta V, Shorr AF. Mortality and need for mechanical ventilation in acute exacerbations of chronic obstructive pulmonary disease: development and validation of a simple risk score. Arch Intern Med. 2009;169(17):1595‐1602.
  5. Lindenauer PK, Pekow P, Gao S, Crawford AS, Gutierrez B, Benjamin EM. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144(12):894‐903.
  6. Dransfield MT, Rowe SM, Johnson JE, Bailey WC, Gerald LB. Use of beta blockers and the risk of death in hospitalised patients with acute exacerbations of COPD. Thorax. 2008;63(4):301‐305.
  7. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP facts and figures: statistics on hospital‐based care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed August 6, 2012.
  8. Fruchter O, Yigla M. Predictors of long‐term survival in elderly patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Respirology. 2008;13(6):851‐855.
  9. Faustini A, Marino C, D'Ippoliti D, Forastiere F, Belleudi V, Perucci CA. The impact on risk‐factor analysis of different mortality outcomes in COPD patients. Eur Respir J. 2008;32(3):629‐636.
  10. Roberts CM, Lowe D, Bucknall CE, Ryland I, Kelly Y, Pearson MG. Clinical audit indicators of outcome following admission to hospital with acute exacerbation of chronic obstructive pulmonary disease. Thorax. 2002;57(2):137‐141.
  11. Mularski RA, Asch SM, Shrank WH, et al. The quality of obstructive lung disease care for adults in the United States as measured by adherence to recommended processes. Chest. 2006;130(6):1844‐1850.
  12. Bratzler DW, Oehlert WH, McAdams LM, Leon J, Jiang H, Piatt D. Management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: physician practices in the community hospital setting. J Okla State Med Assoc. 2004;97(6):227‐232.
  13. Corrigan J, Eden J, Smith B. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Washington, DC: National Academies Press; 2002.
  14. Patient Protection and Affordable Care Act [H.R. 3590], Pub. L. No. 111–148, §2702, 124 Stat. 119, 318–319 (March 23, 2010). Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/html/PLAW‐111publ148.htm. Accessed July 15, 2012.
  15. National Quality Forum. NQF Endorses Additional Pulmonary Measure. 2013. Available at: http://www.qualityforum.org/News_And_Resources/Press_Releases/2013/NQF_Endorses_Additional_Pulmonary_Measure.aspx. Accessed January 11, 2013.
  16. National Quality Forum. National voluntary consensus standards for patient outcomes: a consensus report. Washington, DC: National Quality Forum; 2011.
  17. The Measures Management System. The Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/MMS/index.html?redirect=/MMS/. Accessed August 6, 2012.
  18. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456‐462.
  19. Drye EE, Normand S‐LT, Wang Y, et al. Comparison of hospital risk‐standardized mortality rates calculated by using in‐hospital and 30‐day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012;156(1 pt 1):19‐26.
  20. Pope G, Ellis R, Ash A, et al. Diagnostic cost group hierarchical condition category models for Medicare risk adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc.; 2000. Available at: http://www.cms.gov/Research‐Statistics‐Data‐and‐Systems/Statistics‐Trends‐and‐Reports/Reports/downloads/pope_2000_2.pdf. Accessed November 7, 2009.
  21. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206‐226.
  22. Harrell FE, Shih Y‐CT. Using full probability models to compute probabilities of actual interest to decision makers. Int J Technol Assess Health Care. 2001;17(1):17‐26.
  23. Heffner JE, Mularski RA, Calverley PMA. COPD performance measures: missing opportunities for improving care. Chest. 2010;137(5):1181‐1189.
  24. Krumholz HM, Normand S‐LT, Spertus JA, Shahian DM, Bradley EH. Measuring performance for treating heart attacks and heart failure: the case for outcomes measurement. Health Aff. 2007;26(1):75‐85.
  25. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72‐78.
  26. Agabiti N, Belleudi V, Davoli M, et al. Profiling hospital performance to monitor the quality of care: the case of COPD. Eur Respir J. 2010;35(5):1031‐1038.
  27. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407‐413.
  28. Welch HG, Sharp SM, Gottlieb DJ, Skinner JS, Wennberg JE. Geographic variation in diagnosis frequency and risk of death among Medicare beneficiaries. JAMA. 2011;305(11):1113‐1118.
  29. Bratzler DW, Normand S‐LT, Wang Y, et al. An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
  30. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693‐1701.


Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number: HHSM‐5002008‐0025I/HHSM‐500‐T0001, Modification No. 000007, Option Year 2 Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views of the this manuscript represent the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

Chronic obstructive pulmonary disease (COPD) affects as many as 24 million individuals in the United States, is responsible for more than 700,000 annual hospital admissions, and is currently the nation's third leading cause of death, accounting for nearly $49.9 billion in medical spending in 2010.[1, 2] Reported in‐hospital mortality rates for patients hospitalized for exacerbations of COPD range from 2% to 5%.[3, 4, 5, 6, 7] Information about 30‐day mortality rates following hospitalization for COPD is more limited; however, international studies suggest that rates range from 3% to 9%,[8, 9] and 90‐day mortality rates exceed 15%.[10]

Despite this significant clinical and economic impact, there have been no large‐scale, sustained efforts to measure the quality or outcomes of hospital care for patients with COPD in the United States. What little is known about the treatment of patients with COPD suggests widespread opportunities to increase adherence to guideline‐recommended therapies, to reduce the use of ineffective treatments and tests, and to address variation in care across institutions.[5, 11, 12]

Public reporting of hospital performance is a key strategy for improving the quality and safety of hospital care, both in the United States and internationally.[13] Since 2007, the Centers for Medicare and Medicaid Services (CMS) has reported hospital mortality rates on the Hospital Compare Web site, and COPD is 1 of the conditions highlighted in the Affordable Care Act for future consideration.[14] Such initiatives rely on validated, risk‐adjusted performance measures for comparisons across institutions and to enable outcomes to be tracked over time. We present the development, validation, and results of a model intended for public reporting of risk‐standardized mortality rates for patients hospitalized with exacerbations of COPD that has been endorsed by the National Quality Forum.[15]

METHODS

Approach to Measure Development

We developed this measure in accordance with guidelines described by the National Quality Forum,[16] CMS' Measure Management System,[17] and the American Heart Association scientific statement, Standards for Statistical Models Used for Public Reporting of Health Outcomes.[18] Throughout the process we obtained expert clinical and stakeholder input through meetings with a clinical advisory group and a national technical expert panel (see Acknowledgments). Last, we presented the proposed measure specifications and a summary of the technical expert panel discussions online and made a widely distributed call for public comments. We took the comments into consideration during the final stages of measure development (available at https://www.cms.gov/MMS/17_CallforPublicComment.asp).

Data Sources

We used claims data from Medicare inpatient, outpatient, and carrier (physician) Standard Analytic Files from 2008 to develop and validate the model, and examined model reliability using data from 2007 and 2009. The Medicare enrollment database was used to determine Medicare Fee‐for‐Service enrollment and mortality.

Study Cohort

Admissions were considered eligible for inclusion if the patient was 65 years or older, was admitted to a nonfederal acute care hospital in the United States, and had a principal diagnosis of COPD or a principal diagnosis of acute respiratory failure or respiratory arrest when paired with a secondary diagnosis of COPD with exacerbation (Table 1).

ICD‐9‐CM Codes Used to Define the Measure Cohort
ICD-9-CM Code: Description
NOTE: Abbreviations: COPD, chronic obstructive pulmonary disease; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; NOS, not otherwise specified.
(a) Principal diagnosis when combined with a secondary diagnosis of acute exacerbation of COPD (491.21, 491.22, 493.21, or 493.22).

491.21: Obstructive chronic bronchitis; with (acute) exacerbation; acute exacerbation of COPD, decompensated COPD, decompensated COPD with exacerbation
491.22: Obstructive chronic bronchitis; with acute bronchitis
491.8: Other chronic bronchitis; chronic: tracheitis, tracheobronchitis
491.9: Unspecified chronic bronchitis
492.8: Other emphysema; emphysema (lung or pulmonary): NOS, centriacinar, centrilobular, obstructive, panacinar, panlobular, unilateral, vesicular; MacLeod's syndrome; Swyer-James syndrome; unilateral hyperlucent lung
493.20: Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, unspecified
493.21: Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with status asthmaticus
493.22: Chronic obstructive asthma; asthma with COPD, chronic asthmatic bronchitis, with (acute) exacerbation
496: Chronic: nonspecific lung disease, obstructive lung disease, obstructive pulmonary disease (COPD) NOS. (Note: This code is not to be used with any code from categories 491-493.)
518.81 (a): Other diseases of lung; acute respiratory failure; respiratory failure NOS
518.82 (a): Other diseases of lung; acute respiratory failure; other pulmonary insufficiency, acute respiratory distress
518.84 (a): Other diseases of lung; acute respiratory failure; acute and chronic respiratory failure
799.1 (a): Other ill-defined and unknown causes of morbidity and mortality; respiratory arrest, cardiorespiratory failure

If a patient was discharged and readmitted to a second hospital on the same or the next day, we combined the 2 acute care admissions into a single episode of care and assigned the mortality outcome to the first admitting hospital. We excluded admissions for patients who were enrolled in Medicare Hospice in the 12 months prior to or on the first day of the index hospitalization. An index admission was any eligible admission assessed in the measure for the outcome. We also excluded admissions for patients who were discharged against medical advice, those for whom vital status at 30 days was unknown or recorded inconsistently, and patients with unreliable data (eg, age >115 years). For patients with multiple hospitalizations during a single year, we randomly selected 1 admission per patient to avoid survival bias. Finally, to assure adequate risk adjustment we limited the analysis to patients who had continuous enrollment in Medicare Fee‐for‐Service Parts A and B for the 12 months prior to their index admission so that we could identify comorbid conditions coded during all prior encounters.
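These episode-combining and sampling rules are straightforward to express as a claims-processing step. Below is a minimal pandas sketch of the two rules, assuming a hypothetical admission-level extract with columns patient_id, hospital_id, admit_date, and discharge_date; it illustrates the logic only and is not the measure's actual code.

```python
import pandas as pd

# Hypothetical admission-level extract; column names are illustrative.
adm = pd.read_csv("admissions.csv", parse_dates=["admit_date", "discharge_date"])
adm = adm.sort_values(["patient_id", "admit_date"]).reset_index(drop=True)

# Rule 1: a readmission to a second hospital on the same or the next day is
# folded into the prior stay, with the outcome attributed to the first hospital.
prev = adm.groupby("patient_id").shift(1)
same_episode = (
    (adm["admit_date"] - prev["discharge_date"]).dt.days.between(0, 1)
    & (adm["hospital_id"] != prev["hospital_id"])
)
adm["episode_id"] = (~same_episode).cumsum()
episodes = adm.groupby("episode_id", as_index=False).agg(
    patient_id=("patient_id", "first"),
    hospital_id=("hospital_id", "first"),  # first admitting hospital keeps the outcome
    admit_date=("admit_date", "first"),
)

# Rule 2: randomly keep one eligible episode per patient to avoid survival bias.
index_admissions = (
    episodes.sample(frac=1.0, random_state=2008)
            .drop_duplicates(subset="patient_id")
)
```

In the full measure, the other exclusions described above (hospice enrollment, discharge against medical advice, unreliable data, and incomplete Fee-for-Service enrollment) would be applied before this step.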

Outcomes

The outcome of 30‐day mortality was defined as death from any cause within 30 days of the admission date for the index hospitalization. Mortality was assessed at 30 days to standardize the period of outcome ascertainment,[19] and because 30 days is a clinically meaningful time frame, during which differences in the quality of hospital care may be revealed.

Risk‐Adjustment Variables

We randomly selected half of all COPD admissions in 2008 that met the inclusion and exclusion criteria to create a model development sample. Candidate variables for inclusion in the risk‐standardized model were selected by a clinician team from diagnostic groups included in the Hierarchical Condition Category clinical classification system[20] and included age and comorbid conditions. Sleep apnea (International Classification of Diseases, 9th Revision, Clinical Modification [ICD‐9‐CM] condition codes 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, and 780.57) and mechanical ventilation (ICD‐9‐CM procedure codes 93.90, 96.70, 96.71, and 96.72) were also included as candidate variables.

We defined a condition as present for a given patient if it was coded in the inpatient, outpatient, or physician claims data sources in the preceding 12 months, including the index admission. Because a subset of the condition category variables can represent a complication of care, we did not consider them to be risk factors if they appeared only as secondary diagnosis codes for the index admission and not in claims submitted during the prior year.
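As a rough illustration of this look-back rule, the sketch below flags condition categories coded in any claim during the 12 months up to and including the index admission; the input layout and column names (patient_id, cc, claim_date, index_date) are assumptions for the example, not the measure's actual files.

```python
import pandas as pd

def comorbidity_flags(dx: pd.DataFrame) -> pd.DataFrame:
    """dx: one row per coded diagnosis, with hypothetical columns patient_id,
    cc (condition category), claim_date, and index_date."""
    in_window = (
        (dx["claim_date"] >= dx["index_date"] - pd.Timedelta(days=365))
        & (dx["claim_date"] <= dx["index_date"])
    )
    # A condition counts if coded in any inpatient, outpatient, or carrier claim
    # in the window. The measure additionally drops categories that could be
    # complications of care when they appear only as secondary diagnoses on the
    # index admission; that filter would be applied here before pivoting.
    return (dx[in_window]
            .assign(flag=1)
            .pivot_table(index="patient_id", columns="cc",
                         values="flag", aggfunc="max", fill_value=0))
```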

We selected final variables for inclusion in the risk‐standardized model based on clinical considerations and a modified approach to stepwise logistic regression. The final patient‐level risk‐adjustment model included 42 variables (Table 2).

Adjusted OR for Model Risk Factors and Mortality in Development Sample (Hierarchical Logistic Regression Model)
Variable | Development Sample (150,035 Admissions at 4537 Hospitals): Frequency %; OR (95% CI) | Validation Sample (149,646 Admissions at 4535 Hospitals): Frequency %; OR (95% CI)
NOTE: Abbreviations: CI, confidence interval; DM, diabetes mellitus; ICD-9-CM, International Classification of Diseases, 9th Revision, Clinical Modification; OR, odds ratio; CC, condition category.
(a) Indicates variable forced into the model.

Demographics
Age-65 years (continuous) | -; 1.03 (1.03-1.04) | -; 1.03 (1.03-1.04)

Cardiovascular/respiratory
Sleep apnea (ICD-9-CM: 327.20, 327.21, 327.23, 327.27, 327.29, 780.51, 780.53, 780.57) (a) | 9.57; 0.87 (0.81-0.94) | 9.72; 0.84 (0.78-0.90)
History of mechanical ventilation (ICD-9-CM: 93.90, 96.70, 96.71, 96.72) (a) | 6.00; 1.19 (1.11-1.27) | 6.00; 1.15 (1.08-1.24)
Respirator dependence/respiratory failure (CC 77-78) (a) | 1.15; 0.89 (0.77-1.02) | 1.20; 0.78 (0.68-0.91)
Cardiorespiratory failure and shock (CC 79) | 26.35; 1.60 (1.53-1.68) | 26.34; 1.59 (1.52-1.66)
Congestive heart failure (CC 80) | 41.50; 1.34 (1.28-1.39) | 41.39; 1.31 (1.25-1.36)
Chronic atherosclerosis (CC 83-84) (a) | 50.44; 0.87 (0.83-0.90) | 50.12; 0.91 (0.87-0.94)
Arrhythmias (CC 92-93) | 37.15; 1.17 (1.12-1.22) | 37.06; 1.15 (1.10-1.20)
Vascular or circulatory disease (CC 104-106) | 38.20; 1.09 (1.05-1.14) | 38.09; 1.02 (0.98-1.06)
Fibrosis of lung and other chronic lung disorder (CC 109) | 16.96; 1.08 (1.03-1.13) | 17.08; 1.11 (1.06-1.17)
Asthma (CC 110) | 17.05; 0.67 (0.63-0.70) | 16.90; 0.67 (0.63-0.70)
Pneumonia (CC 111-113) | 49.46; 1.29 (1.24-1.35) | 49.41; 1.27 (1.22-1.33)
Pleural effusion/pneumothorax (CC 114) | 11.78; 1.17 (1.11-1.23) | 11.54; 1.18 (1.12-1.25)
Other lung disorders (CC 115) | 53.07; 0.80 (0.77-0.83) | 53.17; 0.83 (0.80-0.87)

Other comorbid conditions
Metastatic cancer and acute leukemia (CC 7) | 2.76; 2.34 (2.14-2.56) | 2.79; 2.15 (1.97-2.35)
Lung, upper digestive tract, and other severe cancers (CC 8) (a) | 5.98; 1.80 (1.68-1.92) | 6.02; 1.98 (1.85-2.11)
Lymphatic, head and neck, brain, and other major cancers; breast, prostate, colorectal and other cancers and tumors; other respiratory and heart neoplasms (CC 9-11) | 14.13; 1.03 (0.97-1.08) | 14.19; 1.01 (0.95-1.06)
Other digestive and urinary neoplasms (CC 12) | 6.91; 0.91 (0.84-0.98) | 7.05; 0.85 (0.79-0.92)
Diabetes and DM complications (CC 15-20, 119-120) | 38.31; 0.91 (0.87-0.94) | 38.29; 0.91 (0.87-0.94)
Protein-calorie malnutrition (CC 21) | 7.40; 2.18 (2.07-2.30) | 7.44; 2.09 (1.98-2.20)
Disorders of fluid/electrolyte/acid-base (CC 22-23) | 32.05; 1.13 (1.08-1.18) | 32.16; 1.24 (1.19-1.30)
Other endocrine/metabolic/nutritional disorders (CC 24) | 67.99; 0.75 (0.72-0.78) | 67.88; 0.76 (0.73-0.79)
Other gastrointestinal disorders (CC 36) | 56.21; 0.81 (0.78-0.84) | 56.18; 0.78 (0.75-0.81)
Osteoarthritis of hip or knee (CC 40) | 9.32; 0.74 (0.69-0.79) | 9.33; 0.80 (0.74-0.85)
Other musculoskeletal and connective tissue disorders (CC 43) | 64.14; 0.83 (0.80-0.86) | 64.20; 0.83 (0.80-0.87)
Iron deficiency and other/unspecified anemias and blood disease (CC 47) | 40.80; 1.08 (1.04-1.12) | 40.72; 1.08 (1.04-1.13)
Dementia and senility (CC 49-50) | 17.06; 1.09 (1.04-1.14) | 16.97; 1.09 (1.04-1.15)
Drug/alcohol abuse, without dependence (CC 53) (a) | 23.51; 0.78 (0.75-0.82) | 23.38; 0.76 (0.72-0.80)
Other psychiatric disorders (CC 60) (a) | 16.49; 1.12 (1.07-1.18) | 16.43; 1.12 (1.06-1.17)
Quadriplegia, paraplegia, functional disability (CC 67-69, 100-102, 177-178) | 4.92; 1.03 (0.95-1.12) | 4.92; 1.08 (0.99-1.17)
Mononeuropathy, other neurological conditions/injuries (CC 76) | 11.35; 0.85 (0.80-0.91) | 11.28; 0.88 (0.83-0.93)
Hypertension and hypertensive disease (CC 90-91) | 80.40; 0.78 (0.75-0.82) | 80.35; 0.79 (0.75-0.83)
Stroke (CC 95-96) (a) | 6.77; 1.00 (0.93-1.08) | 6.73; 0.98 (0.91-1.05)
Retinal disorders, except detachment and vascular retinopathies (CC 121) | 10.79; 0.87 (0.82-0.93) | 10.69; 0.90 (0.85-0.96)
Other eye disorders (CC 124) (a) | 19.05; 0.90 (0.86-0.95) | 19.13; 0.98 (0.85-0.93)
Other ear, nose, throat, and mouth disorders (CC 127) | 35.21; 0.83 (0.80-0.87) | 35.02; 0.80 (0.77-0.83)
Renal failure (CC 131) (a) | 17.92; 1.12 (1.07-1.18) | 18.16; 1.13 (1.08-1.19)
Decubitus ulcer or chronic skin ulcer (CC 148-149) | 7.42; 1.27 (1.19-1.35) | 7.42; 1.33 (1.25-1.42)
Other dermatological disorders (CC 153) | 28.46; 0.90 (0.87-0.94) | 28.32; 0.89 (0.86-0.93)
Trauma (CC 154-156, 158-161) | 9.04; 1.09 (1.03-1.16) | 8.99; 1.15 (1.08-1.22)
Vertebral fractures (CC 157) | 5.01; 1.33 (1.24-1.44) | 4.97; 1.29 (1.20-1.39)
Major complications of medical care and trauma (CC 164) | 5.47; 0.81 (0.75-0.88) | 5.55; 0.82 (0.76-0.89)

Model Derivation

We used hierarchical logistic regression models to model the log‐odds of mortality as a function of patient‐level clinical characteristics and a random hospital‐level intercept. At the patient level, each model adjusts the log‐odds of mortality for age and the selected clinical covariates. The second level models the hospital‐specific intercepts as arising from a normal distribution. The hospital intercept represents the underlying risk of mortality, after accounting for patient risk. If there were no differences among hospitals, then after adjusting for patient risk, the hospital intercepts should be identical across all hospitals.
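In generic notation (our own, not taken from the measure documentation), the two-level model described above can be sketched as:

```latex
\[
\operatorname{logit}\,\Pr(Y_{ij}=1 \mid \mathbf{x}_{ij}, \alpha_i)
  = \alpha_i + \boldsymbol{\beta}^{\top}\mathbf{x}_{ij},
\qquad
\alpha_i \sim \mathcal{N}(\mu, \tau^{2}),
\]
```

where Y_ij is 30-day mortality for patient j at hospital i, x_ij are the patient-level covariates, alpha_i is the hospital-specific intercept, and tau^2 is the between-hospital variance reported in Table 3. If hospitals did not differ after patient-level adjustment, the alpha_i would be identical (tau^2 = 0).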

Estimation of Hospital Risk‐Standardized Mortality Rate

We calculated a risk‐standardized mortality rate, defined as the ratio of predicted to expected deaths (similar to observed‐to‐expected), multiplied by the national unadjusted mortality rate.[21] The expected number of deaths for each hospital was estimated by applying the estimated regression coefficients to the characteristics of each hospital's patients, adding the average of the hospital‐specific intercepts, transforming the data by using an inverse logit function, and summing the data from all patients in the hospital to obtain the count. The predicted number of deaths was calculated in the same way, substituting the hospital‐specific intercept for the average hospital‐specific intercept.
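A minimal numerical sketch of this predicted-to-expected calculation for a single hospital follows; all variable names are illustrative, and the coefficients and intercepts would come from the fitted hierarchical model.

```python
import numpy as np

def risk_standardized_rate(X, beta, alpha_hosp, alpha_bar, national_rate):
    """X: covariate matrix for one hospital's patients; beta: patient-level
    coefficients; alpha_hosp: that hospital's intercept; alpha_bar: average
    hospital intercept; national_rate: national unadjusted 30-day mortality."""
    inv_logit = lambda z: 1.0 / (1.0 + np.exp(-z))
    expected = inv_logit(X @ beta + alpha_bar).sum()    # deaths expected at an "average" hospital
    predicted = inv_logit(X @ beta + alpha_hosp).sum()  # deaths predicted with the hospital's own intercept
    return predicted / expected * national_rate
```

A ratio above 1 (a rate above the national rate) indicates more deaths than expected given the hospital's case mix.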

Model Performance, Validation, and Reliability Testing

We used the remaining admissions in 2008 as the model validation sample. We computed several summary statistics to assess the patient‐level model performance in both the development and validation samples,[22] including over‐fitting indices, predictive ability, area under the receiver operating characteristic (ROC) curve, distribution of residuals, and model χ2. In addition, we assessed face validity through a survey of members of the technical expert panel. To assess reliability of the model across data years, we repeated the modeling process using qualifying COPD admissions in both 2007 and 2009. Finally, to assess generalizability we evaluated the model's performance in an all‐payer sample of data from patients admitted to California hospitals in 2006.
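For example, the over-fitting (calibration) indices reported in Table 3 can be approximated by refitting a simple logistic model of the validation outcomes on the development-sample linear predictor; the sketch below ignores the hierarchical structure for simplicity and uses illustrative inputs.

```python
import statsmodels.api as sm

def overfitting_indices(X_val, y_val, beta):
    """Regress validation outcomes on the linear predictor Z = X_val @ beta
    from the development model; an intercept near 0 and a slope near 1
    suggest little over-fitting."""
    z = X_val @ beta
    fit = sm.Logit(y_val, sm.add_constant(z)).fit(disp=0)
    gamma0, gamma1 = fit.params
    return gamma0, gamma1
```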

Analyses were conducted using SAS version 9.1.3 (SAS Institute Inc., Cary, NC). We estimated the hierarchical models using the GLIMMIX procedure in SAS.

The Human Investigation Committee at the Yale University School of Medicine/Yale New Haven Hospital approved an exemption (HIC#0903004927) for the authors to use CMS claims and enrollment data for research analyses and publication.

RESULTS

Model Derivation

After exclusions were applied, the development sample included 150,035 admissions in 2008 at 4537 US hospitals (Figure 1). Factors that were most strongly associated with the risk of mortality included metastatic cancer (odds ratio [OR] 2.34), protein-calorie malnutrition (OR 2.18), nonmetastatic cancers of the lung and upper digestive tract (OR 1.80), cardiorespiratory failure and shock (OR 1.60), and congestive heart failure (OR 1.34) (Table 2).

Figure 1
Model development and validation samples. Abbreviations: COPD, chronic obstructive pulmonary disease; FFS, Fee‐for‐Service. Exclusion categories are not mutually exclusive.

Model Performance, Validation, and Reliability

The model had a C statistic of 0.72, indicating good discrimination, and predicted mortality in the development sample ranged from 1.52% in the lowest decile to 23.74% in the highest. The model validation sample, using the remaining cases from 2008, included 149,646 admissions from 4535 hospitals. Variable frequencies and ORs were similar in both samples (Table 2). Model performance was also similar in the validation sample, with good model discrimination and fit (Table 3). Ten of 12 technical expert panel members responded to the survey, of whom 90% at least somewhat agreed with the statement, "The COPD mortality measure provides an accurate reflection of quality." When the model was applied to patients age 18 years and older in the 2006 California Patient Discharge Data, overall discrimination was good (C statistic, 0.74), including among those age 18 to 64 years (C statistic, 0.75) and those age 65 years and older (C statistic, 0.70).

Model Performance in Development and Validation Samples
Indices | Development Sample, 2008 | Validation Sample, 2008 | Data Year 2007 | Data Year 2009
NOTE: Abbreviations: ROC, receiver operating characteristic; SD, standard deviation. Over-fitting indices (γ0, γ1) provide evidence of over-fitting and require several steps to calculate. Let b denote the estimated vector of regression coefficients. Predicted probabilities are p̂ = 1/(1 + exp{-Xb}), and Z = Xb (eg, the linear predictor that is a scalar value for each patient). A new logistic regression model that includes only an intercept and a slope, obtained by regressing the logits on Z, is fitted in the validation sample (eg, Logit(P(Y=1|Z)) = γ0 + γ1Z). Estimated values of γ0 far from 0 and estimated values of γ1 far from 1 provide evidence of over-fitting.

Number of admissions | 150,035 | 149,646 | 259,911 | 279,377
Number of hospitals | 4537 | 4535 | 4636 | 4571
Mean risk-standardized mortality rate, % (SD) | 8.62 (0.94) | 8.64 (1.07) | 8.97 (1.12) | 8.08 (1.09)
Calibration (γ0, γ1) | 0.034, 0.985 | 0.009, 1.004 | 0.095, 1.022 | 0.120, 0.981
Discrimination, predictive ability (lowest decile %-highest decile %) | 1.52-23.74 | 1.60-23.78 | 1.54-24.64 | 1.42-22.36
Discrimination, area under the ROC curve (C statistic) | 0.720 | 0.723 | 0.728 | 0.722
Residuals lack of fit (Pearson residuals), % falling in:
  <-2 | 0 | 0 | 0 | 0
  -2, 0 | 91.14 | 91.4 | 91.08 | 91.93
  0, 2 | 1.66 | 1.7 | 1.96 | 1.42
  2+ | 6.93 | 6.91 | 6.96 | 6.65
Model Wald χ2 (number of covariates) | 6982.11 (42) | 7051.50 (42) | 13042.35 (42) | 12542.15 (42)
P value | <0.0001 | <0.0001 | <0.0001 | <0.0001
Between-hospital variance (standard error) | 0.067 (0.008) | 0.078 (0.009) | 0.067 (0.006) | 0.072 (0.006)

Reliability testing demonstrated consistent performance over several years. The frequency and ORs of the variables included in the model showed only minor changes over time. The area under the ROC curve (C statistic) was 0.73 for the model in the 2007 sample and 0.72 for the model using 2009 data (Table 3).

Hospital Risk‐Standardized Mortality Rates

The mean unadjusted hospital 30‐day mortality rate was 8.6% and ranged from 0% to 100% (Figure 2a). Risk‐standardized mortality rates varied across hospitals (Figure 2b). The mean risk‐standardized mortality rate was 8.6% and ranged from 5.9% to 13.5%. The odds of mortality at a hospital 1 standard deviation above average was 1.20 times that of a hospital 1 standard deviation below average.

Figure 2
(a) Distribution of hospital‐level 30‐day mortality rates and (b) hospital‐level 30‐day risk‐standardized mortality rates (2008 development sample; n = 150,035 admissions from 4537 hospitals). Abbreviations: COPD, chronic obstructive pulmonary disease.

DISCUSSION

We present a hospital-level risk-standardized mortality measure for patients admitted with COPD, based on administrative claims data, that is intended for public reporting and has been endorsed by the National Quality Forum, a voluntary consensus standards-setting organization. Across more than 4500 US hospitals, the mean 30-day risk-standardized mortality rate in 2008 was 8.6%, and we observed considerable variation across institutions despite adjustment for case mix, suggesting that improvement by lower-performing institutions may be an achievable goal.

Although improving the delivery of evidence‐based care processes and outcomes of patients with acute myocardial infarction, heart failure, and pneumonia has been the focus of national quality improvement efforts for more than a decade, COPD has largely been overlooked.[23] Within this context, this analysis represents the first attempt to systematically measure, at the hospital level, 30‐day all‐cause mortality for patients admitted to US hospitals for exacerbation of COPD. The model we have developed and validated is intended to be used to compare the performance of hospitals while controlling for differences in the pretreatment risk of mortality of patients and accounting for the clustering of patients within hospitals, and will facilitate surveillance of hospital‐level risk‐adjusted outcomes over time.

In contrast to process-based measures of quality, such as the percentage of patients with pneumonia who receive appropriate antibiotic therapy, performance measures based on patient outcomes provide a more comprehensive view of care and are more consistent with patients' goals.[24] Additionally, it is well established that hospital performance on individual and composite process measures explains only a small amount of the observed variation in patient outcomes between institutions.[25] In this regard, outcome measures incorporate important but difficult-to-measure aspects of care, such as diagnostic accuracy and timing, communication and teamwork, the recognition of and response to complications, and care coordination at the time of transfers between levels of care and care settings. Nevertheless, when used for making inferences about the quality of hospital care, individual measures such as the risk-standardized hospital mortality rate should be interpreted in the context of other performance measures, including readmission, patient experience, and costs of care.

A number of prior investigators have described the outcomes of care for patients hospitalized with exacerbations of COPD, including identifying risk factors for mortality. Patil et al. carried out an analysis of the 1996 Nationwide Inpatient Sample and described an overall in-hospital mortality rate of 2.5% among patients with COPD, and reported that a multivariable model containing sociodemographic characteristics about the patient and comorbidities had an area under the ROC curve of 0.70.[3] In contrast, this hospital-level measure includes patients with a principal diagnosis of respiratory failure and focuses on 30-day rather than inpatient mortality, accounting for the nearly 3-fold higher mortality rate we observed. In a more recent study that used clinical data from a large multistate database, Tabak et al. developed a prediction model for inpatient mortality for patients with COPD that contained only 4 factors: age, blood urea nitrogen, mental status, and pulse, and achieved an area under the ROC curve of 0.72.[4] The simplicity of such a model and its reliance on clinical measurements makes it particularly well suited for bedside application by clinicians, but less valuable for large-scale public reporting programs that rely on administrative data. In the only other study identified that focused on the assessment of hospital mortality rates, Agabiti et al. analyzed the outcomes of 12,756 patients hospitalized for exacerbations of COPD, using similar ICD-9-CM diagnostic criteria as in this study, at 21 hospitals in Rome, Italy.[26] They reported an average crude 30-day mortality rate of 3.8% among a group of 5 benchmark hospitals and an average mortality of 7.5% (range, 5.2%-17.2%) among the remaining institutions.

To put the variation we observed in mortality rates into a broader context, the relative difference in the risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure, whereas rates varied 30% for COPD, from 7.6% to 9.9%.[27] Model discrimination in COPD (C statistic, 0.72) was also similar to that reported for models used for public reporting of hospital mortality in acute myocardial infarction (C statistic, 0.71) and pneumonia (C statistic, 0.72).

This study has a number of important strengths. First, the model was developed from a large sample of recent Medicare claims, achieved good discrimination, and was validated in samples not limited to Medicare beneficiaries. Second, by including patients with a principal diagnosis of COPD, as well as those with a principal diagnosis of acute respiratory failure when accompanied by a secondary diagnosis of COPD with acute exacerbation, this model can be used to assess hospital performance across the full spectrum of disease severity. This broad set of ICD‐9‐CM codes used to define the cohort also ensures that efforts to measure hospital performance will be less influenced by differences in documentation and coding practices across hospitals relating to the diagnosis or sequencing of acute respiratory failure diagnoses. Moreover, the inclusion of patients with respiratory failure is important because these patients have the greatest risk of mortality, and are those in whom efforts to improve the quality and safety of care may have the greatest impact. Third, rather than relying solely on information documented during the index admission, we used ambulatory and inpatient claims from the full year prior to the index admission to identify comorbidities and to distinguish them from potential complications of care. Finally, we did not include factors such as hospital characteristics (eg, number of beds, teaching status) in the model. Although they might have improved overall predictive ability, the goal of the hospital mortality measure is to enable comparisons of mortality rates among hospitals while controlling for differences in patient characteristics. To the extent that factors such as size or teaching status might be independently associated with hospital outcomes, it would be inappropriate to adjust away their effects, because mortality risk should not be influenced by hospital characteristics other than through their effects on quality.

These results should be viewed in light of several limitations. First, we used ICD-9-CM codes derived from claims files to define the patient populations included in the measure rather than collecting clinical or physiologic information prospectively or through manual review of medical records, such as the forced expiratory volume in 1 second or whether the patient required long-term oxygen therapy. Nevertheless, we included a broad set of potential diagnosis codes to capture the full spectrum of COPD exacerbations and to minimize differences in coding across hospitals. Second, because the risk adjustment included diagnoses coded in the year prior to the index admission, it is potentially subject to bias due to regional differences in medical care utilization that are not driven by underlying differences in patient illness.[28] Third, using administrative claims data, we observed some paradoxical associations in the model that are difficult to explain on clinical grounds, such as a protective effect of substance and alcohol abuse or prior episodes of respiratory failure. Fourth, although we excluded patients from the analysis who were enrolled in hospice prior to, or on the day of, the index admission, we did not exclude those who chose to withdraw support, transitioned to comfort measures only, or enrolled in hospice care during the hospitalization. We do not seek to penalize hospitals for being sensitive to the preferences of patients at the end of life. At the same time, it is equally important that the measure is capable of detecting the outcomes of suboptimal care that may in some instances lead a patient or their family to withdraw support or choose hospice. Finally, we did not have the opportunity to validate the model against a clinical registry of patients with COPD, because such data do not currently exist. Nevertheless, the use of claims as a surrogate for chart data for risk adjustment has been validated for several conditions, including acute myocardial infarction, heart failure, and pneumonia.[29, 30]

CONCLUSIONS

Risk‐standardized 30‐day mortality rates for Medicare beneficiaries with COPD vary across hospitals in the US. Calculating and reporting hospital outcomes using validated performance measures may catalyze quality improvement activities and lead to better outcomes. Additional research would be helpful to confirm that hospitals with lower mortality rates achieve care that meets the goals of patients and their families better than at hospitals with higher mortality rates.

Acknowledgment

The authors thank the following members of the technical expert panel: Darlene Bainbridge, RN, MS, NHA, CPHQ, CPHRM, President/CEO, Darlene D. Bainbridge & Associates, Inc.; Robert A. Balk, MD, Director of Pulmonary and Critical Care Medicine, Rush University Medical Center; Dale Bratzler, DO, MPH, President and CEO, Oklahoma Foundation for Medical Quality; Scott Cerreta, RRT, Director of Education, COPD Foundation; Gerard J. Criner, MD, Director of Temple Lung Center and Divisions of Pulmonary and Critical Care Medicine, Temple University; Guy D'Andrea, MBA, President, Discern Consulting; Jonathan Fine, MD, Director of Pulmonary Fellowship, Research and Medical Education, Norwalk Hospital; David Hopkins, MS, PhD, Senior Advisor, Pacific Business Group on Health; Fred Martin Jacobs, MD, JD, FACP, FCCP, FCLM, Executive Vice President and Director, Saint Barnabas Quality Institute; Natalie Napolitano, MPH, RRT‐NPS, Respiratory Therapist, Inova Fairfax Hospital; Russell Robbins, MD, MBA, Principal and Senior Clinical Consultant, Mercer. In addition, the authors acknowledge and thank Angela Merrill, Sandi Nelson, Marian Wrobel, and Eric Schone from Mathematica Policy Research, Inc., Sharon‐Lise T. Normand from Harvard Medical School, and Lein Han and Michael Rapp at The Centers for Medicare & Medicaid Services for their contributions to this work.

Disclosures

Peter K. Lindenauer, MD, MSc, is the guarantor of this article, taking responsibility for the integrity of the work as a whole, from inception to published article, and takes responsibility for the content of the manuscript, including the data and data analysis. All authors have made substantial contributions to the conception and design, or acquisition of data, or analysis and interpretation of data; have drafted the submitted article or revised it critically for important intellectual content; and have provided final approval of the version to be published. Preparation of this manuscript was completed under Contract Number: HHSM-5002008-0025I/HHSM-500-T0001, Modification No. 000007, Option Year 2 Measure Instrument Development and Support (MIDS). Sponsors did not contribute to the development of the research or manuscript. Dr. Au reports being an unpaid research consultant for Bosch Inc. He receives research funding from the NIH, Department of Veterans Affairs, AHRQ, and Gilead Sciences. The views expressed in this manuscript are those of the authors and do not necessarily represent those of the Department of Veterans Affairs. Drs. Drye and Bernheim report receiving contract funding from CMS to develop and maintain quality measures.

References
  1. FASTSTATS—chronic lower respiratory disease. Available at: http://www.cdc.gov/nchs/fastats/copd.htm. Accessed September 18, 2010.
  2. National Heart, Lung and Blood Institute. Morbidity and mortality chartbook. Available at: http://www.nhlbi.nih.gov/resources/docs/cht‐book.htm. Accessed April 27, 2010.
  3. Patil SP, Krishnan JA, Lechtzin N, Diette GB. In‐hospital mortality following acute exacerbations of chronic obstructive pulmonary disease. Arch Intern Med. 2003;163(10):11801186.
  4. Tabak YP, Sun X, Johannes RS, Gupta V, Shorr AF. Mortality and need for mechanical ventilation in acute exacerbations of chronic obstructive pulmonary disease: development and validation of a simple risk score. Arch Intern Med. 2009;169(17):15951602.
  5. Lindenauer PK, Pekow P, Gao S, Crawford AS, Gutierrez B, Benjamin EM. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144(12):894903.
  6. Dransfield MT, Rowe SM, Johnson JE, Bailey WC, Gerald LB. Use of beta blockers and the risk of death in hospitalised patients with acute exacerbations of COPD. Thorax. 2008;63(4):301305.
  7. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP facts and figures: statistics on hospital‐based care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed August 6, 2012.
  8. Fruchter O, Yigla M. Predictors of long‐term survival in elderly patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Respirology. 2008;13(6):851855.
  9. Faustini A, Marino C, D'Ippoliti D, Forastiere F, Belleudi V, Perucci CA. The impact on risk‐factor analysis of different mortality outcomes in COPD patients. Eur Respir J 2008;32(3):629636.
  10. Roberts CM, Lowe D, Bucknall CE, Ryland I, Kelly Y, Pearson MG. Clinical audit indicators of outcome following admission to hospital with acute exacerbation of chronic obstructive pulmonary disease. Thorax. 2002;57(2):137141.
  11. Mularski RA, Asch SM, Shrank WH, et al. The quality of obstructive lung disease care for adults in the United States as measured by adherence to recommended processes. Chest. 2006;130(6):18441850.
  12. Bratzler DW, Oehlert WH, McAdams LM, Leon J, Jiang H, Piatt D. Management of acute exacerbations of chronic obstructive pulmonary disease in the elderly: physician practices in the community hospital setting. J Okla State Med Assoc. 2004;97(6):227232.
  13. Corrigan J, Eden J, Smith B. Leadership by Example: Coordinating Government Roles in Improving Health Care Quality. Washington, DC: National Academies Press; 2002.
  14. Patient Protection and Affordable Care Act [H.R. 3590], Pub. L. No. 111–148, §2702, 124 Stat. 119, 318–319 (March 23, 2010). Available at: http://www.gpo.gov/fdsys/pkg/PLAW‐111publ148/html/PLAW‐111publ148.htm. Accessed July 15, 2012.
  15. National Quality Forum. NQF Endorses Additional Pulmonary Measure. 2013. Available at: http://www.qualityforum.org/News_And_Resources/Press_Releases/2013/NQF_Endorses_Additional_Pulmonary_Measure.aspx. Accessed January 11, 2013.
  16. National Quality Forum. National voluntary consensus standards for patient outcomes: a consensus report. Washington, DC: National Quality Forum; 2011.
  17. The Measures Management System. The Centers for Medicare and Medicaid Services. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/MMS/index.html?redirect=/MMS/. Accessed August 6, 2012.
  18. Krumholz HM, Brindis RG, Brush JE, et al. Standards for statistical models used for public reporting of health outcomes: an American Heart Association Scientific Statement from the Quality of Care and Outcomes Research Interdisciplinary Writing Group: cosponsored by the Council on Epidemiology and Prevention and the Stroke Council. Endorsed by the American College of Cardiology Foundation. Circulation. 2006;113(3):456462.
  19. Drye EE, Normand S‐LT, Wang Y, et al. Comparison of hospital risk‐standardized mortality rates calculated by using in‐hospital and 30‐day models: an observational study with implications for hospital profiling. Ann Intern Med. 2012;156(1 pt 1):1926.
  20. Pope G, Ellis R, Ash A, et al. Diagnostic cost group hierarchical condition category models for Medicare risk adjustment. Report prepared for the Health Care Financing Administration. Health Economics Research, Inc.; 2000. Available at: http://www.cms.gov/Research‐Statistics‐Data‐and‐Systems/Statistics‐Trends‐and‐Reports/Reports/downloads/pope_2000_2.pdf. Accessed November 7, 2009.
  21. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206226.
  22. Harrell FE, Shih Y‐CT. Using full probability models to compute probabilities of actual interest to decision makers. Int J Technol Assess Health Care. 2001;17(1):1726.
  23. Heffner JE, Mularski RA, Calverley PMA. COPD performance measures: missing opportunities for improving care. Chest. 2010;137(5):11811189.
  24. Krumholz HM, Normand S‐LT, Spertus JA, Shahian DM, Bradley EH. Measuring Performance For Treating Heart Attacks And Heart Failure: The Case For Outcomes Measurement. Health Aff. 2007;26(1):7585.
  25. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):7278.
  26. Agabiti N, Belleudi V, Davoli M, et al. Profiling hospital performance to monitor the quality of care: the case of COPD. Eur Respir J. 2010;35(5):10311038.
  27. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407413.
  28. Welch HG, Sharp SM, Gottlieb DJ, Skinner JS, Wennberg JE. Geographic variation in diagnosis frequency and risk of death among Medicare beneficiaries. JAMA. 2011;305(11):11131118.
  29. Bratzler DW, Normand S‐LT, Wang Y, et al. An administrative claims model for profiling hospital 30‐day mortality rates for pneumonia patients. PLoS ONE. 2011;6(4):e17401.
  30. Krumholz HM, Wang Y, Mattera JA, et al. An Administrative Claims Model Suitable for Profiling Hospital Performance Based on 30‐Day Mortality Rates Among Patients With Heart Failure. Circulation. 2006;113(13):16931701.
Issue
Journal of Hospital Medicine - 8(8)
Page Number
428-435
Publications
Article Type
Display Headline
Development, validation, and results of a risk-standardized measure of hospital 30-day mortality for patients with exacerbation of chronic obstructive pulmonary disease
Sections
Article Source

Copyright © 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Peter K. Lindenauer, MD, MSc, Baystate Medical Center, Center for Quality of Care Research, 759 Chestnut St., Springfield, MA 01199; Telephone: 413–794‐5987; Fax: 413–794–8866; E‐mail: peter.lindenauer@bhs.org

Readmission and Mortality [Rates] in Pneumonia

Article Type
Changed
Sun, 05/28/2017 - 20:18
Display Headline
The performance of US hospitals as reflected in risk-standardized 30-day mortality and readmission rates for Medicare beneficiaries with pneumonia

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002 the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia-specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence) can provide insights about quality and can help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.7-9

In 2008 CMS expanded its public reporting initiatives by adding risk-standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30-day risk-standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30-day risk-standardized mortality and readmission for acute myocardial infarction and heart failure; the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.

The model for mortality has a c‐statistic of 0.72, whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63, whereas the corresponding medical record model had a c‐statistic of 0.59. The mortality and readmission models produce state‐level mortality and readmission rate estimates similar to those of the models derived from medical record review and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically, we assigned each hospital in our analysis to 1 of the 306 hospital referral regions defined by the Dartmouth Atlas of Health Care project. Unlike individual hospitals, hospital referral regions represent regional markets for tertiary care; they are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes, and they provide a more detailed picture of variation in outcomes than state‐level results.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians, and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate; hospitals with larger sample sizes, and therefore more precise estimates, thus contribute more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to account for the greater uncertainty in the true rates of hospitals with small caseloads. With this technique, estimated rates at low‐volume institutions are shrunk toward the population mean, whereas hospitals with large caseloads undergo relatively less shrinkage, so their estimates remain closer to the observed rates.18
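
As a minimal numerical sketch of these two steps (with made‐up rates and variances, not study data), the Python fragment below first forms an inverse‐variance weighted regional mean and then applies a simple empirical‐Bayes‐style shrinkage of each hospital's rate toward an assumed population mean; the between‐hospital variance tau2 is likewise an assumed value for illustration.

# Three hospitals in one referral region: estimated rates (%) and the
# variances of those estimates (larger variance = smaller caseload).
rates     = [10.5, 12.0, 14.0]
variances = [0.20, 0.90, 2.50]

# Inverse-variance weighted regional mean: precise estimates count for more.
weights = [1.0 / v for v in variances]
regional_mean = sum(w * r for w, r in zip(weights, rates)) / sum(weights)
print(round(regional_mean, 2))

# Shrinkage toward an assumed population mean: noisier estimates are pulled harder.
pop_mean, tau2 = 11.2, 1.4
for r, v in zip(rates, variances):
    lam = tau2 / (tau2 + v)          # weight given to the hospital's own estimate
    print(round(lam * r + (1.0 - lam) * pop_mean, 2))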

To determine whether a hospital's performance is significantly different from the national rate, we assessed whether the 95% interval estimate for the risk‐standardized rate overlapped the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period are excluded from this categorization on Hospital Compare.
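
A minimal sketch of this categorization rule, assuming the 95% interval estimate and case count for each hospital are already in hand (the function and argument names are ours, not Hospital Compare's):

def categorize(interval_low, interval_high, national_rate, n_cases, min_cases=25):
    """Classify a hospital by whether its 95% interval estimate for the
    risk-standardized rate overlaps the national crude rate."""
    if n_cases < min_cases:
        return "not categorized (fewer than 25 cases)"
    if interval_high < national_rate:
        return "better than the US national rate"   # lower mortality/readmission is better
    if interval_low > national_rate:
        return "worse than the US national rate"
    return "no different than the US national rate"

# Example: an interval lying entirely above the national rate is classified as worse.
print(categorize(12.1, 14.0, national_rate=11.6, n_cases=180))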

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis, 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), with a range from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0%, whereas the rate for hospitals at the 90th percentile of performance was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital one standard deviation above the national average were 1.68 times higher than those of a patient treated at a hospital one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Table 1. Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates

                                              Mortality        Readmission
Patients (n)                                  1,118,583        1,161,817
Hospitals (n)                                 4,788            4,813
Patient age, years, median (Q1, Q3)           81 (74, 86)      80 (74, 86)
Nonwhite, %                                   11.1             11.1
Hospital case volume, median (Q1, Q3)         168 (77, 323)    174 (79, 334)
Risk‐standardized hospital rate, mean (SD)    11.2 (1.2)       18.3 (0.9)
  Minimum                                     6.7              13.6
  1st percentile                              7.5              14.9
  5th percentile                              8.5              15.8
  10th percentile                             9.0              16.4
  25th percentile                             10.0             17.2
  Median                                      11.1             18.2
  75th percentile                             12.3             19.2
  90th percentile                             13.5             20.4
  95th percentile                             14.4             21.1
  99th percentile                             16.1             22.8
  Maximum                                     20.9             26.7
Model fit statistics
  c‐Statistic                                 0.72             0.63
  Intrahospital correlation                   0.07             0.03

Abbreviation: SD, standard deviation.

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse, and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis, 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1, Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%), with a range from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4%, whereas the rate for hospitals at the 90th percentile of performance was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital one standard deviation above the national average were 1.40 times higher than the odds for a patient treated at a hospital one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse, and 333 (7.0%) had fewer than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more across hospitals. For example, the top 10% of hospitals had a mortality rate that was 33% lower, in relative terms, than that of the bottom 10%, compared with only a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that, unlike the standard deviation, is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.
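
These coefficients of variation follow directly from the means and standard deviations of the hospital rates reported in Table 1:

\[
\mathrm{CV} = \frac{\sigma}{\mu} \times 100: \qquad
\mathrm{CV}_{\mathrm{mortality}} = \frac{1.2}{11.2} \times 100 \approx 10.7, \qquad
\mathrm{CV}_{\mathrm{readmission}} = \frac{0.9}{18.3} \times 100 \approx 4.9 .
\]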

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. The highest‐mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, the East and West South Central, the East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, the East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSRR, risk‐standardized readmission rate.

Readmission rates were higher in the eastern portion of the US (including the Northeast, the Mid and South Atlantic, and the East South Central regions), as well as in the East North Central and small parts of the West North Central portions of the Midwest and in central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central), and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes, we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably, we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low‐performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in hospital payment related to readmissions will alter incentives for hospitals and physicians and may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours) and who were vaccinated prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage of patients receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that measurement of care outcomes is an important complement to process measurement: outcomes represent a more holistic assessment of care, an outcomes focus offers hospitals greater autonomy in deciding which processes to improve, and outcomes are ultimately more meaningful to patients than the technical aspects of how those outcomes were achieved. Compared with the differences reported in these earlier process‐oriented studies, however, the differences we observed in mortality and readmission rates across hospitals were not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in hospital and regional mortality and readmission rates.23 The relative difference in risk‐standardized hospital mortality rates between the 10th and 90th percentiles of hospital performance was 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the corresponding difference for pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data rather than clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission, and our mortality and readmission models were validated against models based on medical record data, with highly correlated outputs from the 2 approaches.15, 24, 25 Third, our study was restricted to patients with a principal diagnosis of pneumonia and therefore did not include those whose principal diagnosis was sepsis or respiratory failure with a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, it likely limited our study to patients with less severe disease and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. Fourth, while we excluded patients with a length of stay of less than 1 day from the mortality analysis to reduce the risk of including patients who did not actually have pneumonia, we did not exclude them from the readmission analysis because a very short length of stay may be a risk factor for readmission. An additional limitation is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed; for example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions. However, if the experience in acute myocardial infarction can serve as a guide, it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates showed stronger geographic concentration than mortality rates; it is possible that this is related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the quality they are intended to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research, and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE, for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
  1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital‐Based Care in the United States, 2007. 2009. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed June 2010.
  2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP). 2007. Available at: http://www.hcup‐us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
  3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988-2002. JAMA. 2005;294(21):2712-2719.
  4. Heron M. Deaths: leading causes for 2006. Natl Vital Stat Rep. 2010;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
  5. Centers for Medicare and Medicaid Services. Pneumonia. [cited 2010 May 13]. Available at: http://www.qualitynet.org/dcs/ContentServer?cid=108981596702326(1):7585.
  6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182-189.
  7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
  8. Medicare.gov - Hospital Compare. [cited 2009 Nov 6]. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default. Accessed June 2010.
  9. Krumholz H, Normand S, Bratzler D, et al. Risk‐Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30‐Day Mortality Model for Pneumonia. Yale University; 2006. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page.
  10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
  11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
  12. Patient Protection and Affordable Care Act. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
  13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670-1676.
  14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals - the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
  15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
  16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
  17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
  18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72-78.
  19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995.
  20. Thomas JW, Hofer TP. Research evidence on the validity of risk‐adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371-404.
  21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
  22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153-157.
  23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Unlike a hospital‐level analysis, the hospital referral regions represent regional markets for tertiary care and are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes and provide a more detailed look at variation in outcomes than results at the state level.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18

To determine whether a hospital's performance is significantly different than the national rate we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period, are excluded from this categorization on Hospital Compare.

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0% while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital that was one standard deviation above the national average was 1.68 times higher than that of a patient treated at a hospital that was one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates
 MortalityReadmission
  • Abbreviation: SD, standard deviation.

Patients (n)11185831161817
Hospitals (n)47884813
Patient age, years, median (Q1, Q3)81 (74,86)80 (74,86)
Nonwhite, %11.111.1
Hospital case volume, median (Q1, Q3)168 (77,323)174 (79,334)
Risk‐standardized hospital rate, mean (SD)11.2 (1.2)18.3 (0.9)
Minimum6.713.6
1st percentile7.514.9
5th percentile8.515.8
10th percentile9.016.4
25th percentile10.017.2
Median11.118.2
75th percentile12.319.2
90th percentile13.520.4
95th percentile14.421.1
99th percentile16.122.8
Maximum20.926.7
Model fit statistics  
c‐Statistic0.720.63
Intrahospital Correlation0.070.03

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1,Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4% while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital that was one standard deviation above the national average was 1.40 times higher than the odds of all‐cause readmission if treated at a hospital that was one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse and 333 (7.0%) had less than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. Highest mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, East and the West South Central, East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSMR, risk‐standardized mortality rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative differences in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance was 25% for acute myocardial infarction, and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more so than did readmission rates.

Our study has a number of limitations. First, the analysis was restricted to Medicare patients only, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data, not clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission. Additionally our mortality and readmission models were validated against those based on medical record data and the outputs of the 2 approaches were highly correlated.15, 24, 25 Our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure and who had a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that this is likely to have limited our study to patients with less severe disease, and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. While we excluded patients with 1 day length of stay from the mortality analysis to reduce the risk of including patients in the measure who did not actually have pneumonia, we did not exclude them from the readmission analysis because very short length of stay may be a risk factor for readmission. An additional limitation of our study is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions. However, if the experience in acute myocardial infarction can serve as a guide, then it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically distributed than mortality rates, however it is possible that this may be related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the very quality they intend to measure.2830

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematicia Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

Pneumonia results in some 1.2 million hospital admissions each year in the United States, is the second leading cause of hospitalization among patients over 65, and accounts for more than $10 billion annually in hospital expenditures.1, 2 As a result of complex demographic and clinical forces, including an aging population, increasing prevalence of comorbidities, and changes in antimicrobial resistance patterns, between the periods 1988 to 1990 and 2000 to 2002 the number of patients hospitalized for pneumonia grew by 20%, and pneumonia was the leading infectious cause of death.3, 4

Given its public health significance, pneumonia has been the subject of intensive quality measurement and improvement efforts for well over a decade. Two of the largest initiatives are the Centers for Medicare & Medicaid Services (CMS) National Pneumonia Project and The Joint Commission ORYX program.5, 6 These efforts have largely entailed measuring hospital performance on pneumonia‐specific processes of care, such as whether blood oxygen levels were assessed, whether blood cultures were drawn before antibiotic treatment was initiated, the choice and timing of antibiotics, and smoking cessation counseling and vaccination at the time of discharge. While measuring processes of care (especially when they are based on sound evidence), can provide insights about quality, and can help guide hospital improvement efforts, these measures necessarily focus on a narrow spectrum of the overall care provided. Outcomes can complement process measures by directing attention to the results of care, which are influenced by both measured and unmeasured factors, and which may be more relevant from the patient's perspective.79

In 2008 CMS expanded its public reporting initiatives by adding risk‐standardized hospital mortality rates for pneumonia to the Hospital Compare website (http://www.hospitalcompare.hhs.gov/).10 Readmission rates were added in 2009. We sought to examine patterns of hospital and regional performance for patients with pneumonia as reflected in 30‐day risk‐standardized readmission and mortality rates. Our report complements the June 2010 annual release of data on the Hospital Compare website. CMS also reports 30‐day risk‐standardized mortality and readmission for acute myocardial infarction and heart failure; a description of the 2010 reporting results for those measures are described elsewhere.

Methods

Design, Setting, Subjects

We conducted a cross‐sectional study at the hospital level of the outcomes of care of fee‐for‐service patients hospitalized for pneumonia between July 2006 and June 2009. Patients are eligible to be included in the measures if they are 65 years or older, have a principal diagnosis of pneumonia (International Classification of Diseases, Ninth Revision, Clinical Modification codes 480.X, 481, 482.XX, 483.X, 485, 486, and 487.0), and are cared for at a nonfederal acute care hospital in the US and its organized territories, including Puerto Rico, Guam, the US Virgin Islands, and the Northern Mariana Islands.

The mortality measure excludes patients enrolled in the Medicare hospice program in the year prior to, or on the day of admission, those in whom pneumonia is listed as a secondary diagnosis (to eliminate cases resulting from complications of hospitalization), those discharged against medical advice, and patients who are discharged alive but whose length of stay in the hospital is less than 1 day (because of concerns about the accuracy of the pneumonia diagnosis). Patients are also excluded if their administrative records for the period of analysis (1 year prior to hospitalization and 30 days following discharge) were not available or were incomplete, because these are needed to assess comorbid illness and outcomes. The readmission measure is similar, but does not exclude patients on the basis of hospice program enrollment (because these patients have been admitted and readmissions for hospice patients are likely unplanned events that can be measured and reduced), nor on the basis of hospital length of stay (because patients discharged within 24 hours may be at a heightened risk of readmission).11, 12

Information about patient comorbidities is derived from diagnoses recorded in the year prior to the index hospitalization as found in Medicare inpatient, outpatient, and carrier (physician) standard analytic files. Comorbidities are identified using the Condition Categories of the Hierarchical Condition Category grouper, which sorts the more than 15,000 possible diagnostic codes into 189 clinically‐coherent conditions and which was originally developed to support risk‐adjusted payments within Medicare managed care.13

Outcomes

The patient outcomes assessed include death from any cause within 30 days of admission and readmission for any cause within 30 days of discharge. All‐cause, rather than disease‐specific, readmission was chosen because hospital readmission as a consequence of suboptimal inpatient care or discharge coordination may manifest in many different diagnoses, and no validated method is available to distinguish related from unrelated readmissions. The measures use the Medicare Enrollment Database to determine mortality status, and acute care hospital inpatient claims are used to identify readmission events. For patients with multiple hospitalizations during the study period, the mortality measure randomly selects one hospitalization to use for determination of mortality. Admissions that are counted as readmissions (i.e., those that occurred within 30 days of discharge following hospitalization for pneumonia) are not also treated as index hospitalizations. In the case of patients who are transferred to or from another acute care facility, responsibility for deaths is assigned to the hospital that initially admitted the patient, while responsibility for readmissions is assigned to the hospital that ultimately discharges the patient to a nonacute setting (e.g., home, skilled nursing facilities).

Risk‐Standardization Methods

Hierarchical logistic regression is used to model the log‐odds of mortality or readmission within 30 days of admission or discharge from an index pneumonia admission as a function of patient demographic and clinical characteristics and a random hospital‐specific intercept. This strategy accounts for within‐hospital correlation of the observed outcomes, and reflects the assumption that underlying differences in quality among the hospitals being evaluated lead to systematic differences in outcomes. In contrast to nonhierarchical models which ignore hospital effects, this method attempts to measure the influence of the hospital on patient outcome after adjusting for patient characteristics. Comorbidities from the index admission that could represent potential complications of care are not included in the model unless they are also documented in the 12 months prior to admission. Hospital‐specific mortality and readmission rates are calculated as the ratio of predicted‐to‐expected events (similar to the observed/expected ratio), multiplied by the national unadjusted rate, a form of indirect standardization.

The model for mortality has a c‐statistic of 0.72 whereas a model based on medical record review that was developed for validation purposes had a c‐statistic of 0.77. The model for readmission has a c‐statistic of 0.63 whereas a model based on medical review had a c‐statistic of 0.59. The mortality and readmission models produce similar state‐level mortality and readmission rate estimates as the models derived from medical record review, and can therefore serve as reasonable surrogates. These methods, including their development and validation, have been described fully elsewhere,14, 15 and have been evaluated and subsequently endorsed by the National Quality Forum.16

Identification of Geographic Regions

To characterize patterns of performance geographically we identified the 306 hospital referral regions for each hospital in our analysis using definitions provided by the Dartmouth Atlas of Health Care project. Unlike a hospital‐level analysis, the hospital referral regions represent regional markets for tertiary care and are widely used to summarize variation in medical care inputs, utilization patterns, and health outcomes and provide a more detailed look at variation in outcomes than results at the state level.17

Analyses

Summary statistics were constructed using frequencies and proportions for categorical data, and means, medians and interquartile ranges for continuous variables. To characterize 30‐day risk‐standardized mortality and readmission rates at the hospital‐referral region level, we calculated means and percentiles by weighting each hospital's value by the inverse of the variance of the hospital's estimated rate. Hospitals with larger sample sizes, and therefore more precise estimates, lend more weight to the average. Hierarchical models were estimated using the SAS GLIMMIX procedure. Bayesian shrinkage was used to estimate rates in order to take into account the greater uncertainty in the true rates of hospitals with small caseloads. Using this technique, estimated rates at low volume institutions are shrunken toward the population mean, while hospitals with large caseloads have a relatively smaller amount of shrinkage and the estimate is closer to the hospital's observed rate.18

To determine whether a hospital's performance is significantly different than the national rate we measured whether the 95% interval estimate for the risk‐standardized rate overlapped with the national crude mortality or readmission rate. This information is used to categorize hospitals on Hospital Compare as better than the US national rate, worse than the US national rate, or no different than the US national rate. Hospitals with fewer than 25 cases in the 3‐year period, are excluded from this categorization on Hospital Compare.

Analyses were conducted with the use of SAS 9.1.3 (SAS Institute Inc, Cary, NC). We created the hospital referral region maps using ArcGIS version 9.3 (ESRI, Redlands, CA). The Human Investigation Committee at the Yale School of Medicine approved an exemption for the authors to use CMS claims and enrollment data for research analyses and publication.

Results

Hospital‐Specific Risk‐Standardized 30‐Day Mortality and Readmission Rates

Of the 1,118,583 patients included in the mortality analysis 129,444 (11.6%) died within 30 days of hospital admission. The median (Q1, Q3) hospital 30‐day risk‐standardized mortality rate was 11.1% (10.0%, 12.3%), and ranged from 6.7% to 20.9% (Table 1, Figure 1). Hospitals at the 10th percentile had 30‐day risk‐standardized mortality rates of 9.0% while for those at the 90th percentile of performance the rate was 13.5%. The odds of all‐cause mortality for a patient treated at a hospital that was one standard deviation above the national average was 1.68 times higher than that of a patient treated at a hospital that was one standard deviation below the national average.

Figure 1
Distribution of hospital risk‐standardized 30‐day pneumonia mortality rates.
Risk‐Standardized Hospital 30‐Day Pneumonia Mortality and Readmission Rates
 MortalityReadmission
  • Abbreviation: SD, standard deviation.

Patients (n)11185831161817
Hospitals (n)47884813
Patient age, years, median (Q1, Q3)81 (74,86)80 (74,86)
Nonwhite, %11.111.1
Hospital case volume, median (Q1, Q3)168 (77,323)174 (79,334)
Risk‐standardized hospital rate, mean (SD)11.2 (1.2)18.3 (0.9)
Minimum6.713.6
1st percentile7.514.9
5th percentile8.515.8
10th percentile9.016.4
25th percentile10.017.2
Median11.118.2
75th percentile12.319.2
90th percentile13.520.4
95th percentile14.421.1
99th percentile16.122.8
Maximum20.926.7
Model fit statistics  
c‐Statistic0.720.63
Intrahospital Correlation0.070.03

For the 3‐year period 2006 to 2009, 222 (4.7%) hospitals were categorized as having a mortality rate that was better than the national average, 3968 (83.7%) were no different than the national average, 221 (4.6%) were worse and 332 (7.0%) did not meet the minimum case threshold.

Among the 1,161,817 patients included in the readmission analysis 212,638 (18.3%) were readmitted within 30 days of hospital discharge. The median (Q1,Q3) 30‐day risk‐standardized readmission rate was 18.2% (17.2%, 19.2%) and ranged from 13.6% to 26.7% (Table 1, Figure 2). Hospitals at the 10th percentile had 30‐day risk‐standardized readmission rates of 16.4% while for those at the 90th percentile of performance the rate was 20.4%. The odds of all‐cause readmission for a patient treated at a hospital that was one standard deviation above the national average was 1.40 times higher than the odds of all‐cause readmission if treated at a hospital that was one standard deviation below the national average.

Figure 2
Distribution of hospital risk‐standardized 30‐day pneumonia readmission rates.

For the 3‐year period 2006 to 2009, 64 (1.3%) hospitals were categorized as having a readmission rate that was better than the national average, 4203 (88.2%) were no different than the national average, 163 (3.4%) were worse and 333 (7.0%) had less than 25 cases and were therefore not categorized.

While risk‐standardized readmission rates were substantially higher than risk‐standardized mortality rates, mortality rates varied more. For example, the top 10% of hospitals had a relative mortality rate that was 33% lower than those in the bottom 10%, as compared with just a 20% relative difference for readmission rates. The coefficient of variation, a normalized measure of dispersion that unlike the standard deviation is independent of the population mean, was 10.7 for risk‐standardized mortality rates and 4.9 for readmission rates.

Regional Risk‐Standardized 30‐Day Mortality and Readmission Rates

Figures 3 and 4 show the distribution of 30‐day risk‐standardized mortality and readmission rates among hospital referral regions by quintile. Highest mortality regions were found across the entire country, including parts of Northern New England, the Mid and South Atlantic, East and the West South Central, East and West North Central, and the Mountain and Pacific regions of the West. The lowest mortality rates were observed in Southern New England, parts of the Mid and South Atlantic, East and West South Central, and parts of the Mountain and Pacific regions of the West (Figure 3).

Figure 3
Risk‐standardized regional 30‐day pneumonia mortality rates. RSMR, risk‐standardized mortality rate.
Figure 4
Risk‐standardized regional 30‐day pneumonia readmission rates. RSMR, risk‐standardized mortality rate.

Readmission rates were higher in the eastern portions of the US (including the Northeast, Mid and South Atlantic, East South Central) as well as the East North Central, and small parts of the West North Central portions of the Midwest and in Central California. The lowest readmission rates were observed in the West (Mountain and Pacific regions), parts of the Midwest (East and West North Central) and small pockets within the South and Northeast (Figure 4).

Discussion

In this 3‐year analysis of patient, hospital, and regional outcomes we observed that pneumonia in the elderly remains a highly morbid illness, with a 30‐day mortality rate of approximately 11.6%. More notably we observed that risk‐standardized mortality rates, and to a lesser extent readmission rates, vary significantly across hospitals and regions. Finally, we observed that readmission rates, but not mortality rates, show strong geographic concentration.

These findings suggest possible opportunities to save or extend the lives of a substantial number of Americans, and to reduce the burden of rehospitalization on patients and families, if low performing institutions were able to achieve the performance of those with better outcomes. Additionally, because readmission is so common (nearly 1 in 5 patients), efforts to reduce overall health care spending should focus on this large potential source of savings.19 In this regard, impending changes in payment to hospitals around readmissions will change incentives for hospitals and physicians that may ultimately lead to lower readmission rates.20

Previous analyses of the quality of hospital care for patients with pneumonia have focused on the percentage of eligible patients who received guideline‐recommended antibiotics within a specified time frame (4 or 8 hours), and vaccination prior to hospital discharge.21, 22 These studies have highlighted large differences across hospitals and states in the percentage receiving recommended care. In contrast, the focus of this study was to compare risk‐standardized outcomes of care at the nation's hospitals and across its regions. This effort was guided by the notion that the measurement of care outcomes is an important complement to process measurement because outcomes represent a more holistic assessment of care, that an outcomes focus offers hospitals greater autonomy in terms of what processes to improve, and that outcomes are ultimately more meaningful to patients than the technical aspects of how the outcomes were achieved. In contrast to these earlier process‐oriented efforts, the magnitude of the differences we observed in mortality and readmission rates across hospitals was not nearly as large.

A recent analysis of the outcomes of care for patients with heart failure and acute myocardial infarction also found significant variation in both hospital and regional mortality and readmission rates.23 The relative differences in risk‐standardized hospital mortality rates across the 10th to 90th percentiles of hospital performance were 25% for acute myocardial infarction and 39% for heart failure. By contrast, we found that the difference in risk‐standardized hospital mortality rates across the 10th to 90th percentiles in pneumonia was an even greater 50% (13.5% vs. 9.0%). Similar to the findings in acute myocardial infarction and heart failure, we observed that risk‐standardized mortality rates varied more than readmission rates did.
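To make the percentile comparison concrete, the minimal sketch below shows how a 10th‐to‐90th‐percentile relative difference of this kind can be computed from a set of hospital‐level risk‐standardized mortality rates. The variable names and sample values are illustrative assumptions, not the study data; only the final worked figure (13.5% vs. 9.0%, a 50% relative difference) comes from the text above.

import numpy as np

# Hypothetical hospital-level risk-standardized mortality rates (proportions).
# These values are illustrative only and are not the study data.
rsmr = np.array([0.085, 0.090, 0.102, 0.110, 0.116, 0.121, 0.128, 0.135, 0.142])

# 10th and 90th percentiles of hospital performance.
p10, p90 = np.percentile(rsmr, [10, 90])

# Relative difference across the 10th to 90th percentiles,
# expressed relative to the better (lower-mortality) end of the distribution.
relative_difference = (p90 - p10) / p10

print(f"10th percentile: {p10:.3f}, 90th percentile: {p90:.3f}")
print(f"Relative difference: {relative_difference:.0%}")
# For the rates reported in the text: (0.135 - 0.090) / 0.090 = 0.50, i.e., 50%.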

Our study has a number of limitations. First, the analysis was restricted to Medicare patients, and our findings may not be generalizable to younger patients. Second, our risk‐adjustment methods relied on claims data rather than clinical information abstracted from charts. Nevertheless, we assessed comorbidities using all physician and hospital claims from the year prior to the index admission, and our mortality and readmission models were validated against models based on medical record data, with highly correlated outputs from the 2 approaches.15, 24, 25 Third, our study was restricted to patients with a principal diagnosis of pneumonia, and we therefore did not include those whose principal diagnosis was sepsis or respiratory failure with a secondary diagnosis of pneumonia. While this decision was made to reduce the risk of misclassifying complications of care as the reason for admission, we acknowledge that it is likely to have limited our study to patients with less severe disease and may have introduced bias related to differences in hospital coding practices regarding the use of sepsis and respiratory failure codes. Fourth, while we excluded patients with a 1‐day length of stay from the mortality analysis to reduce the risk of including patients who did not actually have pneumonia, we did not exclude them from the readmission analysis because a very short length of stay may be a risk factor for readmission. An additional limitation is that our findings are primarily descriptive, and we did not attempt to explain the sources of the variation we observed. For example, we did not examine the extent to which these differences might be explained by differences in adherence to process measures across hospitals or regions. However, if the experience in acute myocardial infarction can serve as a guide, it is unlikely that more than a small fraction of the observed variation in outcomes can be attributed to factors such as antibiotic timing or selection.26 Additionally, we cannot explain why readmission rates were more geographically concentrated than mortality rates; however, it is possible that this is related to the supply of physicians or hospital beds.27 Finally, some have argued that mortality and readmission rates do not necessarily reflect the quality of care they are intended to measure.28-30

The outcomes of patients with pneumonia appear to be significantly influenced by both the hospital and region where they receive care. Efforts to improve population level outcomes might be informed by studying the practices of hospitals and regions that consistently achieve high levels of performance.31

Acknowledgements

The authors thank Sandi Nelson, Eric Schone, and Marian Wrobel at Mathematica Policy Research and Changquin Wang and Jinghong Gao at YNHHS/Yale CORE for analytic support. They also acknowledge Shantal Savage, Kanchana Bhat, and Mayur M. Desai at Yale, Joseph S. Ross at the Mount Sinai School of Medicine, and Shaheen Halim at the Centers for Medicare and Medicaid Services.

References
1. Levit K, Wier L, Ryan K, Elixhauser A, Stranges E. HCUP Facts and Figures: Statistics on Hospital‐based Care in the United States, 2007 [Internet]. 2009 [cited 2009 Nov 7]. Available at: http://www.hcup‐us.ahrq.gov/reports.jsp. Accessed June 2010.
2. Agency for Healthcare Research and Quality. HCUP Nationwide Inpatient Sample (NIS). Healthcare Cost and Utilization Project (HCUP) [Internet]. 2007 [cited 2010 May 13]. Available at: http://www.hcup‐us.ahrq.gov/nisoverview.jsp. Accessed June 2010.
3. Fry AM, Shay DK, Holman RC, Curns AT, Anderson LJ. Trends in hospitalizations for pneumonia among persons aged 65 years or older in the United States, 1988‐2002. JAMA. 2005;294(21):2712-2719.
4. Heron M. Deaths: Leading Causes for 2006. NVSS [Internet]. 2010 Mar 31;58(14). Available at: http://www.cdc.gov/nchs/data/nvsr/nvsr58/nvsr58_14.pdf. Accessed June 2010.
5. Centers for Medicare and Medicaid Services. Pneumonia [Internet]. [cited 2010 May 13]. Available at: http://www.qualitynet.org/dcs/ContentServer?cid=108981596702326(1):7585.
6. Bratzler DW, Nsa W, Houck PM. Performance measures for pneumonia: are they valuable, and are process measures adequate? Curr Opin Infect Dis. 2007;20(2):182-189.
7. Werner RM, Bradlow ET. Relationship between Medicare's Hospital Compare performance measures and mortality rates. JAMA. 2006;296(22):2694-2702.
8. Medicare.gov - Hospital Compare [Internet]. [cited 2009 Nov 6]. Available at: http://www.hospitalcompare.hhs.gov/Hospital/Search/Welcome.asp?version=default 2010. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2010. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2000 [cited 2009 Nov 7]. Available at: http://www.cms.hhs.gov/Reports/Reports/ItemDetail.asp?ItemID=CMS023176. Accessed June 2010.
9. Krumholz H, Normand S, Bratzler D, et al. Risk‐Adjustment Methodology for Hospital Monitoring/Surveillance and Public Reporting Supplement #1: 30‐Day Mortality Model for Pneumonia [Internet]. Yale University; 2006. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 2008. Available at: http://www.qualitynet.org/dcs/ContentServer?c=Page 1999.
10. Normand ST, Shahian DM. Statistical and clinical aspects of hospital outcomes profiling. Stat Sci. 2007;22(2):206-226.
11. Medicare Payment Advisory Commission. Report to the Congress: Promoting Greater Efficiency in Medicare. June 2007.
12. Patient Protection and Affordable Care Act [Internet]. 2010. Available at: http://thomas.loc.gov. Accessed June 2010.
13. Jencks SF, Cuerdon T, Burwen DR, et al. Quality of medical care delivered to Medicare beneficiaries: a profile at state and national levels. JAMA. 2000;284(13):1670-1676.
14. Jha AK, Li Z, Orav EJ, Epstein AM. Care in U.S. hospitals — the Hospital Quality Alliance program. N Engl J Med. 2005;353(3):265-274.
15. Krumholz HM, Merrill AR, Schone EM, et al. Patterns of hospital performance in acute myocardial infarction and heart failure 30‐day mortality and readmission. Circ Cardiovasc Qual Outcomes. 2009;2(5):407-413.
16. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with heart failure. Circulation. 2006;113(13):1693-1701.
17. Krumholz HM, Wang Y, Mattera JA, et al. An administrative claims model suitable for profiling hospital performance based on 30‐day mortality rates among patients with an acute myocardial infarction. Circulation. 2006;113(13):1683-1692.
18. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: correlation among process measures and relationship with short‐term mortality. JAMA. 2006;296(1):72-78.
19. Fisher ES, Wennberg JE, Stukel TA, Sharp SM. Hospital readmission rates for cohorts of Medicare beneficiaries in Boston and New Haven. N Engl J Med. 1994;331(15):989-995.
20. Thomas JW, Hofer TP. Research evidence on the validity of risk‐adjusted mortality rate as a measure of hospital quality of care. Med Care Res Rev. 1998;55(4):371-404.
21. Benbassat J, Taragin M. Hospital readmissions as a measure of quality of health care: advantages and limitations. Arch Intern Med. 2000;160(8):1074-1081.
22. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153-157.
23. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25.
Issue
Journal of Hospital Medicine - 5(6)
Page Number
E12-E18
Display Headline
The performance of US hospitals as reflected in risk‐standardized 30‐day mortality and readmission rates for Medicare beneficiaries with pneumonia
Legacy Keywords
community‐acquired and nosocomial pneumonia, quality improvement, outcomes measurement, patient safety, geriatric patient
Copyright © 2010 Society of Hospital Medicine
Correspondence Location
Center for Quality of Care Research, Baystate Medical Center, 280 Chestnut St., Springfield, MA 01199