Patient-Centered, Payer-Centered, or Both? The 30-Day Readmission Metric
There is little doubt that preventing 30-day readmissions to the hospital results in lower costs for payers. However, reducing costs alone does not make this metric a measure of “high value” care.1 Rather, it is the improvement in the effectiveness of the discharge process that occurs alongside lower costs that makes readmission reduction efforts “high value” – or a “win-win” for patients and payers.
However, the article by Nuckols and colleagues in this month’s issue of the Journal of Hospital Medicine (JHM) suggests that it might not be that simple and adds nuance to the ongoing discussion about the 30-day readmission metric.2 The study used data collected by the federal government to examine changes not only in 30-day readmission rates between 2009-2010 and 2013-2014 but also changes in emergency department (ED) and observation unit visits. What they found is important. In general, despite reductions in 30-day readmissions for patients served by Medicare and private insurance, there were increases in observation unit and ED visits across all payer types (including Medicare and private insurance). These increases in observation unit and ED visits resulted in statistically higher overall “revisit” rates for the uninsured and those insured by Medicaid and offset any improvements in the “revisit” rates resulting from reductions in 30-day readmissions for those with private insurance. Those insured by Medicare—representing about 300,000 of the 420,000 visits analyzed—still had a statistically lower “revisit” rate, but it was only marginally lower (25.0% in 2013-2014 versus 25.3% in 2009-2010).2
The generalizability of the study by Nuckols and colleagues was limited in that it examined only index admissions for acute myocardial infarction (AMI), heart failure (HF), and pneumonia and used data from only Georgia, Nebraska, South Carolina, and Tennessee—the four states where observation and ED visit data were available in the federal database.2 The study also did not examine hospital-level revisit data; hence, it was not able to determine if hospitals with greater reductions in readmission rates had greater increases in observation or ED visits, as one might predict. Despite these limitations, the rigor of the study was noteworthy. The authors used matching techniques to ensure that the populations examined in the two time periods were comparable. Unlike previous research,3,4 they also used a comprehensive definition of a hospital “revisit” (including both observation and ED visits) and measured “revisit” rates across several payer types, rather than focusing exclusively on those covered by fee-for-service Medicare, as in past studies.4,5
What the study by Nuckols and colleagues suggests is that even though patients may be readmitted less, they may be coming back to the ED or getting admitted to the observation unit more, resulting in overall “revisit” rates that are marginally lower for Medicare patients, but often the same or even higher for other payer groups, particularly disadvantaged payer groups who are uninsured or insured by Medicaid.2 Although the authors do not assert causality for these trends, it is worth noting that the much-discussed Hospital Readmissions Reduction Program (or “readmission penalty”) applies only to Medicare patients older than 65 years. It is likely that this program influenced the differences identified between payer groups in this article.
Beyond the policy implications of these findings, the experience of patients cared for in these different settings is of paramount importance. Unfortunately, there are limited data comparing patient perceptions, preferences, or outcomes resulting from readmission to an inpatient service versus an observation unit or ED visit within 30 days of discharge. However, there is reason to believe that costs could be higher for some patients treated in the ED or an observation unit as compared to those in the inpatient setting,6 and that care continuity and quality may be different across these settings. In a recent white paper on observation care published by the Society of Hospital Medicine (SHM) Public Policy Committee,7 the SHM reported the results of a 2017 survey of its members about observation care. The results were concerning. An overwhelming majority of respondents (87%) believed that the rules for observation are unclear for patients, and 68% of respondents believed that policy changes mandating informing patients of their observation status have created conflict between the provider and the patient.7 As shared by one respondent, “the observation issue can severely damage the therapeutic bond with patient/family, who may conclude that the hospitalist has more interest in saving someone money at the expense of patient care.”7 Thus, there is significant concern about the nature of observation stays and the experience for patients and providers. We should take care to better understand these experiences given that readmission reduction efforts may funnel more patients into observation care.
As a next step, we recommend further examination of how “revisit” rates have changed over time for patients with any discharge diagnosis, and not just those with pneumonia, AMI, or HF.8 Such examinations should be stratified by payer to identify differential impacts on those with lower socioeconomic status. Analyses should also examine changes in “revisit” types at the hospital level to better understand if hospitals with reductions in readmission rates are simply shifting revisits to the observation unit or ED. It is possible that inpatient readmissions for any given hospital are decreasing without concomitant increases in observation visits, as there are forces independent of the readmission penalty, such as the Recovery Audit Contractor program, that are driving hospitals to more frequently code patients as observation visits rather than inpatient admissions.9 Thus, readmissions could decrease and observation unit visits could increase independent of one another. We also recommend further research to examine differences in care quality, clinical outcomes, and costs for those readmitted to the hospital within 30 days of discharge versus those cared for in observation units or the ED. The challenge of such studies will be to identify and examine comparable populations of patients across these three settings. Examining patient perceptions and preferences across these settings is also critical. Finally, when assessing interventions to reduce inpatient readmissions, we need to consider “revisits” as a whole, not simply readmissions.10 Otherwise, we may simply be promoting the use of interventions that shift inpatient readmissions to observation unit or ED revisits, and there is little that is patient-centered or high value about that.9
Disclosures
The authors have nothing to disclose.
1. Smith M, Saunders R, Stuckhardt L, McGinnis JM, eds. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: National Academies Press; 2013.
2. Nuckols TK, Fingar KR, Barrett ML, et al. Returns to emergency department, observation, or inpatient care within 30 days after hospitalization in 4 states, 2009 and 2010 versus 2013 and 2014. J Hosp Med. 2018;13(5):296-303.
3. Fingar KR, Washington R. Trends in Hospital Readmissions for Four High-Volume Conditions, 2009–2013. Statistical Brief No. 196. https://www.hcup-us.ahrq.gov/reports/statbriefs/sb196-Readmissions-Trends-High-Volume-Conditions.pdf. Accessed March 5, 2018.
4. Zuckerman RB, Sheingold SH, Orav EJ, Ruhter J, Epstein AM. Readmissions, observation, and the Hospital Readmissions Reduction Program. N Engl J Med. 2016;374(16):1543-1551. DOI: 10.1056/NEJMsa1513024.
5. Gerhardt G, Yemane A, Apostle K, Oelschlaeger A, Rollins E, Brennan N. Evaluating whether changes in utilization of hospital outpatient services contributed to lower Medicare readmission rate. Medicare Medicaid Res Rev. 2014;4(1). DOI: 10.5600/mmrr2014-004-01-b03.
6. Kangovi S, Cafardi SG, Smith RA, Kulkarni R, Grande D. Patient financial responsibility for observation care. J Hosp Med. 2015;10(11):718-723. DOI: 10.1002/jhm.2436.
7. The Hospital Observation Care Problem: Perspectives and Solutions from the Society of Hospital Medicine. Society of Hospital Medicine Public Policy Committee. https://www.hospitalmedicine.org/globalassets/policy-and-advocacy/advocacy-pdf/shms-observation-white-paper-2017. Accessed February 12, 2018.
8. Rosen AK, Chen Q, Shwartz M, et al. Does use of a hospital-wide readmission measure versus condition-specific readmission measures make a difference for hospital profiling and payment penalties? Med Care. 2016;54(2):155-161. DOI: 10.1097/MLR.0000000000000455.
9. Baugh CW, Schuur JD. Observation care - high-value care or a cost-shifting loophole? N Engl J Med. 2013;369(4):302-305. DOI: 10.1056/NEJMp1304493.
10. Cassel CK, Conway PH, Delbanco SF, Jha AK, Saunders RS, Lee TH. Getting more performance from performance measurement. N Engl J Med. 2014;371(23):2145-2147. DOI: 10.1056/NEJMp1408345.
© 2018 Society of Hospital Medicine
Hospital Evidence‐Based Practice Centers
Hospital evidence‐based practice centers (EPCs) are structures with the potential to facilitate the integration of evidence into institutional decision making to close knowing‐doing gaps[1, 2, 3, 4, 5, 6]; in the process, they can support the evolution of their parent institutions into learning healthcare systems.[7] The potential of hospital EPCs stems from their ability to identify and adapt national evidence‐based guidelines and systematic reviews for the local setting,[8] create local evidence‐based guidelines in the absence of national guidelines, use local data to help define problems and assess the impact of solutions,[9] and implement evidence into practice through computerized clinical decision support (CDS) interventions and other quality‐improvement (QI) initiatives.[9, 10] As such, hospital EPCs have the potential to strengthen relationships and understanding between clinicians and administrators[11]; foster a culture of evidence‐based practice; and improve the quality, safety, and value of care provided.[10]
Formal hospital EPCs remain uncommon in the United States,[10, 11, 12] though their numbers have expanded worldwide.[13, 14] This growth is due not to any reduced role for national EPCs, such as the National Institute for Health and Clinical Excellence[15] in the United Kingdom, or the 13 EPCs funded by the Agency for Healthcare Research and Quality (AHRQ)[16, 17] in the United States. Rather, this growth is fueled by the heightened awareness that the value of healthcare interventions often needs to be assessed locally, and that clinical guidelines that consider local context have a greater potential to improve quality and efficiency.[9, 18, 19]
Despite the increasing number of hospital EPCs globally, their impact on administrative and clinical decision making has rarely been examined,[13, 20] especially for hospital EPCs in the United States. The few studies that have assessed the impact of hospital EPCs on institutional decision making have done so in the context of technology acquisition, neglecting the role hospital EPCs may play in the integration of evidence into clinical practice. For example, the Technology Assessment Unit at McGill University Health Center found that of the 27 reviews commissioned in their first 5 years, 25 were implemented, with 6 (24%) recommending investments in new technologies and 19 (76%) recommending rejection, for a reported net hospital savings of $10 million.[21] Understanding the activities and impact of hospital EPCs is particularly critical for hospitalist leaders, who could leverage hospital EPCs to inform efforts to support the quality, safety, and value of care provided, or who may choose to establish or lead such infrastructure. The availability of such opportunities could also support hospitalist recruitment and retention.
In 2006, the University of Pennsylvania Health System (UPHS) created the Center for Evidence‐based Practice (CEP) to support the integration of evidence into practice to strengthen quality, safety, and value.[10] Cofounded by hospitalists with formal training in clinical epidemiology, the CEP performs rapid systematic reviews of the scientific literature to inform local practice and policy. In this article, we describe the first 8 years of the CEP's evidence synthesis activities and examine its impact on decision making across the health system.
METHODS
Setting
The UPHS includes 3 acute care hospitals, and inpatient facilities specializing in acute rehabilitation, skilled nursing, long‐term acute care, and hospice, with a capacity of more than 1800 beds and 75,000 annual admissions, as well as primary care and specialty clinics with more than 2 million annual outpatient visits. The CEP is funded by and organized within the Office of the UPHS Chief Medical Officer, serves all UPHS facilities, has an annual budget of approximately $1 million, and is currently staffed by a hospitalist director, 3 research analysts, 6 physician and nurse liaisons, a health economist, a biostatistician, an administrator, and librarians, totaling 5.5 full‐time equivalents.
The mission of the CEP is to support the quality, safety, and value of care at Penn through evidence‐based practice. To accomplish this mission, the CEP performs rapid systematic reviews, translates evidence into practice through the use of CDS interventions and clinical pathways, and offers education in evidence‐based decision making to trainees, staff, and faculty. This study is focused on the CEP's evidence synthesis activities.
Typically, clinical and administrative leaders submit a request to the CEP for an evidence review, the request is discussed and approved at the weekly staff meeting, and a research analyst and clinical liaison are assigned to the request and communicate with the requestor to clearly define the question of interest. Subsequently, the research analyst completes a protocol, a draft search, and a draft report, each reviewed and approved by the clinical liaison and requestor. The final report is posted to the website, disseminated to all key stakeholders across the UPHS as identified by the clinical liaisons, and integrated into decision making through various routes, including in‐person presentations to decision makers, and CDS and QI initiatives.
Study Design
The study included an analysis of an internal database of evidence reviews and a survey of report requestors, and was exempted from institutional review board review. Survey respondents were informed that their responses would be confidential and did not receive incentives.
Internal Database of Reports
Data from the CEP's internal management database were analyzed for its first 8 fiscal years (July 2006–June 2014). Variables included requestor characteristics, report characteristics (eg, technology reviewed, clinical specialty examined, completion time, and performance of meta‐analyses and GRADE [Grading of Recommendations Assessment, Development and Evaluation] analyses[22]), report use (eg, integration of report into CDS interventions) and dissemination beyond the UPHS (eg, submission to Center for Reviews and Dissemination [CRD] Health Technology Assessment [HTA] database[23] and to peer‐reviewed journals). Report completion time was defined as the time between the date work began on the report and the date the final report was sent to the requestor. The technology categorization scheme was adapted from that provided by Goodman (2004)[24] and the UK National Institute for Health Research HTA Programme.[25] We systematically assigned the technology reviewed in each report to 1 of 8 mutually exclusive categories. The clinical specialty examined in each report was determined using an algorithm (see Supporting Information, Appendix 1, in the online version of this article).
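The completion-time definition above is a simple date difference; a minimal sketch in Python (the dates below are hypothetical, not taken from the CEP database):

```python
from datetime import date

# Completion time as defined above: days from the date work began on the
# report to the date the final report was sent to the requestor.
def completion_days(work_began: date, final_sent: date) -> int:
    return (final_sent - work_began).days

# Hypothetical example: work began May 1, final report sent July 10.
print(completion_days(date(2013, 5, 1), date(2013, 7, 10)))  # → 70
```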
We compared the report completion times and the proportions of requestor types, technologies reviewed, and clinical specialties examined in the CEP's first 4 fiscal years (July 2006–June 2010) to those in the CEP's second 4 fiscal years (July 2010–June 2014) using t tests and χ2 tests for continuous and categorical variables, respectively.
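The categorical comparisons can be sketched in a few lines of pure Python. The sketch below assumes a Pearson χ2 test with 1 degree of freedom and no continuity correction (the study does not state whether a correction was applied), and uses the clinical-department requestor counts from Table 2 for illustration:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, two-sided p value)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    p = math.erfc(math.sqrt(stat / 2))  # survival function of chi2 with 1 df
    return stat, p

# Clinical-department requests vs. all other requestors, by period (Table 2):
# 22 of 109 reports in the first 4 years, 50 of 140 in the second 4 years.
stat, p = chi2_2x2(22, 109 - 22, 50, 140 - 50)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # p rounds to 0.007, as in Table 2
```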
Survey
We conducted a Web‐based survey (see Supporting Information, Appendix 2, in the online version of this article) of all requestors of the 139 rapid reviews completed in the last 4 fiscal years. Participants who requested multiple reports were surveyed only about the most recent report. Requestors were invited to participate in the survey via e‐mail, and follow‐up e‐mails were sent to nonrespondents at 7, 14, and 16 days. Nonrespondents and respondents were compared with respect to requestor type (physician vs nonphysician) and topic evaluated (traditional HTA topics such as drugs, biologics, and devices vs nontraditional HTA topics such as processes of care). The survey was administered using REDCap[26] electronic data capture tools. The 44‐item questionnaire collected data on the interaction between the requestor and the CEP, report characteristics, report impact, and requestor satisfaction.
Survey results were imported into Microsoft Excel (Microsoft Corp, Redmond, WA) and SPSS (IBM, Armonk, NY) for analysis. Descriptive statistics were generated, and statistical comparisons were conducted using χ2 and Fisher exact tests.
RESULTS
Evidence Synthesis Activity
The CEP has produced several different report products since its inception. Evidence reviews (57%, n = 142) consist of a systematic review and analysis of the primary literature. Evidence advisories (32%, n = 79) are summaries of evidence from secondary sources such as guidelines or systematic reviews. Evidence inventories (3%, n = 7) are literature searches that describe the quantity and focus of available evidence, without analysis or synthesis.[27]
The categories of technologies examined, including their definitions and examples, are provided in Table 1. Drugs (24%, n = 60) and devices/equipment/supplies (19%, n = 48) were most commonly examined. The proportion of reports examining technology types traditionally evaluated by HTA organizations significantly decreased when comparing the first 4 years of CEP activity to the second 4 years (62% vs 38%, P < 0.01), whereas reports examining less traditionally reviewed categories increased (38% vs 62%, P < 0.01). The most common clinical specialties represented by the CEP reports were nursing (11%, n = 28), general surgery (11%, n = 28), critical care (10%, n = 24), and general medicine (9%, n = 22) (see Supporting Information, Appendix 3, in the online version of this article). Clinical departments were the most common requestors (29%, n = 72) (Table 2). The proportion of requests from clinical departments significantly increased when comparing the first 4 years to the second 4 years (20% vs 36%, P < 0.01), with requests from purchasing committees significantly decreasing (25% vs 6%, P < 0.01). The mean report completion time was 70 days and decreased significantly when comparing the first 4 years to the second 4 years (89 days vs 50 days, P < 0.01).
Category | Definition | Examples | Total | 2007–2010 | 2011–2014 | P Value |
---|---|---|---|---|---|---|
Total | | | 249 (100%) | 109 (100%) | 140 (100%) | |
Drug | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a pharmacologic agent | Celecoxib for pain in joint arthroplasty; colchicine for prevention of pericarditis and atrial fibrillation | 60 (24%) | 35 (32%) | 25 (18%) | 0.009 |
Device, equipment, and supplies | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory that is intended for use in the prevention, diagnosis, or treatment of disease and does not achieve its primary intended purposes through chemical action or metabolism[50] | Thermometers for pediatric use; femoral closure devices for cardiac catheterization | 48 (19%) | 25 (23%) | 23 (16%) | 0.19 |
Process of care | A report primarily examining a clinical pathway or a clinical practice guideline that significantly involves elements of prevention, diagnosis, and/or treatment or significantly incorporates 2 or more of the other technology categories | Preventing patient falls; prevention and management of delirium | 31 (12%) | 18 (17%) | 13 (9%) | 0.09 |
Test, scale, or risk factor | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a test intended to screen for, diagnose, classify, or monitor the progression of a disease | Computed tomography for acute chest pain; urine drug screening in chronic pain patients on opioid therapy | 31 (12%) | 8 (7%) | 23 (16%) | 0.03 |
Medical/surgical procedure | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a medical intervention that is not a drug, device, or test or of the application or removal of a device | Biliary drainage for chemotherapy patients; cognitive behavioral therapy for insomnia | 26 (10%) | 8 (7%) | 18 (13%) | 0.16 |
Policy or organizational/managerial system | A report primarily examining laws or regulations; the organization, financing, or delivery of care, including settings of care; or healthcare providers | Medical care costs and productivity changes associated with smoking; physician training and credentialing for robotic surgery in obstetrics and gynecology | 26 (10%) | 4 (4%) | 22 (16%) | 0.002 |
Support system | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an intervention designed to provide a new or improved service to patients or healthcare providers that does not fall into 1 of the other categories | Reconciliation of data from differing electronic medical records; social media, text messaging, and postdischarge communication | 14 (6%) | 3 (3%) | 11 (8%) | 0.09 |
Biologic | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a product manufactured in a living system | Recombinant factor VIIa for cardiovascular surgery; osteobiologics for orthopedic fusions | 13 (5%) | 8 (7%) | 5 (4%) | 0.19 |
Category | Total | 2007–2010 | 2011–2014 | P Value |
---|---|---|---|---|
Total | 249 (100%) | 109 (100%) | 140 (100%) | |
Clinical department | 72 (29%) | 22 (20%) | 50 (36%) | 0.007 |
CMO | 47 (19%) | 21 (19%) | 26 (19%) | 0.92 |
Purchasing committee | 35 (14%) | 27 (25%) | 8 (6%) | <0.001 |
Formulary committee | 22 (9%) | 12 (11%) | 10 (7%) | 0.54 |
Quality committee | 21 (8%) | 11 (10%) | 10 (7%) | 0.42 |
Administrative department | 19 (8%) | 5 (5%) | 14 (10%) | 0.11 |
Nursing | 14 (6%) | 4 (4%) | 10 (7%) | 0.23 |
Other* | 19 (8%) | 7 (6%) | 12 (9%) | 0.55 |
Thirty‐seven (15%) reports included meta‐analyses conducted by CEP staff. Seventy‐five reports (30%) contained an evaluation of the quality of the evidence base using GRADE analyses.[22] Of these reports, the highest GRADE of evidence available for any comparison of interest was moderate (35%, n = 26) or high (33%, n = 25) in most cases, followed by very low (19%, n = 14) and low (13%, n = 10).
Reports were disseminated in a variety of ways beyond direct dissemination and presentation to requestors and posting on the center website. Thirty reports (12%) informed CDS interventions, 24 (10%) resulted in peer‐reviewed publications, and 204 (82%) were posted to the CRD HTA database.
Evidence Synthesis Impact
A total of 139 reports were completed between July 2010 and June 2014 for 65 individual requestors. E‐mail invitations to participate in the survey were sent to the 64 requestors employed by the UPHS. The response rate was 72% (46/64). The proportions of physician requestors and traditional HTA topics evaluated were similar across respondents and nonrespondents (43% [20/46] vs 39% [7/18], P = 0.74; and 37% [17/46] vs 44% [8/18], P = 0.58, respectively). Aggregated survey responses are presented for items using a Likert scale in Figure 1, and for items using a yes/no or ordinal scale in Table 3.
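The respondent-versus-nonrespondent comparison above can be reproduced from the reported counts. The following is a pure-Python sketch of the two-sided Fisher exact test named in the Methods (the paper does not state which of its two categorical tests produced each P value, so this is illustrative):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    def prob(x):  # P(first cell = x) under fixed margins
        return comb(row1, x) * comb(n - row1, col1 - x) / denom
    p_obs = prob(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Physician requestors among respondents (20/46) vs. nonrespondents (7/18).
p = fisher_exact_2x2(20, 46 - 20, 7, 18 - 7)
print(f"p = {p:.2f}")  # well above 0.05, consistent with the reported P = 0.74
```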
Items | % of Respondents Responding Affirmatively or Ranking as First Choice* |
---|---|
Requestor activity | |
What factors prompted you to request a report from CEP? (Please select all that apply.) | |
My own time constraints | 28% (13/46) |
CEP's ability to identify and synthesize evidence | 89% (41/46) |
CEP's objectivity | 52% (24/46) |
Recommendation from colleague | 30% (14/46) |
Did you conduct any of your own literature searches before contacting CEP? | 67% (31/46) |
Did you obtain and read any of the articles cited in CEP's report? | 63% (29/46) |
Did you read the following sections of CEP's report? | |
Evidence summary (at beginning of report) | 100% (45/45) |
Introduction/background | 93% (42/45) |
Methods | 84% (38/45) |
Results | 98% (43/43) |
Conclusion | 100% (43/43) |
Report dissemination | |
Did you share CEP's report with anyone NOT involved in requesting the report or in making the final decision? | 67% (30/45) |
Did you share CEP's report with anyone outside of Penn? | 7% (3/45) |
Requestor preferences | |
Would it be helpful for CEP staff to call you after you receive any future CEP reports to answer any questions you might have? | 55% (24/44) |
Following any future reports you request from CEP, would you be willing to complete a brief questionnaire? | 100% (44/44) |
Please rank how you would prefer to receive reports from CEP in the future. | |
E‐mail containing the report as a PDF attachment | 77% (34/44) |
E‐mail containing a link to the report on CEP's website | 16% (7/44) |
In‐person presentation by the CEP analyst writing the report | 18% (8/44) |
In‐person presentation by the CEP director involved in the report | 16% (7/44) |
In general, respondents found reports easy to request, easy to use, timely, and relevant, resulting in high requestor satisfaction. In addition, 98% described the scope of content and level of detail as about right. Report impact was rated highly as well, with the evidence summary and conclusions rated as the most critical to decision making. A majority of respondents indicated that reports confirmed their tentative decision (77%, n = 34), whereas some changed their tentative decision (7%, n = 3), and others suggested the report had no effect on their tentative decision (16%, n = 7). Respondents indicated the amount of time that elapsed between receiving reports and making final decisions was 1 to 7 days (5%, n = 2), 8 to 30 days (40%, n = 17), 1 to 3 months (37%, n = 16), 4 to 6 months (9%, n = 4), or greater than 6 months (9%, n = 4). The most common reasons cited for requesting a report were the CEP's evidence synthesis skills and objectivity.
DISCUSSION
To our knowledge, this is the first comprehensive description and assessment of evidence synthesis activity by a hospital EPC in the United States. Our findings suggest that clinical and administrative leaders will request reports from a hospital EPC, and that hospital EPCs can promptly produce reports when requested. Moreover, these syntheses can address a wide range of clinical and policy topics, and can be disseminated through a variety of routes. Lastly, requestors are satisfied with these syntheses and report that they inform decision making. These results suggest that EPCs may be an effective infrastructure paradigm for promoting evidence‐based decision making within healthcare provider organizations, and are consistent with previous analyses of hospital‐based EPCs.[21, 28, 29]
Over half of report requestors cited CEP's objectivity as a factor in their decision to request a report, underscoring the value of a neutral entity in an environment where clinical departments and hospital committees may have competing interests.[10] This asset was 1 of the primary drivers for establishing our hospital EPC. Concerns by clinical executives about the influence of industry and local politics on institutional decision making, and a desire to have clinical evidence more systematically and objectively integrated into decision making, fueled our center's funding.
The survey results also demonstrate that respondents were satisfied with the reports for many reasons, including readability, concision, timeliness, scope, and content, consistent with the evaluation of the French hospital‐based EPC CEDIT (French Committee for the Assessment and Dissemination of Technological Innovations).[29] Given the importance of readability, concision, and relevance that has been previously described,[16, 28, 30] nearly all CEP reports contain an evidence summary on the first page that highlights key findings in a concise, user‐friendly format.[31] The evidence summaries include bullet points that: (1) reference the most pertinent guideline recommendations along with their strength of recommendation and underlying quality of evidence; (2) organize and summarize study findings using the most critical clinical outcomes, including an assessment of the quality of the underlying evidence for each outcome; and (3) note important limitations of the findings.
Evidence syntheses must be timely to allow decision makers to act on the findings.[28, 32] The primary criticism of CEDIT was the lag between requests and report publication.[29] Rapid reviews, designed to inform urgent decisions, can overcome this challenge.[31, 33, 34] CEP reviews required approximately 2 months to complete on average, consistent with the most rapid timelines reported,[31, 33, 34] and much shorter than standard systematic review timelines, which can take up to 12 to 24 months.[33] Working with requestors to limit the scope of reviews to those issues most critical to a decision, using secondary resources when available, and hiring experienced research analysts help achieve these efficiencies.
The study by Bodeau‐Livinec also argues for the importance of report accessibility to ensure dissemination.[29] This is consistent with the CEP's approach, where all reports are posted on the UPHS internal website. Many also inform QI initiatives, as well as CDS interventions that address topics of general interest to acute care hospitals, such as venous thromboembolism (VTE) prophylaxis,[35] blood product transfusions,[36] sepsis care,[37, 38] and prevention of catheter‐associated urinary tract infections (CAUTI)[39] and hospital readmissions.[40] Most reports are also listed in an international database of rapid reviews,[23] and reports that address topics of general interest, have sufficient evidence to synthesize, and have no prior published systematic reviews are published in the peer‐reviewed literature.[41, 42]
The majority of reports completed by the CEP were evidence reviews, or systematic reviews of primary literature, suggesting that CEP reports often address questions previously unanswered by existing published systematic reviews; however, about a third of reports were evidence advisories, or summaries of evidence from preexisting secondary sources. The relative scarcity of high‐quality evidence bases in those reports where GRADE analyses were conducted might be expected, as requestors may be more likely to seek guidance when the evidence base on a topic is lacking. This was further supported by the small percentage (15%) of reports where adequate data of sufficient homogeneity existed to allow meta‐analyses. The small number of original meta‐analyses performed also reflects our reliance on secondary resources when available.
Only 7% of respondents reported that tentative decisions were changed based on their report. This is not surprising, as evidence reviews infrequently result in clear go or no go recommendations. More commonly, they address or inform complex clinical questions or pathways. In this context, the change/confirm/no effect framework may not completely reflect respondents' use of or benefit from reports. Thus, we included a diverse set of questions in our survey to best estimate the value of our reports. For example, when asked whether the report answered the question posed, informed their final decision, or was consistent with their final decision, 91%, 79%, and 71% agreed or strongly agreed, respectively. When asked whether they would request a report again if they had to do it all over, recommend CEP to their colleagues, and be likely to request reports in the future, at least 95% of survey respondents agreed or strongly agreed. In addition, no respondent indicated that their report was not timely enough to influence their decision. Moreover, only a minority of respondents expressed disappointment that the CEP's report did not provide actionable recommendations due to a lack of published evidence (9%, n = 4). Importantly, the large proportion of requestors indicating that reports confirmed their tentative decisions may be a reflection of hindsight bias.
The most apparent trend in the production of CEP reviews over time is the relative increase in requests by clinical departments, suggesting that the CEP is being increasingly consulted to help define best clinical practices. This is also supported by the relative increase in reports focused on policy or organizational/managerial systems. These findings suggest that hospital EPCs have value beyond the traditional realm of HTA.
This study has a number of limitations. First, not all of the eligible report requestors responded to our survey. Despite this, our response rate of 72% compares favorably with surveys published in medical journals.[43] In addition, nonresponse bias may be less important in physician surveys than surveys of the general population.[44] The similarity in requestor and report characteristics for respondents and nonrespondents supports this. Second, our survey of impact is self‐reported rather than an evaluation of actual decision making or patient outcomes. Thus, the survey relies on the accuracy of the responses. Third, recall bias must be considered, as some respondents were asked to evaluate reports that were greater than 1 year old. To reduce this bias, we asked respondents to consider the most recent report they requested, included that report as an attachment in the survey request, and only surveyed requestors from the most recent 4 of the CEP's 8 fiscal years. Fourth, social desirability bias could have also affected the survey responses, though it was likely minimized by the promise of confidentiality. Fifth, an examination of the impact of the CEP on costs was outside the scope of this evaluation; however, such information may be important to those assessing the sustainability or return on investment of such centers. 
Simple approaches we have previously used to approximate the value of our activities include: (1) estimating hospital cost savings resulting from decisions supported by our reports, such as the use of technologies like chlorhexidine for surgical site infections[45] or discontinuation of technologies like aprotinin for cardiac surgery[46]; and (2) estimating penalties avoided or rewards attained as a result of center‐led initiatives, such as those to increase VTE prophylaxis,[35] reduce CAUTI rates,[39] and reduce preventable mortality associated with sepsis.[37, 38] Similarly, given the focus of this study on the local evidence synthesis activities of our center, our examination did not include a detailed description of our CDS activities, or teaching activities, including our multidisciplinary workshops for physicians and nurses in evidence‐based QI[47] and our novel evidence‐based practice curriculum for medical students. Our study also did not include a description of our extramural activities, such as those supported by our contract with AHRQ as 1 of their 13 EPCs.[16, 17, 48, 49] A consideration of all of these activities enables a greater appreciation for the potential of such centers. Lastly, we examined a single EPC, which may not be representative of the diversity of hospitals and hospital staff across the United States. However, our EPC serves a diverse array of patient populations, clinical services, and service models throughout our multientity academic healthcare system, which may improve the generalizability of our experience to other settings.
As next steps, we recommend evaluation of other existing hospital EPCs nationally. Such studies could help hospitals and health systems ascertain which of their internal decisions might benefit from locally sourced rapid systematic reviews and determine whether an in‐house EPC could improve the value of care delivered.
In conclusion, our findings suggest that hospital EPCs within academic healthcare systems can efficiently synthesize and disseminate evidence for a variety of stakeholders. Moreover, these syntheses impact decision making in a variety of hospital contexts and clinical specialties. Hospitals and hospitalist leaders seeking to improve the implementation of evidence‐based practice at a systems level might consider establishing such infrastructure locally.
Acknowledgements
The authors thank Fran Barg, PhD (Department of Family Medicine and Community Health, University of Pennsylvania Perelman School of Medicine) and Joel Betesh, MD (University of Pennsylvania Health System) for their contributions to developing the survey. They did not receive any compensation for their contributions.
Disclosures: An earlier version of this work was presented as a poster at the 2014 AMA Research Symposium, November 7, 2014, Dallas, Texas. Mr. Jayakumar reports having received a University of Pennsylvania fellowship as a summer intern at the Center for Evidence‐based Practice. Dr. Umscheid cocreated and directs a hospital evidence‐based practice center, is the Senior Associate Director of an Agency for Healthcare Research and Quality Evidence‐Based Practice Center, and is a past member of the Medicare Evidence Development and Coverage Advisory Committee, which uses evidence reports developed by the Evidence‐based Practice Centers of the Agency for Healthcare Research and Quality. Dr. Umscheid's contribution was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. None of the funders had a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. Dr. Lavenberg, Dr. Mitchell, and Mr. Leas are employed as research analysts by a hospital evidence‐based practice center. Dr. Doshi is supported in part by a hospital evidence‐based practice center and is an Associate Director of an Agency for Healthcare Research and Quality Evidence‐based Practice Center. Dr. Goldmann is emeritus faculty at Penn, is supported in part by a hospital evidence‐based practice center, and is the Vice President and Chief Quality Assurance Officer in Clinical Solutions, a division of Elsevier, Inc., a global publishing company, and director of the division's Evidence‐based Medicine Center. Dr. Williams cocreated and codirects a hospital evidence‐based practice center. Dr. Brennan has oversight for and helped create a hospital evidence‐based practice center.
1. “Bench to behavior”: translating comparative effectiveness research into improved clinical practice. Health Aff (Millwood). 2010;29(10):1891–1900.
2. Evaluating the status of “translating research into practice” at a major academic healthcare system. Int J Technol Assess Health Care. 2009;25(1):84–89.
3. Five reasons that many comparative effectiveness studies fail to change patient care and clinical practice. Health Aff (Millwood). 2012;31(10):2168–2175.
4. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
5. From best evidence to best practice: effective implementation of change in patients' care. Lancet. 2003;362(9391):1225–1230.
6. Incentivizing “structures” over “outcomes” to bridge the knowing‐doing gap. JAMA Intern Med. 2015;175(3):354.
7. Olsen L, Aisner D, McGinnis JM, eds. Institute of Medicine (US) Roundtable on Evidence‐Based Medicine. The Learning Healthcare System: Workshop Summary. Washington, DC: National Academies Press; 2007. Available at: http://www.ncbi.nlm.nih.gov/books/NBK53494/. Accessed October 29, 2014.
8. Adapting clinical practice guidelines to local context and assessing barriers to their use. Can Med Assoc J. 2010;182(2):E78–E84.
9. Integrating local data into hospital‐based healthcare technology assessment: two case studies. Int J Technol Assess Health Care. 2010;26(3):294–300.
10. Hospital‐based comparative effectiveness centers: translating research into practice to improve the quality, safety and value of patient care. J Gen Intern Med. 2010;25(12):1352–1355.
11. Health technology assessment at the University of California‐San Francisco. J Healthc Manag Am Coll Healthc Exec. 2011;56(1):15–29; discussion 29–30.
12. Kaiser Permanente Southern California regional technology management process: evidence‐based medicine operationalized. Perm J. 2006;10(1):38–41.
13. Hospital‐based health technology assessment: developments to date. Pharmacoeconomics. 2014;32(9):819–824.
14. Hospital based health technology assessment world‐wide survey. Available at: http://www.htai.org/fileadmin/HTAi_Files/ISG/HospitalBasedHTA/2008Files/HospitalBasedHTAISGSurveyReport.pdf. Accessed October 11, 2015.
15. At the center of health care policy making: the use of health technology assessment at NICE. Med Decis Making. 2013;33(3):320–324.
16. Better information for better health care: the Evidence‐based Practice Center program and the Agency for Healthcare Research and Quality. Ann Intern Med. 2005;142(12 part 2):1035–1041.
17. AHRQ's Effective Health Care Program: why comparative effectiveness matters. Am J Med Qual. 2009;24(1):67–70.
18. Effect of clinical guidelines on medical practice: a systematic review of rigorous evaluations. Lancet. 1993;342(8883):1317–1322.
19. Lost in knowledge translation: time for a map? J Contin Educ Health Prof. 2006;26(1):13–24.
20. Effects and repercussions of local/hospital‐based health technology assessment (HTA): a systematic review. Syst Rev. 2014;3:129.
21. Impact of TAU Reports. McGill University Health Centre. Available at: https://francais.mcgill.ca/files/tau/FINAL_TAU_IMPACT_REPORT_FEB_2008.pdf. Published Feb 1, 2008. Accessed August 19, 2014.
22. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ. 2008;336(7650):924–926.
23. Centre for Reviews and Dissemination databases: value, content, and developments. Int J Technol Assess Health Care. 2010;26(4):470–472.
24. HTA 101. Introduction to Health Technology Assessment. Available at: https://www.nlm.nih.gov/nichsr/hta101/ta10103.html. Accessed October 11, 2015.
25. National Institute for Health Research. Remit. NIHR HTA Programme. Available at: http://www.nets.nihr.ac.uk/programmes/hta/remit. Accessed August 20, 2014.
26. Research Electronic Data Capture (REDCap)—a metadata‐driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–381.
27. When the decision is what to decide: using evidence inventory reports to focus health technology assessments. Int J Technol Assess Health Care. 2011;27(2):127–132.
28. End‐user involvement in health technology assessment (HTA) development: a way to increase impact. Int J Technol Assess Health Care. 2005;21(2):263–267.
29. Impact of CEDIT recommendations: an example of health technology assessment in a hospital network. Int J Technol Assess Health Care. 2006;22(2):161–168.
30. Increasing the relevance of research to health care managers: hospital CEO imperatives for improving quality and lowering costs. Health Care Manage Rev. 2007;32(2):150–159.
31. Evidence summaries: the evolution of a rapid review approach. Syst Rev. 2012;1(1):10.
32. Health care decision makers' use of comparative effectiveness research: report from a series of focus groups. J Manag Care Pharm. 2013;19(9):745–754.
33. Rapid reviews versus full systematic reviews: an inventory of current methods and practice in health technology assessment. Int J Technol Assess Health Care. 2008;24(2):133–139.
34. EPC Methods: An Exploration of Methods and Context for the Production of Rapid Reviews. Rockville, MD: Agency for Healthcare Research and Quality; 2015. Available at: http://www.ncbi.nlm.nih.gov/books/NBK274092. Accessed March 5, 2015.
35. Effectiveness of a novel and scalable clinical decision support intervention to improve venous thromboembolism prophylaxis: a quasi‐experimental study. BMC Med Inform Decis Mak. 2012;12:92.
36. Order sets in electronic health records: principles of good practice. Chest. 2013;143(1):228–235.
37. Development, implementation, and impact of an automated early warning and response system for sepsis. J Hosp Med. 2015;10(1):26–31.
38. Clinician perception of the effectiveness of an automated early warning and response system for sepsis in an academic medical center. Ann Am Thorac Soc. 2015;12(10):1514–1519.
39. Usability and impact of a computerized clinical decision support intervention designed to reduce urinary catheter utilization and catheter‐associated urinary tract infections. Infect Control Hosp Epidemiol. 2014;35(9):1147–1155.
40. The readmission risk flag: using the electronic health record to automatically identify patients at risk for 30‐day readmission. J Hosp Med. 2013;8(12):689–695.
41. A systematic review to inform institutional decisions about the use of extracorporeal membrane oxygenation during the H1N1 influenza pandemic. Crit Care Med. 2010;38(6):1398–1404.
42. Heparin flushing and other interventions to maintain patency of central venous catheters: a systematic review. J Adv Nurs. 2009;65(10):2007–2021.
43. Response rates to mail surveys published in medical journals. J Clin Epidemiol. 1997;50(10):1129–1136.
44. Physician response to surveys: a review of the literature. Am J Prev Med. 2001;20(1):61–67.
45. Systematic review and cost analysis comparing use of chlorhexidine with use of iodine for preoperative skin antisepsis to prevent surgical site infection. Infect Control Hosp Epidemiol. 2010;31(12):1219–1229.
46. Antifibrinolytic use in adult cardiac surgery. Curr Opin Hematol. 2007;14(5):455–467.
47. Teaching evidence assimilation for collaborative health care (TEACH) 2009–2014: building evidence‐based capacity within health care provider organizations. EGEMS (Wash DC). 2015;3(2):1165.
48. Cleaning hospital room surfaces to prevent health care‐associated infections: a technical brief [published online August 11, 2015]. Ann Intern Med. doi:10.7326/M15‐1192.
49. Healthcare Infection Control Practices Advisory Committee. Updating the guideline development methodology of the Healthcare Infection Control Practices Advisory Committee (HICPAC). Am J Infect Control. 2010;38(4):264–273.
50. U.S. Food and Drug Administration. FDA basics—What is a medical device? Available at: http://www.fda.gov/AboutFDA/Transparency/Basics/ucm211822.htm. Accessed November 12, 2014.
Hospital evidence‐based practice centers (EPCs) are structures with the potential to facilitate the integration of evidence into institutional decision making to close knowing‐doing gaps[1, 2, 3, 4, 5, 6]; in the process, they can support the evolution of their parent institutions into learning healthcare systems.[7] The potential of hospital EPCs stems from their ability to identify and adapt national evidence‐based guidelines and systematic reviews for the local setting,[8] create local evidence‐based guidelines in the absence of national guidelines, use local data to help define problems and assess the impact of solutions,[9] and implement evidence into practice through computerized clinical decision support (CDS) interventions and other quality‐improvement (QI) initiatives.[9, 10] As such, hospital EPCs have the potential to strengthen relationships and understanding between clinicians and administrators[11]; foster a culture of evidence‐based practice; and improve the quality, safety, and value of care provided.[10]
Formal hospital EPCs remain uncommon in the United States,[10, 11, 12] though their numbers have expanded worldwide.[13, 14] This growth is due not to any reduced role for national EPCs, such as the National Institute for Health and Clinical Excellence[15] in the United Kingdom, or the 13 EPCs funded by the Agency for Healthcare Research and Quality (AHRQ)[16, 17] in the United States. Rather, this growth is fueled by the heightened awareness that the value of healthcare interventions often needs to be assessed locally, and that clinical guidelines that consider local context have a greater potential to improve quality and efficiency.[9, 18, 19]
Despite the increasing number of hospital EPCs globally, their impact on administrative and clinical decision making has rarely been examined,[13, 20] especially for hospital EPCs in the United States. The few studies that have assessed the impact of hospital EPCs on institutional decision making have done so in the context of technology acquisition, neglecting the role hospital EPCs may play in the integration of evidence into clinical practice. For example, the Technology Assessment Unit at McGill University Health Center found that of the 27 reviews commissioned in their first 5 years, 25 were implemented, with 6 (24%) recommending investments in new technologies and 19 (76%) recommending rejection, for a reported net hospital savings of $10 million.[21] Understanding the activities and impact of hospital EPCs is particularly critical for hospitalist leaders, who could leverage hospital EPCs to inform efforts to support the quality, safety, and value of care provided, or who may choose to establish or lead such infrastructure. The availability of such opportunities could also support hospitalist recruitment and retention.
In 2006, the University of Pennsylvania Health System (UPHS) created the Center for Evidence‐based Practice (CEP) to support the integration of evidence into practice to strengthen quality, safety, and value.[10] Cofounded by hospitalists with formal training in clinical epidemiology, the CEP performs rapid systematic reviews of the scientific literature to inform local practice and policy. In this article, we describe the first 8 years of the CEP's evidence synthesis activities and examine its impact on decision making across the health system.
METHODS
Setting
The UPHS includes 3 acute care hospitals and inpatient facilities specializing in acute rehabilitation, skilled nursing, long‐term acute care, and hospice, with a capacity of more than 1800 beds and 75,000 annual admissions, as well as primary care and specialty clinics with more than 2 million annual outpatient visits. The CEP is funded by and organized within the Office of the UPHS Chief Medical Officer, serves all UPHS facilities, has an annual budget of approximately $1 million, and is currently staffed by a hospitalist director, 3 research analysts, 6 physician and nurse liaisons, a health economist, a biostatistician, an administrator, and librarians, totaling 5.5 full‐time equivalents.
The mission of the CEP is to support the quality, safety, and value of care at Penn through evidence‐based practice. To accomplish this mission, the CEP performs rapid systematic reviews, translates evidence into practice through the use of CDS interventions and clinical pathways, and offers education in evidence‐based decision making to trainees, staff, and faculty. This study is focused on the CEP's evidence synthesis activities.
Typically, clinical and administrative leaders submit a request to the CEP for an evidence review, the request is discussed and approved at the weekly staff meeting, and a research analyst and clinical liaison are assigned to the request and communicate with the requestor to clearly define the question of interest. Subsequently, the research analyst completes a protocol, a draft search, and a draft report, each reviewed and approved by the clinical liaison and requestor. The final report is posted to the website, disseminated to all key stakeholders across the UPHS as identified by the clinical liaisons, and integrated into decision making through various routes, including in‐person presentations to decision makers, and CDS and QI initiatives.
Study Design
The study included an analysis of an internal database of evidence reviews and a survey of report requestors, and was exempted from institutional review board review. Survey respondents were informed that their responses would be confidential and did not receive incentives.
Internal Database of Reports
Data from the CEP's internal management database were analyzed for its first 8 fiscal years (July 2006–June 2014). Variables included requestor characteristics, report characteristics (eg, technology reviewed, clinical specialty examined, completion time, and performance of meta‐analyses and GRADE [Grading of Recommendations Assessment, Development and Evaluation] analyses[22]), report use (eg, integration of the report into CDS interventions), and dissemination beyond the UPHS (eg, submission to the Center for Reviews and Dissemination [CRD] Health Technology Assessment [HTA] database[23] and to peer‐reviewed journals). Report completion time was defined as the time between the date work began on the report and the date the final report was sent to the requestor. The technology categorization scheme was adapted from that provided by Goodman (2004)[24] and the UK National Institute for Health Research HTA Programme.[25] We systematically assigned the technology reviewed in each report to 1 of 8 mutually exclusive categories. The clinical specialty examined in each report was determined using an algorithm (see Supporting Information, Appendix 1, in the online version of this article).
We compared the report completion times and the proportions of requestor types, technologies reviewed, and clinical specialties examined in the CEP's first 4 fiscal years (July 2006–June 2010) to those in the CEP's second 4 fiscal years (July 2010–June 2014) using t tests and χ2 tests for continuous and categorical variables, respectively.
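The article does not publish its analysis code, but the categorical comparisons can be illustrated with a Pearson χ2 test on a 2×2 table. A minimal pure-Python sketch, using the clinical-department request counts reported in Table 2 (22 of 109 reports in the first period vs 50 of 140 in the second) and assuming no continuity correction:

```python
from math import erfc, sqrt

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, two-sided p)."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    stat = 0.0
    for obs, r, col in ((a, row1, col1), (b, row1, col2),
                        (c, row2, col1), (d, row2, col2)):
        exp = r * col / n  # expected count under independence
        stat += (obs - exp) ** 2 / exp
    # For df = 1, the chi-square survival function is erfc(sqrt(x/2)).
    p = erfc(sqrt(stat / 2))
    return stat, p

# Clinical-department requests: 22 of 109 (2007-2010) vs 50 of 140 (2011-2014)
stat, p = chi2_2x2(22, 109 - 22, 50, 140 - 50)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")  # p rounds to 0.007, matching Table 2
```

The same uncorrected test reproduces the P value reported in Table 2 for this comparison, which suggests (though the paper does not say) that corrections for continuity were not applied.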
Survey
We conducted a Web‐based survey (see Supporting Information, Appendix 2, in the online version of this article) of all requestors of the 139 rapid reviews completed in the last 4 fiscal years. Participants who requested multiple reports were surveyed only about the most recent report. Requestors were invited to participate in the survey via e‐mail, and follow‐up e‐mails were sent to nonrespondents at 7, 14, and 16 days. Nonrespondents and respondents were compared with respect to requestor type (physician vs nonphysician) and topic evaluated (traditional HTA topics such as drugs, biologics, and devices vs nontraditional HTA topics such as processes of care). The survey was administered using REDCap[26] electronic data capture tools. The 44‐item questionnaire collected data on the interaction between the requestor and the CEP, report characteristics, report impact, and requestor satisfaction.
Survey results were imported into Microsoft Excel (Microsoft Corp, Redmond, WA) and SPSS (IBM, Armonk, NY) for analysis. Descriptive statistics were generated, and statistical comparisons were conducted using χ2 and Fisher exact tests.
RESULTS
Evidence Synthesis Activity
The CEP has produced several different report products since its inception. Evidence reviews (57%, n = 142) consist of a systematic review and analysis of the primary literature. Evidence advisories (32%, n = 79) are summaries of evidence from secondary sources such as guidelines or systematic reviews. Evidence inventories (3%, n = 7) are literature searches that describe the quantity and focus of available evidence, without analysis or synthesis.[27]
The categories of technologies examined, including their definitions and examples, are provided in Table 1. Drugs (24%, n = 60) and devices/equipment/supplies (19%, n = 48) were most commonly examined. The proportion of reports examining technology types traditionally evaluated by HTA organizations significantly decreased when comparing the first 4 years of CEP activity to the second 4 years (62% vs 38%, P < 0.01), whereas reports examining less traditionally reviewed categories increased (38% vs 62%, P < 0.01). The most common clinical specialties represented by the CEP reports were nursing (11%, n = 28), general surgery (11%, n = 28), critical care (10%, n = 24), and general medicine (9%, n = 22) (see Supporting Information, Appendix 3, in the online version of this article). Clinical departments were the most common requestors (29%, n = 72) (Table 2). The proportion of requests from clinical departments significantly increased when comparing the first 4 years to the second 4 years (20% vs 36%, P < 0.01), with requests from purchasing committees significantly decreasing (25% vs 6%, P < 0.01). The mean report completion time was 70 days, and significantly decreased when comparing the first 4 years to the second 4 years (89 days vs 50 days, P < 0.01).
Category | Definition | Examples | Total | 2007–2010 | 2011–2014 | P Value
---|---|---|---|---|---|---
Total | | | 249 (100%) | 109 (100%) | 140 (100%) |
Drug | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a pharmacologic agent | Celecoxib for pain in joint arthroplasty; colchicine for prevention of pericarditis and atrial fibrillation | 60 (24%) | 35 (32%) | 25 (18%) | 0.009 |
Device, equipment, and supplies | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory that is intended for use in the prevention, diagnosis, or treatment of disease and does not achieve its primary intended purposes through chemical action or metabolism[50] | Thermometers for pediatric use; femoral closure devices for cardiac catheterization | 48 (19%) | 25 (23%) | 23 (16%) | 0.19 |
Process of care | A report primarily examining a clinical pathway or a clinical practice guideline that significantly involves elements of prevention, diagnosis, and/or treatment or significantly incorporates 2 or more of the other technology categories | Preventing patient falls; prevention and management of delirium | 31 (12%) | 18 (17%) | 13 (9%) | 0.09 |
Test, scale, or risk factor | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a test intended to screen for, diagnose, classify, or monitor the progression of a disease | Computed tomography for acute chest pain; urine drug screening in chronic pain patients on opioid therapy | 31 (12%) | 8 (7%) | 23 (16%) | 0.03 |
Medical/surgical procedure | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a medical intervention that is not a drug, device, or test or of the application or removal of a device | Biliary drainage for chemotherapy patients; cognitive behavioral therapy for insomnia | 26 (10%) | 8 (7%) | 18 (13%) | 0.16 |
Policy or organizational/managerial system | A report primarily examining laws or regulations; the organization, financing, or delivery of care, including settings of care; or healthcare providers | Medical care costs and productivity changes associated with smoking; physician training and credentialing for robotic surgery in obstetrics and gynecology | 26 (10%) | 4 (4%) | 22 (16%) | 0.002 |
Support system | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an intervention designed to provide a new or improved service to patients or healthcare providers that does not fall into 1 of the other categories | Reconciliation of data from differing electronic medical records; social media, text messaging, and postdischarge communication | 14 (6%) | 3 (3%) | 11 (8%) | 0.09 |
Biologic | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a product manufactured in a living system | Recombinant factor VIIa for cardiovascular surgery; osteobiologics for orthopedic fusions | 13 (5%) | 8 (7%) | 5 (4%) | 0.19 |
Category | Total | 2007–2010 | 2011–2014 | P Value
---|---|---|---|---
Total | 249 (100%) | 109 (100%) | 140 (100%) |
Clinical department | 72 (29%) | 22 (20%) | 50 (36%) | 0.007 |
CMO | 47 (19%) | 21 (19%) | 26 (19%) | 0.92 |
Purchasing committee | 35 (14%) | 27 (25%) | 8 (6%) | <0.001 |
Formulary committee | 22 (9%) | 12 (11%) | 10 (7%) | 0.54 |
Quality committee | 21 (8%) | 11 (10%) | 10 (7%) | 0.42 |
Administrative department | 19 (8%) | 5 (5%) | 14 (10%) | 0.11 |
Nursing | 14 (6%) | 4 (4%) | 10 (7%) | 0.23 |
Other* | 19 (8%) | 7 (6%) | 12 (9%) | 0.55 |
Thirty‐seven (15%) reports included meta‐analyses conducted by CEP staff. Seventy‐five reports (30%) contained an evaluation of the quality of the evidence base using GRADE analyses.[22] Of these reports, the highest GRADE of evidence available for any comparison of interest was moderate (35%, n = 26) or high (33%, n = 25) in most cases, followed by very low (19%, n = 14) and low (13%, n = 10).
Reports were disseminated in a variety of ways beyond direct dissemination and presentation to requestors and posting on the center website. Thirty reports (12%) informed CDS interventions, 24 (10%) resulted in peer‐reviewed publications, and 204 (82%) were posted to the CRD HTA database.
Evidence Synthesis Impact
A total of 139 reports were completed between July 2010 and June 2014 for 65 individual requestors. Email invitations to participate in the survey were sent to the 64 requestors employed by the UPHS. The response rate was 72% (46/64). The proportions of physician requestors and traditional HTA topics evaluated were similar across respondents and nonrespondents (43% [20/46] vs 39% [7/18], P = 0.74; and 37% [17/46] vs 44% [8/18], P = 0.58, respectively). Aggregated survey responses are presented for items using a Likert scale in Figure 1, and for items using a yes/no or ordinal scale in Table 3.
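The respondent-versus-nonrespondent comparisons above can be checked with a pooled two-proportion z-test, which is algebraically equivalent to an uncorrected χ2 test on the 2×2 table. A sketch in pure Python, using the counts reported in the text (the exact test the authors ran in SPSS is an assumption here):

```python
from math import erfc, sqrt

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate
    (equivalent to an uncorrected chi-square test on the 2x2 table)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)  # proportion under the null of no difference
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return erfc(abs(z) / sqrt(2))  # two-sided p from the normal tail

# Physician requestors: 20/46 respondents vs 7/18 nonrespondents
print(round(two_prop_z(20, 46, 7, 18), 2))  # 0.74, as reported

# Traditional HTA topics: 17/46 respondents vs 8/18 nonrespondents
print(round(two_prop_z(17, 46, 8, 18), 2))  # 0.58, as reported
```

Both computed P values match those in the text, supporting the authors' conclusion that respondents and nonrespondents were similar on these characteristics.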
Items | % of Respondents Responding Affirmatively or Ranking as First Choice*
---|---
Requestor activity | |
What factors prompted you to request a report from CEP? (Please select all that apply.) | |
My own time constraints | 28% (13/46) |
CEP's ability to identify and synthesize evidence | 89% (41/46) |
CEP's objectivity | 52% (24/46) |
Recommendation from colleague | 30% (14/46) |
Did you conduct any of your own literature searches before contacting CEP? | 67% (31/46) |
Did you obtain and read any of the articles cited in CEP's report? | 63% (29/46) |
Did you read the following sections of CEP's report? | |
Evidence summary (at beginning of report) | 100% (45/45) |
Introduction/background | 93% (42/45) |
Methods | 84% (38/45) |
Results | 98% (43/43) |
Conclusion | 100% (43/43) |
Report dissemination | |
Did you share CEP's report with anyone NOT involved in requesting the report or in making the final decision? | 67% (30/45) |
Did you share CEP's report with anyone outside of Penn? | 7% (3/45) |
Requestor preferences | |
Would it be helpful for CEP staff to call you after you receive any future CEP reports to answer any questions you might have? | 55% (24/44) |
Following any future reports you request from CEP, would you be willing to complete a brief questionnaire? | 100% (44/44) |
Please rank how you would prefer to receive reports from CEP in the future. | |
E‐mail containing the report as a PDF attachment | 77% (34/44) |
E‐mail containing a link to the report on CEP's website | 16% (7/44) |
In‐person presentation by the CEP analyst writing the report | 18% (8/44) |
In‐person presentation by the CEP director involved in the report | 16% (7/44) |
In general, respondents found reports easy to request, easy to use, timely, and relevant, resulting in high requestor satisfaction. In addition, 98% described the scope of content and level of detail as about right. Report impact was rated highly as well, with the evidence summary and conclusions rated as the most critical to decision making. A majority of respondents indicated that reports confirmed their tentative decision (77%, n = 34), whereas some changed their tentative decision (7%, n = 3), and others suggested the report had no effect on their tentative decision (16%, n = 7). Respondents indicated the amount of time that elapsed between receiving reports and making final decisions was 1 to 7 days (5%, n = 2), 8 to 30 days (40%, n = 17), 1 to 3 months (37%, n = 16), 4 to 6 months (9%, n = 4), or greater than 6 months (9%, n = 4). The most common reasons cited for requesting a report were the CEP's evidence synthesis skills and objectivity.
DISCUSSION
To our knowledge, this is the first comprehensive description and assessment of evidence synthesis activity by a hospital EPC in the United States. Our findings suggest that clinical and administrative leaders will request reports from a hospital EPC, and that hospital EPCs can promptly produce reports when requested. Moreover, these syntheses can address a wide range of clinical and policy topics, and can be disseminated through a variety of routes. Lastly, requestors are satisfied by these syntheses, and report that they inform decision making. These results suggest that EPCs may be an effective infrastructure paradigm for promoting evidence‐based decision making within healthcare provider organizations, and are consistent with previous analyses of hospital‐based EPCs.[21, 28, 29]
Over half of report requestors cited CEP's objectivity as a factor in their decision to request a report, underscoring the value of a neutral entity in an environment where clinical departments and hospital committees may have competing interests.[10] This asset was 1 of the primary drivers for establishing our hospital EPC. Concerns by clinical executives about the influence of industry and local politics on institutional decision making, and a desire to have clinical evidence more systematically and objectively integrated into decision making, fueled our center's funding.
The survey results also demonstrate that respondents were satisfied with the reports for many reasons, including readability, concision, timeliness, scope, and content, consistent with the evaluation of the French hospital‐based EPC CEDIT (French Committee for the Assessment and Dissemination of Technological Innovations).[29] Given the importance of readability, concision, and relevance that has been previously described,[16, 28, 30] nearly all CEP reports contain an evidence summary on the first page that highlights key findings in a concise, user‐friendly format.[31] The evidence summaries include bullet points that: (1) reference the most pertinent guideline recommendations along with their strength of recommendation and underlying quality of evidence; (2) organize and summarize study findings using the most critical clinical outcomes, including an assessment of the quality of the underlying evidence for each outcome; and (3) note important limitations of the findings.
Evidence syntheses must be timely to allow decision makers to act on the findings.[28, 32] The primary criticism of CEDIT was the lag between requests and report publication.[29] Rapid reviews, designed to inform urgent decisions, can overcome this challenge.[31, 33, 34] CEP reviews required approximately 2 months to complete on average, consistent with the most rapid timelines reported,[31, 33, 34] and much shorter than standard systematic review timelines, which can take 12 to 24 months.[33] Working with requestors to limit the scope of reviews to those issues most critical to a decision, using secondary resources when available, and hiring experienced research analysts help achieve these efficiencies.
The study by Bodeau‐Livinec also argues for the importance of report accessibility to ensure dissemination.[29] This is consistent with the CEP's approach, where all reports are posted on the UPHS internal website. Many also inform QI initiatives, as well as CDS interventions that address topics of general interest to acute care hospitals, such as venous thromboembolism (VTE) prophylaxis,[35] blood product transfusions,[36] sepsis care,[37, 38] and prevention of catheter‐associated urinary tract infections (CAUTI)[39] and hospital readmissions.[40] Most reports are also listed in an international database of rapid reviews,[23] and reports that address topics of general interest, have sufficient evidence to synthesize, and have no prior published systematic reviews are published in the peer‐reviewed literature.[41, 42]
The majority of reports completed by the CEP were evidence reviews, or systematic reviews of primary literature, suggesting that CEP reports often address questions previously unanswered by existing published systematic reviews; however, about a third of reports were evidence advisories, or summaries of evidence from preexisting secondary sources. The relative scarcity of high‐quality evidence bases in those reports where GRADE analyses were conducted might be expected, as requestors may be more likely to seek guidance when the evidence base on a topic is lacking. This was further supported by the small percentage (15%) of reports where adequate data of sufficient homogeneity existed to allow meta‐analyses. The small number of original meta‐analyses performed also reflects our reliance on secondary resources when available.
Only 7% of respondents reported that tentative decisions were changed based on their report. This is not surprising, as evidence reviews infrequently result in clear go or no go recommendations. More commonly, they address or inform complex clinical questions or pathways. In this context, the change/confirm/no effect framework may not completely reflect respondents' use of or benefit from reports. Thus, we included a diverse set of questions in our survey to best estimate the value of our reports. For example, when asked whether the report answered the question posed, informed their final decision, or was consistent with their final decision, 91%, 79%, and 71% agreed or strongly agreed, respectively. When asked whether they would request a report again if they had to do it all over, recommend CEP to their colleagues, and be likely to request reports in the future, at least 95% of survey respondents agreed or strongly agreed. In addition, no respondent indicated that their report was not timely enough to influence their decision. Moreover, only a minority of respondents expressed disappointment that the CEP's report did not provide actionable recommendations due to a lack of published evidence (9%, n = 4). Importantly, the large proportion of requestors indicating that reports confirmed their tentative decisions may be a reflection of hindsight bias.
The most apparent trend in the production of CEP reviews over time is the relative increase in requests by clinical departments, suggesting that the CEP is being increasingly consulted to help define best clinical practices. This is also supported by the relative increase in reports focused on policy or organizational/managerial systems. These findings suggest that hospital EPCs have value beyond the traditional realm of HTA.
This study has a number of limitations. First, not all of the eligible report requestors responded to our survey. Despite this, our response rate of 72% compares favorably with surveys published in medical journals.[43] In addition, nonresponse bias may be less important in physician surveys than surveys of the general population.[44] The similarity in requestor and report characteristics for respondents and nonrespondents supports this. Second, our survey of impact is self‐reported rather than an evaluation of actual decision making or patient outcomes. Thus, the survey relies on the accuracy of the responses. Third, recall bias must be considered, as some respondents were asked to evaluate reports that were greater than 1 year old. To reduce this bias, we asked respondents to consider the most recent report they requested, included that report as an attachment in the survey request, and only surveyed requestors from the most recent 4 of the CEP's 8 fiscal years. Fourth, social desirability bias could have also affected the survey responses, though it was likely minimized by the promise of confidentiality. Fifth, an examination of the impact of the CEP on costs was outside the scope of this evaluation; however, such information may be important to those assessing the sustainability or return on investment of such centers. 
Simple approaches we have previously used to approximate the value of our activities include: (1) estimating hospital cost savings resulting from decisions supported by our reports, such as the use of technologies like chlorhexidine for surgical site infections[45] or discontinuation of technologies like aprotinin for cardiac surgery[46]; and (2) estimating penalties avoided or rewards attained as a result of center‐led initiatives, such as those to increase VTE prophylaxis,[35] reduce CAUTI rates,[39] and reduce preventable mortality associated with sepsis.[37, 38] Similarly, given the focus of this study on the local evidence synthesis activities of our center, our examination did not include a detailed description of our CDS activities, or teaching activities, including our multidisciplinary workshops for physicians and nurses in evidence‐based QI[47] and our novel evidence‐based practice curriculum for medical students. Our study also did not include a description of our extramural activities, such as those supported by our contract with AHRQ as 1 of their 13 EPCs.[16, 17, 48, 49] A consideration of all of these activities enables a greater appreciation for the potential of such centers. Lastly, we examined a single EPC, which may not be representative of the diversity of hospitals and hospital staff across the United States. However, our EPC serves a diverse array of patient populations, clinical services, and service models throughout our multientity academic healthcare system, which may improve the generalizability of our experience to other settings.
As next steps, we recommend evaluation of other existing hospital EPCs nationally. Such studies could help hospitals and health systems ascertain which of their internal decisions might benefit from locally sourced rapid systematic reviews and determine whether an in‐house EPC could improve the value of care delivered.
In conclusion, our findings suggest that hospital EPCs within academic healthcare systems can efficiently synthesize and disseminate evidence for a variety of stakeholders. Moreover, these syntheses impact decision making in a variety of hospital contexts and clinical specialties. Hospitals and hospitalist leaders seeking to improve the implementation of evidence‐based practice at a systems level might consider establishing such infrastructure locally.
Acknowledgements
The authors thank Fran Barg, PhD (Department of Family Medicine and Community Health, University of Pennsylvania Perelman School of Medicine) and Joel Betesh, MD (University of Pennsylvania Health System) for their contributions to developing the survey. They did not receive any compensation for their contributions.
Disclosures: An earlier version of this work was presented as a poster at the 2014 AMA Research Symposium, November 7, 2014, Dallas, Texas. Mr. Jayakumar reports having received a University of Pennsylvania fellowship as a summer intern at the Center for Evidence‐based Practice. Dr. Umscheid cocreated and directs a hospital evidence‐based practice center, is the Senior Associate Director of an Agency for Healthcare Research and Quality Evidence‐Based Practice Center, and is a past member of the Medicare Evidence Development and Coverage Advisory Committee, which uses evidence reports developed by the Evidence‐based Practice Centers of the Agency for Healthcare Research and Quality. Dr. Umscheid's contribution was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. None of the funders had a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. Dr. Lavenberg, Dr. Mitchell, and Mr. Leas are employed as research analysts by a hospital evidence‐based practice center. Dr. Doshi is supported in part by a hospital evidence‐based practice center and is an Associate Director of an Agency for Healthcare Research and Quality Evidence‐based Practice Center. Dr. Goldmann is emeritus faculty at Penn, is supported in part by a hospital evidence‐based practice center, and is the Vice President and Chief Quality Assurance Officer in Clinical Solutions, a division of Elsevier, Inc., a global publishing company, and director of the division's Evidence‐based Medicine Center. Dr. Williams cocreated and codirects a hospital evidence‐based practice center. Dr. Brennan has oversight for and helped create a hospital evidence‐based practice center.
Hospital evidence‐based practice centers (EPCs) are structures with the potential to facilitate the integration of evidence into institutional decision making to close knowing‐doing gaps[1, 2, 3, 4, 5, 6]; in the process, they can support the evolution of their parent institutions into learning healthcare systems.[7] The potential of hospital EPCs stems from their ability to identify and adapt national evidence‐based guidelines and systematic reviews for the local setting,[8] create local evidence‐based guidelines in the absence of national guidelines, use local data to help define problems and assess the impact of solutions,[9] and implement evidence into practice through computerized clinical decision support (CDS) interventions and other quality‐improvement (QI) initiatives.[9, 10] As such, hospital EPCs have the potential to strengthen relationships and understanding between clinicians and administrators[11]; foster a culture of evidence‐based practice; and improve the quality, safety, and value of care provided.[10]
Formal hospital EPCs remain uncommon in the United States,[10, 11, 12] though their numbers have expanded worldwide.[13, 14] This growth is due not to any reduced role for national EPCs, such as the National Institute for Health and Clinical Excellence[15] in the United Kingdom, or the 13 EPCs funded by the Agency for Healthcare Research and Quality (AHRQ)[16, 17] in the United States. Rather, this growth is fueled by the heightened awareness that the value of healthcare interventions often needs to be assessed locally, and that clinical guidelines that consider local context have a greater potential to improve quality and efficiency.[9, 18, 19]
Despite the increasing number of hospital EPCs globally, their impact on administrative and clinical decision making has rarely been examined,[13, 20] especially for hospital EPCs in the United States. The few studies that have assessed the impact of hospital EPCs on institutional decision making have done so in the context of technology acquisition, neglecting the role hospital EPCs may play in the integration of evidence into clinical practice. For example, the Technology Assessment Unit at McGill University Health Center found that of the 27 reviews commissioned in their first 5 years, 25 were implemented, with 6 (24%) recommending investments in new technologies and 19 (76%) recommending rejection, for a reported net hospital savings of $10 million.[21] Understanding the activities and impact of hospital EPCs is particularly critical for hospitalist leaders, who could leverage hospital EPCs to inform efforts to support the quality, safety, and value of care provided, or who may choose to establish or lead such infrastructure. The availability of such opportunities could also support hospitalist recruitment and retention.
In 2006, the University of Pennsylvania Health System (UPHS) created the Center for Evidence‐based Practice (CEP) to support the integration of evidence into practice to strengthen quality, safety, and value.[10] Cofounded by hospitalists with formal training in clinical epidemiology, the CEP performs rapid systematic reviews of the scientific literature to inform local practice and policy. In this article, we describe the first 8 years of the CEP's evidence synthesis activities and examine its impact on decision making across the health system.
METHODS
Setting
The UPHS includes 3 acute care hospitals, and inpatient facilities specializing in acute rehabilitation, skilled nursing, long‐term acute care, and hospice, with a capacity of more than 1800 beds and 75,000 annual admissions, as well as primary care and specialty clinics with more than 2 million annual outpatient visits. The CEP is funded by and organized within the Office of the UPHS Chief Medical Officer, serves all UPHS facilities, has an annual budget of approximately $1 million, and is currently staffed by a hospitalist director, 3 research analysts, 6 physician and nurse liaisons, a health economist, biostatistician, administrator, and librarians, totaling 5.5 full time equivalents.
The mission of the CEP is to support the quality, safety, and value of care at Penn through evidence‐based practice. To accomplish this mission, the CEP performs rapid systematic reviews, translates evidence into practice through the use of CDS interventions and clinical pathways, and offers education in evidence‐based decision making to trainees, staff, and faculty. This study is focused on the CEP's evidence synthesis activities.
Typically, clinical and administrative leaders submit a request to the CEP for an evidence review, the request is discussed and approved at the weekly staff meeting, and a research analyst and clinical liaison are assigned to the request and communicate with the requestor to clearly define the question of interest. Subsequently, the research analyst completes a protocol, a draft search, and a draft report, each reviewed and approved by the clinical liaison and requestor. The final report is posted to the website, disseminated to all key stakeholders across the UPHS as identified by the clinical liaisons, and integrated into decision making through various routes, including in‐person presentations to decision makers, and CDS and QI initiatives.
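The workflow above is a simple linear pipeline. As a rough sketch only (the stage names below are our own illustrative labels, not CEP terminology), it could be modeled as:

```python
from typing import Optional

# Hypothetical model of the review-request workflow described above.
# Stage names are illustrative labels, not official CEP terminology.
STAGES = [
    "submitted",         # clinical or administrative leader submits a request
    "approved",          # request discussed and approved at the weekly staff meeting
    "question_defined",  # analyst and liaison refine the question with the requestor
    "protocol_drafted",  # protocol and draft search, reviewed by liaison and requestor
    "report_drafted",    # draft report, reviewed by liaison and requestor
    "finalized",         # final report sent to the requestor
    "disseminated",      # posted to the website, shared with stakeholders, CDS/QI integration
]

def next_stage(current: str) -> Optional[str]:
    """Return the stage that follows `current`, or None once dissemination is done."""
    i = STAGES.index(current)
    return STAGES[i + 1] if i + 1 < len(STAGES) else None
```

For example, `next_stage("approved")` returns `"question_defined"`, and `next_stage("disseminated")` returns `None`.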
Study Design
The study included an analysis of an internal database of evidence reviews and a survey of report requestors, and was exempted from institutional review board review. Survey respondents were informed that their responses would be confidential and did not receive incentives.
Internal Database of Reports
Data from the CEP's internal management database were analyzed for its first 8 fiscal years (July 2006–June 2014). Variables included requestor characteristics; report characteristics (eg, technology reviewed, clinical specialty examined, completion time, and performance of meta‐analyses and GRADE [Grading of Recommendations Assessment, Development and Evaluation] analyses[22]); report use (eg, integration of report into CDS interventions); and dissemination beyond the UPHS (eg, submission to the Center for Reviews and Dissemination [CRD] Health Technology Assessment [HTA] database[23] and to peer‐reviewed journals). Report completion time was defined as the time between the date work began on the report and the date the final report was sent to the requestor. The technology categorization scheme was adapted from that provided by Goodman (2004)[24] and the UK National Institute for Health Research HTA Programme.[25] We systematically assigned the technology reviewed in each report to 1 of 8 mutually exclusive categories. The clinical specialty examined in each report was determined using an algorithm (see Supporting Information, Appendix 1, in the online version of this article).
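The specialty-assignment algorithm itself appears in the online appendix and is not reproduced here. Purely as an illustration of how such an assignment scheme can work, a keyword-lookup sketch (every keyword and mapping below is invented for this example) might look like:

```python
# Illustrative only: the real assignment algorithm is in the study's online appendix.
# All keyword-to-specialty mappings here are invented for demonstration.
SPECIALTY_KEYWORDS = {
    "catheterization": "cardiology",
    "arthroplasty": "orthopedics",
    "delirium": "general medicine",
    "falls": "nursing",
}

def assign_specialty(report_title: str, default: str = "unclassified") -> str:
    """Assign the first specialty whose keyword appears in the report title."""
    title = report_title.lower()
    for keyword, specialty in SPECIALTY_KEYWORDS.items():
        if keyword in title:
            return specialty
    return default
```

Under these invented mappings, a title such as "Femoral closure devices for cardiac catheterization" would be assigned to cardiology, while a title matching no keyword falls back to the default.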
We compared the report completion times and the proportions of requestor types, technologies reviewed, and clinical specialties examined in the CEP's first 4 fiscal years (July 2006–June 2010) to those in the CEP's second 4 fiscal years (July 2010–June 2014) using t tests and χ2 tests for continuous and categorical variables, respectively.
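These between-period comparisons can be reproduced from the published counts. A minimal sketch, assuming a Pearson χ2 test without continuity correction and using the clinical-department request counts reported in Table 2:

```python
from math import erfc, sqrt

# Counts from Table 2: requests from clinical departments vs. all other requestors,
# first 4 fiscal years (22 of 109) vs. second 4 fiscal years (50 of 140).
a, b = 22, 109 - 22   # 2007-2010: clinical department, other
c, d = 50, 140 - 50   # 2011-2014: clinical department, other
n = a + b + c + d

# Pearson chi-square statistic for a 2x2 table (no continuity correction).
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# With 1 degree of freedom, the chi-square survival function is erfc(sqrt(x/2)).
p = erfc(sqrt(chi2 / 2))
print(round(p, 3))  # 0.007, matching the P value reported in Table 2
```

The same two-by-two construction applies to any of the categorical between-period comparisons in Tables 1 and 2.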
Survey
We conducted a Web‐based survey (see Supporting Information, Appendix 2, in the online version of this article) of all requestors of the 139 rapid reviews completed in the last 4 fiscal years. Participants who requested multiple reports were surveyed only about the most recent report. Requestors were invited to participate in the survey via e‐mail, and follow‐up e‐mails were sent to nonrespondents at 7, 14, and 16 days. Nonrespondents and respondents were compared with respect to requestor type (physician vs nonphysician) and topic evaluated (traditional HTA topics such as drugs, biologics, and devices vs nontraditional HTA topics such as processes of care). The survey was administered using REDCap[26] electronic data capture tools. The 44‐item questionnaire collected data on the interaction between the requestor and the CEP, report characteristics, report impact, and requestor satisfaction.
Survey results were imported into Microsoft Excel (Microsoft Corp, Redmond, WA) and SPSS (IBM, Armonk, NY) for analysis. Descriptive statistics were generated, and statistical comparisons were conducted using χ2 and Fisher exact tests.
RESULTS
Evidence Synthesis Activity
The CEP has produced several different report products since its inception. Evidence reviews (57%, n = 142) consist of a systematic review and analysis of the primary literature. Evidence advisories (32%, n = 79) are summaries of evidence from secondary sources such as guidelines or systematic reviews. Evidence inventories (3%, n = 7) are literature searches that describe the quantity and focus of available evidence, without analysis or synthesis.[27]
The categories of technologies examined, including their definitions and examples, are provided in Table 1. Drugs (24%, n = 60) and devices/equipment/supplies (19%, n = 48) were most commonly examined. The proportion of reports examining technology types traditionally evaluated by HTA organizations significantly decreased when comparing the first 4 years of CEP activity to the second 4 years (62% vs 38%, P < 0.01), whereas reports examining less traditionally reviewed categories increased (38% vs 62%, P < 0.01). The most common clinical specialties represented by the CEP reports were nursing (11%, n = 28), general surgery (11%, n = 28), critical care (10%, n = 24), and general medicine (9%, n = 22) (see Supporting Information, Appendix 3, in the online version of this article). Clinical departments were the most common requestors (29%, n = 72) (Table 2). The proportion of requests from clinical departments significantly increased when comparing the first 4 years to the second 4 years (20% vs 36%, P < 0.01), with requests from purchasing committees significantly decreasing (25% vs 6%, P < 0.01). The overall report completion time was 70 days, and significantly decreased when comparing the first 4 years to the second 4 years (89 days vs 50 days, P < 0.01).
Category | Definition | Examples | Total | 2007–2010 | 2011–2014 | P Value |
---|---|---|---|---|---|---|
Total | | | 249 (100%) | 109 (100%) | 140 (100%) | |
Drug | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a pharmacologic agent | Celecoxib for pain in joint arthroplasty; colchicine for prevention of pericarditis and atrial fibrillation | 60 (24%) | 35 (32%) | 25 (18%) | 0.009 |
Device, equipment, and supplies | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an instrument, apparatus, implement, machine, contrivance, implant, in vitro reagent, or other similar or related article, including a component part, or accessory that is intended for use in the prevention, diagnosis, or treatment of disease and does not achieve its primary intended purposes through chemical action or metabolism[50] | Thermometers for pediatric use; femoral closure devices for cardiac catheterization | 48 (19%) | 25 (23%) | 23 (16%) | 0.19 |
Process of care | A report primarily examining a clinical pathway or a clinical practice guideline that significantly involves elements of prevention, diagnosis, and/or treatment or significantly incorporates 2 or more of the other technology categories | Preventing patient falls; prevention and management of delirium | 31 (12%) | 18 (17%) | 13 (9%) | 0.09 |
Test, scale, or risk factor | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a test intended to screen for, diagnose, classify, or monitor the progression of a disease | Computed tomography for acute chest pain; urine drug screening in chronic pain patients on opioid therapy | 31 (12%) | 8 (7%) | 23 (16%) | 0.03 |
Medical/surgical procedure | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a medical intervention that is not a drug, device, or test or of the application or removal of a device | Biliary drainage for chemotherapy patients; cognitive behavioral therapy for insomnia | 26 (10%) | 8 (7%) | 18 (13%) | 0.16 |
Policy or organizational/managerial system | A report primarily examining laws or regulations; the organization, financing, or delivery of care, including settings of care; or healthcare providers | Medical care costs and productivity changes associated with smoking; physician training and credentialing for robotic surgery in obstetrics and gynecology | 26 (10%) | 4 (4%) | 22 (16%) | 0.002 |
Support system | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of an intervention designed to provide a new or improved service to patients or healthcare providers that does not fall into 1 of the other categories | Reconciliation of data from differing electronic medical records; social media, text messaging, and postdischarge communication | 14 (6%) | 3 (3%) | 11 (8%) | 0.09 |
Biologic | A report primarily examining the efficacy/effectiveness, safety, appropriate use, or cost of a product manufactured in a living system | Recombinant factor VIIa for cardiovascular surgery; osteobiologics for orthopedic fusions | 13 (5%) | 8 (7%) | 5 (4%) | 0.19 |
Category | Total | 2007–2010 | 2011–2014 | P Value |
---|---|---|---|---|
Total | 249 (100%) | 109 (100%) | 140 (100%) | |
Clinical department | 72 (29%) | 22 (20%) | 50 (36%) | 0.007 |
CMO | 47 (19%) | 21 (19%) | 26 (19%) | 0.92 |
Purchasing committee | 35 (14%) | 27 (25%) | 8 (6%) | <0.001 |
Formulary committee | 22 (9%) | 12 (11%) | 10 (7%) | 0.54 |
Quality committee | 21 (8%) | 11 (10%) | 10 (7%) | 0.42 |
Administrative department | 19 (8%) | 5 (5%) | 14 (10%) | 0.11 |
Nursing | 14 (6%) | 4 (4%) | 10 (7%) | 0.23 |
Other* | 19 (8%) | 7 (6%) | 12 (9%) | 0.55 |
Thirty‐seven (15%) reports included meta‐analyses conducted by CEP staff. Seventy‐five reports (30%) contained an evaluation of the quality of the evidence base using GRADE analyses.[22] Of these reports, the highest GRADE of evidence available for any comparison of interest was moderate (35%, n = 26) or high (33%, n = 25) in most cases, followed by very low (19%, n = 14) and low (13%, n = 10).
Reports were disseminated in a variety of ways beyond direct dissemination and presentation to requestors and posting on the center website. Thirty reports (12%) informed CDS interventions, 24 (10%) resulted in peer‐reviewed publications, and 204 (82%) were posted to the CRD HTA database.
Evidence Synthesis Impact
A total of 139 reports were completed between July 2010 and June 2014 for 65 individual requestors. E‐mail invitations to participate in the survey were sent to the 64 requestors employed by the UPHS. The response rate was 72% (46/64). The proportions of physician requestors and traditional HTA topics evaluated were similar across respondents and nonrespondents (43% [20/46] vs 39% [7/18], P = 0.74; and 37% [17/46] vs 44% [8/18], P = 0.58, respectively). Aggregated survey responses are presented for items using a Likert scale in Figure 1, and for items using a yes/no or ordinal scale in Table 3.
Items | % of Respondents Responding Affirmatively (or, for ranking items, % Ranking as First Choice*) |
---|---|
Requestor activity | |
What factors prompted you to request a report from CEP? (Please select all that apply.) | |
My own time constraints | 28% (13/46) |
CEP's ability to identify and synthesize evidence | 89% (41/46) |
CEP's objectivity | 52% (24/46) |
Recommendation from colleague | 30% (14/46) |
Did you conduct any of your own literature searches before contacting CEP? | 67% (31/46) |
Did you obtain and read any of the articles cited in CEP's report? | 63% (29/46) |
Did you read the following sections of CEP's report? | |
Evidence summary (at beginning of report) | 100% (45/45) |
Introduction/background | 93% (42/45) |
Methods | 84% (38/45) |
Results | 98% (43/43) |
Conclusion | 100% (43/43) |
Report dissemination | |
Did you share CEP's report with anyone NOT involved in requesting the report or in making the final decision? | 67% (30/45) |
Did you share CEP's report with anyone outside of Penn? | 7% (3/45) |
Requestor preferences | |
Would it be helpful for CEP staff to call you after you receive any future CEP reports to answer any questions you might have? | 55% (24/44) |
Following any future reports you request from CEP, would you be willing to complete a brief questionnaire? | 100% (44/44) |
Please rank how you would prefer to receive reports from CEP in the future. | |
E‐mail containing the report as a PDF attachment | 77% (34/44) |
E‐mail containing a link to the report on CEP's website | 16% (7/44) |
In‐person presentation by the CEP analyst writing the report | 18% (8/44) |
In‐person presentation by the CEP director involved in the report | 16% (7/44) |
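The survey analysis used χ2 and Fisher exact tests, though the text does not state which test produced each P value. As a sketch, a standard-library two-sided Fisher exact test (summing hypergeometric probabilities no larger than the observed table's) can be applied to the physician-requestor comparison between respondents and nonrespondents:

```python
from math import comb

def fisher_exact_two_sided(a: int, b: int, c: int, d: int) -> float:
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]."""
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x: int) -> float:
        # Hypergeometric probability that the first cell equals x under the null.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = prob(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    # Sum the probabilities of all tables at least as extreme as the observed one.
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs * (1 + 1e-9))

# Physician requestors among survey respondents (20 of 46) vs.
# nonrespondents (7 of 18), as reported in the Results.
p = fisher_exact_two_sided(20, 26, 7, 11)
print(p > 0.05)  # True: consistent with the nonsignificant difference reported
```

The same function applies directly to the traditional-versus-nontraditional HTA topic comparison (17 of 46 vs 8 of 18).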
In general, respondents found reports easy to request, easy to use, timely, and relevant, resulting in high requestor satisfaction. In addition, 98% described the scope of content and level of detail as about right. Report impact was rated highly as well, with the evidence summary and conclusions rated as the most critical to decision making. A majority of respondents indicated that reports confirmed their tentative decision (77%, n = 34), whereas some changed their tentative decision (7%, n = 3), and others suggested the report had no effect on their tentative decision (16%, n = 7). Respondents indicated the amount of time that elapsed between receiving reports and making final decisions was 1 to 7 days (5%, n = 2), 8 to 30 days (40%, n = 17), 1 to 3 months (37%, n = 16), 4 to 6 months (9%, n = 4), or greater than 6 months (9%, n = 4). The most common reasons cited for requesting a report were the CEP's evidence synthesis skills and objectivity.
DISCUSSION
To our knowledge, this is the first comprehensive description and assessment of evidence synthesis activity by a hospital EPC in the United States. Our findings suggest that clinical and administrative leaders will request reports from a hospital EPC, and that hospital EPCs can promptly produce reports when requested. Moreover, these syntheses can address a wide range of clinical and policy topics, and can be disseminated through a variety of routes. Lastly, requestors are satisfied by these syntheses, and report that they inform decision making. These results suggest that EPCs may be an effective infrastructure paradigm for promoting evidence‐based decision making within healthcare provider organizations, and are consistent with previous analyses of hospital‐based EPCs.[21, 28, 29]
Over half of report requestors cited CEP's objectivity as a factor in their decision to request a report, underscoring the value of a neutral entity in an environment where clinical departments and hospital committees may have competing interests.[10] This asset was 1 of the primary drivers for establishing our hospital EPC. Concerns by clinical executives about the influence of industry and local politics on institutional decision making, and a desire to have clinical evidence more systematically and objectively integrated into decision making, fueled our center's funding.
The survey results also demonstrate that respondents were satisfied with the reports for many reasons, including readability, concision, timeliness, scope, and content, consistent with the evaluation of the French hospital‐based EPC CEDIT (French Committee for the Assessment and Dissemination of Technological Innovations).[29] Given the importance of readability, concision, and relevance that has been previously described,[16, 28, 30] nearly all CEP reports contain an evidence summary on the first page that highlights key findings in a concise, user‐friendly format.[31] The evidence summaries include bullet points that: (1) reference the most pertinent guideline recommendations along with their strength of recommendation and underlying quality of evidence; (2) organize and summarize study findings using the most critical clinical outcomes, including an assessment of the quality of the underlying evidence for each outcome; and (3) note important limitations of the findings.
Evidence syntheses must be timely to allow decision makers to act on the findings.[28, 32] The primary criticism of CEDIT was the lag between requests and report publication.[29] Rapid reviews, designed to inform urgent decisions, can overcome this challenge.[31, 33, 34] CEP reviews required approximately 2 months to complete on average, consistent with the most rapid timelines reported,[31, 33, 34] and much shorter than standard systematic review timelines, which can take up to 12 to 24 months.[33] Working with requestors to limit the scope of reviews to those issues most critical to a decision, using secondary resources when available, and hiring experienced research analysts help achieve these efficiencies.
The study by Bodeau‐Livinec also argues for the importance of report accessibility to ensure dissemination.[29] This is consistent with the CEP's approach, where all reports are posted on the UPHS internal website. Many also inform QI initiatives, as well as CDS interventions that address topics of general interest to acute care hospitals, such as venous thromboembolism (VTE) prophylaxis,[35] blood product transfusions,[36] sepsis care,[37, 38] and prevention of catheter‐associated urinary tract infections (CAUTI)[39] and hospital readmissions.[40] Most reports are also listed in an international database of rapid reviews,[23] and reports that address topics of general interest, have sufficient evidence to synthesize, and have no prior published systematic reviews are published in the peer‐reviewed literature.[41, 42]
The majority of reports completed by the CEP were evidence reviews, or systematic reviews of primary literature, suggesting that CEP reports often address questions previously unanswered by existing published systematic reviews; however, about a third of reports were evidence advisories, or summaries of evidence from preexisting secondary sources. The relative scarcity of high‐quality evidence bases in those reports where GRADE analyses were conducted might be expected, as requestors may be more likely to seek guidance when the evidence base on a topic is lacking. This was further supported by the small percentage (15%) of reports where adequate data of sufficient homogeneity existed to allow meta‐analyses. The small number of original meta‐analyses performed also reflects our reliance on secondary resources when available.
Only 7% of respondents reported that tentative decisions were changed based on their report. This is not surprising, as evidence reviews infrequently result in clear go or no go recommendations. More commonly, they address or inform complex clinical questions or pathways. In this context, the change/confirm/no effect framework may not completely reflect respondents' use of or benefit from reports. Thus, we included a diverse set of questions in our survey to best estimate the value of our reports. For example, when asked whether the report answered the question posed, informed their final decision, or was consistent with their final decision, 91%, 79%, and 71% agreed or strongly agreed, respectively. When asked whether they would request a report again if they had to do it all over, recommend CEP to their colleagues, and be likely to request reports in the future, at least 95% of survey respondents agreed or strongly agreed. In addition, no respondent indicated that their report was not timely enough to influence their decision. Moreover, only a minority of respondents expressed disappointment that the CEP's report did not provide actionable recommendations due to a lack of published evidence (9%, n = 4). Importantly, the large proportion of requestors indicating that reports confirmed their tentative decisions may be a reflection of hindsight bias.
The most apparent trend in the production of CEP reviews over time is the relative increase in requests by clinical departments, suggesting that the CEP is being increasingly consulted to help define best clinical practices. This is also supported by the relative increase in reports focused on policy or organizational/managerial systems. These findings suggest that hospital EPCs have value beyond the traditional realm of HTA.
This study has a number of limitations. First, not all of the eligible report requestors responded to our survey. Despite this, our response rate of 72% compares favorably with surveys published in medical journals.[43] In addition, nonresponse bias may be less important in physician surveys than surveys of the general population.[44] The similarity in requestor and report characteristics for respondents and nonrespondents supports this. Second, our survey of impact is self‐reported rather than an evaluation of actual decision making or patient outcomes. Thus, the survey relies on the accuracy of the responses. Third, recall bias must be considered, as some respondents were asked to evaluate reports that were greater than 1 year old. To reduce this bias, we asked respondents to consider the most recent report they requested, included that report as an attachment in the survey request, and only surveyed requestors from the most recent 4 of the CEP's 8 fiscal years. Fourth, social desirability bias could have also affected the survey responses, though it was likely minimized by the promise of confidentiality. Fifth, an examination of the impact of the CEP on costs was outside the scope of this evaluation; however, such information may be important to those assessing the sustainability or return on investment of such centers. 
Simple approaches we have previously used to approximate the value of our activities include: (1) estimating hospital cost savings resulting from decisions supported by our reports, such as the use of technologies like chlorhexidine for surgical site infections[45] or discontinuation of technologies like aprotinin for cardiac surgery[46]; and (2) estimating penalties avoided or rewards attained as a result of center‐led initiatives, such as those to increase VTE prophylaxis,[35] reduce CAUTI rates,[39] and reduce preventable mortality associated with sepsis.[37, 38] Similarly, given the focus of this study on the local evidence synthesis activities of our center, our examination did not include a detailed description of our CDS activities, or teaching activities, including our multidisciplinary workshops for physicians and nurses in evidence‐based QI[47] and our novel evidence‐based practice curriculum for medical students. Our study also did not include a description of our extramural activities, such as those supported by our contract with AHRQ as 1 of their 13 EPCs.[16, 17, 48, 49] A consideration of all of these activities enables a greater appreciation for the potential of such centers. Lastly, we examined a single EPC, which may not be representative of the diversity of hospitals and hospital staff across the United States. However, our EPC serves a diverse array of patient populations, clinical services, and service models throughout our multientity academic healthcare system, which may improve the generalizability of our experience to other settings.
As next steps, we recommend evaluation of other existing hospital EPCs nationally. Such studies could help hospitals and health systems ascertain which of their internal decisions might benefit from locally sourced rapid systematic reviews and determine whether an in‐house EPC could improve the value of care delivered.
In conclusion, our findings suggest that hospital EPCs within academic healthcare systems can efficiently synthesize and disseminate evidence for a variety of stakeholders. Moreover, these syntheses impact decision making in a variety of hospital contexts and clinical specialties. Hospitals and hospitalist leaders seeking to improve the implementation of evidence‐based practice at a systems level might consider establishing such infrastructure locally.
Acknowledgements
The authors thank Fran Barg, PhD (Department of Family Medicine and Community Health, University of Pennsylvania Perelman School of Medicine) and Joel Betesh, MD (University of Pennsylvania Health System) for their contributions to developing the survey. They did not receive any compensation for their contributions.
Disclosures: An earlier version of this work was presented as a poster at the 2014 AMA Research Symposium, November 7, 2014, Dallas, Texas. Mr. Jayakumar reports having received a University of Pennsylvania fellowship as a summer intern at the Center for Evidence‐based Practice. Dr. Umscheid cocreated and directs a hospital evidence‐based practice center, is the Senior Associate Director of an Agency for Healthcare Research and Quality Evidence‐Based Practice Center, and is a past member of the Medicare Evidence Development and Coverage Advisory Committee, which uses evidence reports developed by the Evidence‐based Practice Centers of the Agency for Healthcare Research and Quality. Dr. Umscheid's contribution was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. None of the funders had a role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication. Dr. Lavenberg, Dr. Mitchell, and Mr. Leas are employed as research analysts by a hospital evidence‐based practice center. Dr. Doshi is supported in part by a hospital evidence‐based practice center and is an Associate Director of an Agency for Healthcare Research and Quality Evidence‐based Practice Center. Dr. Goldmann is emeritus faculty at Penn, is supported in part by a hospital evidence‐based practice center, and is the Vice President and Chief Quality Assurance Officer in Clinical Solutions, a division of Elsevier, Inc., a global publishing company, and director of the division's Evidence‐based Medicine Center. Dr. 
Williams cocreated and codirects a hospital evidence‐based practice center. Dr. Brennan has oversight for and helped create a hospital evidence‐based practice center.
© 2015 Society of Hospital Medicine
EWRS for Sepsis
There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]
Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.
Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.
METHODS
Setting and Data Sources
The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.
Development of the Intervention
The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.
To identify at‐risk patients, we used established criteria for severe sepsis, including the systemic inflammatory response syndrome (SIRS) criteria (temperature <36°C or >38°C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4,000/mm3 or >12,000/mm3 or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction based on a systolic blood pressure <100 mm Hg, and hypoperfusion based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our lab]).[11, 12]
To establish a threshold for triggering the system, a derivation cohort was used and defined as patients admitted between October 1, 2011, and October 31, 2011, to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criterion, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could achieve at any single time was 0 and 6, respectively. We then categorized patients by the maximum number of criteria achieved and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.
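The scoring rule above can be sketched in code. This is an illustrative reconstruction only, not the production EHR logic; the observation-record layout and field names (`temp`, `hr`, `wbc`, etc.) are hypothetical:

```python
from datetime import datetime, timedelta

# Look-back windows from the Methods: 24 h for vital signs, 48 h for labs.
VITAL_LOOKBACK = timedelta(hours=24)
LAB_LOOKBACK = timedelta(hours=48)
LABS = {"wbc", "bands", "lactate"}

def latest(obs, name, now):
    """Most recent value for one criterion within its look-back window.

    `obs` is a list of (name, value, timestamp) tuples (hypothetical layout).
    """
    window = LAB_LOOKBACK if name in LABS else VITAL_LOOKBACK
    vals = [(t, v) for (n, v, t) in obs if n == name and now - t <= window]
    return max(vals)[1] if vals else None  # max by timestamp

def ewrs_score(obs, now):
    """Sum of severe-sepsis screening criteria met at a point in time (0-6)."""
    v = {name: latest(obs, name, now)
         for name in ("temp", "hr", "rr", "paco2", "wbc", "bands", "sbp", "lactate")}
    criteria = [
        v["temp"] is not None and (v["temp"] < 36 or v["temp"] > 38),     # temperature
        v["hr"] is not None and v["hr"] > 90,                             # heart rate
        (v["rr"] is not None and v["rr"] > 20)
            or (v["paco2"] is not None and v["paco2"] < 32),              # resp rate / PaCO2
        (v["wbc"] is not None and (v["wbc"] < 4000 or v["wbc"] > 12000))
            or (v["bands"] is not None and v["bands"] > 10),              # WBC / bands
        v["sbp"] is not None and v["sbp"] < 100,                          # cardiovascular
        v["lactate"] is not None and v["lactate"] > 2.2,                  # hypoperfusion
    ]
    return sum(criteria)

def triggers(obs, now, threshold=4):
    """Positive screen at the threshold chosen in the derivation cohort (score of 4 or more)."""
    return ewrs_score(obs, now) >= threshold
```

For example, a patient with a temperature of 38.5°C, heart rate 110, respiratory rate 24, and a lactate of 3.0 mmol/L drawn 30 hours earlier meets four criteria and would screen positive.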
The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert, and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR asking whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.
Implementation of the EWRS
All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).
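The notification and censoring behavior described above can be sketched as follows. The paging and EHR pop-up calls are hypothetical stand-ins for the real interfaces, and the function names are illustrative:

```python
# Sketch of per-stay alert routing: the first positive screen in a hospital
# stay pages the covering provider and rapid response coordinator and posts
# an EHR pop-up for the bedside nurse; repeat triggers in the same stay are
# censored, as in the Methods.

_alerted_stays = set()  # stays that have already fired an alert

def handle_trigger(stay_id, covering_provider, coordinator, send_page, post_ehr_popup):
    """Route a positive EWRS screen; return False for censored repeats."""
    if stay_id in _alerted_stays:
        return False                                      # same-stay repeat: censored
    _alerted_stays.add(stay_id)
    send_page(covering_provider, "EWRS sepsis alert")     # text page to provider
    send_page(coordinator, "EWRS sepsis alert")           # and to coordinator
    post_ehr_popup(stay_id)                               # nurse: EHR pop-up, no pager
    return True
```

The single-alert censoring is what keeps the alert volume bounded at one notification per stay.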
The Preimplementation (Silent) Period and EWRS Validation
The EWRS was initially activated for a preimplementation silent period (June 6, 2012–September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.
The Postimplementation (Live) Period and Impact Analysis
The EWRS went live September 12, 2012, upon which new admissions triggering the alert would result in a notification and response. Unadjusted analyses using the χ2 test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012–September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013–September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The pre‐ and post‐sepsis mortality index was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.
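As a sketch of this kind of unadjusted comparison (not the authors' SAS code), the χ2 test on the alert counts later reported in Table 1 (595/15,567 preimplementation vs 545/15,526 postimplementation) can be reproduced with the standard Pearson formula; for 1 degree of freedom the tail probability reduces to P(χ2 > x) = erfc(√(x/2)):

```python
import math

def chi2_2x2(a, b, c, d, correction=False):
    """Pearson chi-squared test for a 2x2 table [[a, b], [c, d]]; returns (chi2, p)."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    chi2 = 0.0
    for obs, (r, col) in zip((a, b, c, d),
                             ((row1, col1), (row1, col2), (row2, col1), (row2, col2))):
        exp = r * col / n                                 # expected cell count
        diff = abs(obs - exp) - (0.5 if correction else 0.0)  # optional Yates correction
        chi2 += diff * diff / exp
    p = math.erfc(math.sqrt(chi2 / 2))                    # 1-df upper tail probability
    return chi2, p

# Alerted vs non-alerted encounters in each period
chi2, p = chi2_2x2(595, 15567 - 595, 545, 15526 - 545)
# p comes out near 0.14, matching the P value shown for alerts in Table 1
```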
Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.
The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).
RESULTS
In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest is described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system is described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive rate, sensitivity, specificity, positive and negative predictive values, and likelihood ratios using this threshold and our composite outcome in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, and in our validation cohort were 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively.
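The likelihood ratios reported here follow directly from the sensitivity and specificity; the snippet below (illustrative only) reproduces them:

```python
def likelihood_ratios(sens, spec):
    """Positive and negative likelihood ratios implied by sensitivity and specificity."""
    lr_pos = sens / (1 - spec)   # how much a positive screen raises the odds
    lr_neg = (1 - sens) / spec   # how much a negative screen lowers the odds
    return lr_pos, lr_neg

# Derivation cohort: sensitivity 16%, specificity 97% -> LR+ ~5.3, LR- ~0.9
derivation = likelihood_ratios(0.16, 0.97)
# Validation cohort: sensitivity 17%, specificity 97% -> LR+ ~5.7, LR- ~0.9
validation = likelihood_ratios(0.17, 0.97)
```

The high specificity and low sensitivity explain the pattern in the text: a positive trigger substantially raises the probability of the composite outcome, while a negative screen only modestly lowers it.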
In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services were similar (Table 1).
Hospitals A–C

| | Preimplementation | Postimplementation | P Value |
|---|---|---|---|
| No. of encounters | 15,567 | 15,526 | |
| No. of alerts | 595 (4%) | 545 (4%) | 0.14 |
| Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04 |
| Female | 298 (50%) | 274 (50%) | 0.95 |
| Race | | | |
| White | 343 (58%) | 312 (57%) | 0.14 |
| Black | 207 (35%) | 171 (31%) | |
| Other | 23 (4%) | 31 (6%) | |
| Unknown | 22 (4%) | 31 (6%) | |
| Admission type | | | |
| Elective | 201 (34%) | 167 (31%) | 0.40 |
| ED | 300 (50%) | 278 (51%) | |
| Transfer | 94 (16%) | 99 (18%) | |
| BMI, kg/m2, median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24 |
| Previous ICU admission | 137 (23%) | 127 (23%) | 0.91 |
| RRT before alert | 27 (5%) | 20 (4%) | 0.46 |
| Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04 |
| Admitting service | | | |
| Medicine | 398 (67%) | 364 (67%) | 0.18 |
| Surgery | 173 (29%) | 169 (31%) | |
| Other | 24 (4%) | 12 (2%) | |
| Service where alert fired | | | |
| Medicine | 391 (66%) | 365 (67%) | 0.18 |
| Surgery | 175 (29%) | 164 (30%) | |
| Other | 29 (5%) | 15 (3%) | |
In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)
In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).
Hospitals A–C

| | Preimplementation | Postimplementation | P Value |
|---|---|---|---|
| No. of alerts | 595 | 545 | |
| 500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01 |
| IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01 |
| IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01 |
| Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01 |
| Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01 |
| Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28 |
| CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65 |
| Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35 |
| Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92 |
| RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01 |
| Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30 |
| AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70 |
| Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58 |
| CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02 |
| CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31 |
| Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02 |
| | Unadjusted Odds Ratio, All Alerted Patients | Adjusted Odds Ratio, All Alerted Patients | Unadjusted Odds Ratio, Discharged With Sepsis Code* | Adjusted Odds Ratio, Discharged With Sepsis Code* |
|---|---|---|---|---|
| 500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47) |
| IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09) |
| IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58) |
| Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34) |
| Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20) |
| Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92) |
| BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50) |
| Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25) |
| Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10) |
| Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10) |
| AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20) |
| Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00) |
| CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43) |
| CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79) |
| Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41) |
Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).
Hospitals AC | ||||
---|---|---|---|---|
Preimplementation | Postimplementation | P Value | ||
| ||||
No. of alerts | 595 | 545 | ||
Hospital LOS, d, median (IQR) | 10.1 (5.119.1) | 9.4 (5.218.9) | 0.92 | |
ICU LOS after alert, d, median (IQR) | 3.4 (1.77.4) | 3.6 (1.96.8) | 0.72 | |
ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06 | |
ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20 | |
ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93 | |
Time to first ICU after alert, h, median (IQR) | 21.3 (4.463.9) | 11.0 (2.358.7) | 0.22 | |
RRT 6 h after alert | 13 (2%) | 9 (2%) | 0.51 | |
Mortality of all patients | 52 (9%) | 41 (8%) | 0.45 | |
Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19 | |
Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47 | |
Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22 | |
Discharge to home | 347 (58%) | 351 (64%) | 0.04 | |
Disposition location | ||||
Home | 347 (58%) | 351 (64%) | 0.25 | |
SNF | 89 (15%) | 65 (12%) | ||
Rehab | 24 (4%) | 20 (4%) | ||
LTC | 8 (1%) | 9 (2%) | ||
Other hospital | 16 (3%) | 6 (1%) | ||
Expired | 52 (9%) | 41 (8%) | ||
Hospice IP | 42 (7%) | 31 (6%) | ||
Hospice other | 11 (2%) | 14 (3%) | ||
Other location | 6 (1%) | 8 (1%) | ||
Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02 | |
Sepsis O/E | 1.37 | 1.06 | 0.18 |
All Alerted Patients | Discharged With Sepsis Code* | |||
---|---|---|---|---|
Unadjusted Estimate | Adjusted Estimate | Unadjusted Estimate | Adjusted Estimate | |
| ||||
Hospital LOS, d | 1.01 (0.921.11) | 1.02 (0.931.12) | 0.99 (0.851.15) | 1.00 (0.871.16) |
ICU transfer | 1.49 (0.972.29) | 1.65 (1.072.55) | 1.61 (0.922.84) | 1.82 (1.023.25) |
Time to first ICU transfer after alert, h‖ | 1.17 (0.871.57) | 1.23 (0.921.66) | 1.21 (0.831.75) | 1.31 (0.901.90) |
ICU LOS, d | 1.01 (0.771.31) | 0.99 (0.761.28) | 0.87 (0.621.21) | 0.88 (0.641.21) |
RRT | 0.75 (0.321.77) | 0.84 (0.352.02) | 0.81 (0.292.27) | 0.82 (0.272.43) |
Mortality | 0.85 (0.551.30) | 0.98 (0.631.53) | 0.85 (0.551.30) | 0.98 (0.631.53) |
Mortality within 30 days of alert | 0.73 (0.461.16) | 0.87 (0.541.40) | 0.59 (0.341.04) | 0.69 (0.381.26) |
Mortality or inpatient hospice transfer | 0.82 (0.471.41) | 0.78 (0.441.41) | 0.67 (0.361.25) | 0.65 (0.331.29) |
Discharge to home | 1.29 (1.021.64) | 1.18 (0.911.52) | 1.36 (0.951.95) | 1.22 (0.811.84) |
Sepsis discharge diagnosis | 1.32 (1.041.67) | 1.43 (1.101.85) | NA | NA |
In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.
The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).
DISCUSSION
This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to trigger electronically a timely, robust, multidisciplinary bedside clinical evaluation. Compared to the control (silent) period, the EWRS resulted in a marked increase in early sepsis care, transfer to the ICU, and sepsis documentation, and an indication of a decreased sepsis mortality index and mortality, and increased discharge to home, although none of these latter 3 findings reached statistical significance.
Our study is unique in that it was implemented across a multihospital health system, which has identical EHRs, but diverse cultures, populations, staffing, and practice models. In addition, our study includes a preimplementation population similar to the postimplementation population (in terms of setting, month of admission, and adjustment for potential confounders).
Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]
Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.
CONCLUSION
By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.
Acknowledgements
The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer, Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing and operationalizing the EWRS examined in this study; their assistance in data acquisition; and for advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 1620, 2013, Washington, DC.
Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.
REFERENCES

1. Benchmarking the incidence and mortality of severe sepsis in the United States. Crit Care Med. 2013;41(5):1167–1174.
2. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
3. The Surviving Sepsis Campaign: results of an international guideline‐based performance improvement program targeting severe sepsis. Crit Care Med. 2010;38(2):367–374.
4. Early goal‐directed therapy in severe sepsis and septic shock revisited: concepts, controversies, and contemporary findings. Chest. 2006;130(5):1579–1595.
5. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368–1377.
6. Severe sepsis cohorts derived from claims‐based strategies appear to be biased toward a more severely ill patient population. Crit Care Med. 2013;41(4):945–953.
7. A trial of a real‐time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013;8(5):236–242.
8. Bedside electronic capture of clinical observations and automated clinical alerts to improve compliance with an Early Warning Score protocol. Crit Care Resusc. 2011;13(2):83–88.
9. Prospective trial of real‐time electronic surveillance to expedite early care of severe sepsis. Ann Emerg Med. 2011;57(5):500–504.
10. Implementation of a real‐time computerized sepsis alert in nonintensive care unit patients. Crit Care Med. 2011;39(3):469–473.
11. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644–1655.
12. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Crit Care Med. 2003;31(4):1250–1256.
13. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588–1597.
14. Delayed admission to intensive care unit for critically surgical patients is associated with increased mortality. Am J Surg. 2014;208:268–274.
15. Impact of delayed admission to intensive care units on mortality of critically ill patients: a cohort study. Crit Care. 2011;15(1):R28.
There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]
Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found that only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.
Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.
METHODS
Setting and Data Sources
The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.
Development of the Intervention
The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.
To identify at‐risk patients, we used established criteria for severe sepsis, including the systemic inflammatory response syndrome criteria (temperature <36°C or >38°C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4,000 or >12,000 or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction based on a systolic blood pressure <100 mm Hg, and hypoperfusion based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our lab]).[11, 12]
To establish a threshold for triggering the system, a derivation cohort was used, defined as patients admitted between October 1, 2011, and October 31, 2011, to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criterion, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could achieve at any single time was 0 and 6, respectively. We then categorized patients by the maximum number of criteria achieved and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.
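The scoring logic described above can be sketched in code. This is a minimal illustration, not the authors' implementation (which is not published): the field names and the list-of-(timestamp, value) representation for each parameter are assumptions made here for clarity.

```python
from datetime import timedelta

VS_LOOKBACK = timedelta(hours=24)   # vital signs look-back window
LAB_LOOKBACK = timedelta(hours=48)  # laboratory look-back window

def latest(observations, now, lookback):
    """Most recent value within the look-back window, else None.
    `observations` is a list of (timestamp, value) pairs (hypothetical format)."""
    recent = [(t, v) for t, v in observations if now - lookback <= t <= now]
    return max(recent)[1] if recent else None

def ewrs_score(patient, now):
    """Sum of severe sepsis criteria met at `now` (range 0-6);
    in the study, an alert fired at a score of 4 or more."""
    temp = latest(patient["temp_c"], now, VS_LOOKBACK)
    hr = latest(patient["heart_rate"], now, VS_LOOKBACK)
    rr = latest(patient["resp_rate"], now, VS_LOOKBACK)
    paco2 = latest(patient["paco2"], now, LAB_LOOKBACK)
    wbc = latest(patient["wbc"], now, LAB_LOOKBACK)
    bands = latest(patient["bands_pct"], now, LAB_LOOKBACK)
    sbp = latest(patient["sbp"], now, VS_LOOKBACK)
    lactate = latest(patient["lactate"], now, LAB_LOOKBACK)

    score = 0
    if temp is not None and (temp < 36 or temp > 38):           # SIRS: temperature
        score += 1
    if hr is not None and hr > 90:                              # SIRS: heart rate
        score += 1
    if (rr is not None and rr > 20) or \
       (paco2 is not None and paco2 < 32):                      # SIRS: respiratory
        score += 1
    if (wbc is not None and (wbc < 4000 or wbc > 12000)) or \
       (bands is not None and bands > 10):                      # SIRS: white count
        score += 1
    if sbp is not None and sbp < 100:                           # organ dysfunction
        score += 1
    if lactate is not None and lactate > 2.2:                   # hypoperfusion
        score += 1
    return score
```

Note that a parameter with no observation inside its look-back window simply contributes nothing to the score, mirroring the "most recent value" rule described in the text.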
The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR documenting whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.
Implementation of the EWRS
All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).
The Preimplementation (Silent) Period and EWRS Validation
The EWRS was initially activated for a preimplementation silent period (June 6, 2012–September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.
The Postimplementation (Live) Period and Impact Analysis
The EWRS went live September 12, 2012, upon which new admissions triggering the alert would result in a notification and response. Unadjusted analyses using the χ² test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012–September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013–September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The pre‐ and postimplementation sepsis mortality index was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.
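For the dichotomous comparisons above, the χ² statistic for a 2×2 table can be computed directly. The sketch below is illustrative only (the authors used SAS, and this version omits the continuity correction):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square test (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]]; returns (statistic, two-sided P value)."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )
    # With 1 df, the chi-square survival function equals erfc(sqrt(stat / 2)).
    return stat, math.erfc(math.sqrt(stat / 2))
```

Applied, for example, to the intravenous bolus row of Table 2 (92/595 preimplementation vs. 142/545 postimplementation, i.e., `chi2_2x2(92, 503, 142, 403)`), it reproduces the P < .01 reported there.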
Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.
The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).
RESULTS
In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest is described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system is described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive, sensitivity, specificity, and positive and negative predictive values and likelihood ratios using this threshold and our composite outcome in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, and in our validation cohort were 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively.
In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services were similar (Table 1).
Hospitals A–C | |||
---|---|---|---|
Preimplementation | Postimplementation | P Value | |
| |||
No. of encounters | 15,567 | 15,526 | |
No. of alerts | 595 (4%) | 545 (4%) | 0.14 |
Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04 |
Female | 298 (50%) | 274 (50%) | 0.95 |
Race | |||
White | 343 (58%) | 312 (57%) | 0.14 |
Black | 207 (35%) | 171 (31%) | |
Other | 23 (4%) | 31 (6%) | |
Unknown | 22 (4%) | 31 (6%) | |
Admission type | |||
Elective | 201 (34%) | 167 (31%) | 0.40 |
ED | 300 (50%) | 278 (51%) | |
Transfer | 94 (16%) | 99 (18%) | |
BMI, kg/m², median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24 |
Previous ICU admission | 137 (23%) | 127 (23%) | 0.91 |
RRT before alert | 27 (5%) | 20 (4%) | 0.46 |
Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04 |
Admitting service | |||
Medicine | 398 (67%) | 364 (67%) | 0.18 |
Surgery | 173 (29%) | 169 (31%) | |
Other | 24 (4%) | 12 (2%) | |
Service where alert fired | |||
Medicine | 391 (66%) | 365 (67%) | 0.18 |
Surgery | 175 (29%) | 164 (30%) | |
Other | 29 (5%) | 15 (3%) |
In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)
In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).
Hospitals A–C | |||
---|---|---|---|
Preimplementation | Postimplementation | P Value | |
| |||
No. of alerts | 595 | 545 | |
500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01 |
IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01 |
IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01 |
Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01 |
Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01 |
Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28 |
CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65 |
Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35 |
Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92 |
RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01 |
Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30 |
AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70 |
Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58 |
CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02 |
CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31 |
Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02 |
All Alerted Patients | Discharged With Sepsis Code* | |||
---|---|---|---|---|
Unadjusted Odds Ratio | Adjusted Odds Ratio | Unadjusted Odds Ratio | Adjusted Odds Ratio | |
| ||||
500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47) |
IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09) |
IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58) |
Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34) |
Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20) |
Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92) |
BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50) |
Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25) |
Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10) |
Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10) |
AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20) |
Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00) |
CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43) |
CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79) |
Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41) |
Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).
Hospitals A–C | ||||
---|---|---|---|---|
Preimplementation | Postimplementation | P Value | ||
| ||||
No. of alerts | 595 | 545 | ||
Hospital LOS, d, median (IQR) | 10.1 (5.1–19.1) | 9.4 (5.2–18.9) | 0.92 | |
ICU LOS after alert, d, median (IQR) | 3.4 (1.7–7.4) | 3.6 (1.9–6.8) | 0.72 | |
ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06 | |
ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20 | |
ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93 | |
Time to first ICU after alert, h, median (IQR) | 21.3 (4.4–63.9) | 11.0 (2.3–58.7) | 0.22 | |
RRT <6 h after alert | 13 (2%) | 9 (2%) | 0.51 | |
Mortality of all patients | 52 (9%) | 41 (8%) | 0.45 | |
Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19 | |
Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47 | |
Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22 | |
Discharge to home | 347 (58%) | 351 (64%) | 0.04 | |
Disposition location | ||||
Home | 347 (58%) | 351 (64%) | 0.25 | |
SNF | 89 (15%) | 65 (12%) | ||
Rehab | 24 (4%) | 20 (4%) | ||
LTC | 8 (1%) | 9 (2%) | ||
Other hospital | 16 (3%) | 6 (1%) | ||
Expired | 52 (9%) | 41 (8%) | ||
Hospice IP | 42 (7%) | 31 (6%) | ||
Hospice other | 11 (2%) | 14 (3%) | ||
Other location | 6 (1%) | 8 (1%) | ||
Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02 | |
Sepsis O/E | 1.37 | 1.06 | 0.18 |
All Alerted Patients | Discharged With Sepsis Code* | |||
---|---|---|---|---|
Unadjusted Estimate | Adjusted Estimate | Unadjusted Estimate | Adjusted Estimate | |
| ||||
Hospital LOS, d | 1.01 (0.92–1.11) | 1.02 (0.93–1.12) | 0.99 (0.85–1.15) | 1.00 (0.87–1.16) |
ICU transfer | 1.49 (0.97–2.29) | 1.65 (1.07–2.55) | 1.61 (0.92–2.84) | 1.82 (1.02–3.25) |
Time to first ICU transfer after alert, h‖ | 1.17 (0.87–1.57) | 1.23 (0.92–1.66) | 1.21 (0.83–1.75) | 1.31 (0.90–1.90) |
ICU LOS, d | 1.01 (0.77–1.31) | 0.99 (0.76–1.28) | 0.87 (0.62–1.21) | 0.88 (0.64–1.21) |
RRT | 0.75 (0.32–1.77) | 0.84 (0.35–2.02) | 0.81 (0.29–2.27) | 0.82 (0.27–2.43) |
Mortality | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) |
Mortality within 30 days of alert | 0.73 (0.46–1.16) | 0.87 (0.54–1.40) | 0.59 (0.34–1.04) | 0.69 (0.38–1.26) |
Mortality or inpatient hospice transfer | 0.82 (0.47–1.41) | 0.78 (0.44–1.41) | 0.67 (0.36–1.25) | 0.65 (0.33–1.29) |
Discharge to home | 1.29 (1.02–1.64) | 1.18 (0.91–1.52) | 1.36 (0.95–1.95) | 1.22 (0.81–1.84) |
Sepsis discharge diagnosis | 1.32 (1.04–1.67) | 1.43 (1.10–1.85) | NA | NA |
In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.
The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).
DISCUSSION
This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to trigger electronically a timely, robust, multidisciplinary bedside clinical evaluation. Compared with the control (silent) period, the EWRS was associated with a marked increase in early sepsis care, transfer to the ICU, and sepsis documentation, as well as trends toward a lower sepsis mortality index, lower mortality, and more frequent discharge to home, although none of these latter 3 findings reached statistical significance.
Our study is unique in that the EWRS was implemented across a multihospital health system with an identical EHR but diverse cultures, populations, staffing, and practice models. In addition, our study compared a preimplementation population similar to the postimplementation population (in terms of setting and month of admission) and adjusted for potential confounders.
Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]
Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.
CONCLUSION
By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.
Acknowledgements
The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer, Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing and operationalizing the EWRS examined in this study; their assistance in data acquisition; and for advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 1620, 2013, Washington, DC.
Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.
There are as many as 3 million cases of severe sepsis and 750,000 resulting deaths in the United States annually.[1] Interventions such as goal‐directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. Thus, timely recognition is critical.[2, 3, 4, 5]
Despite this, early recognition in hospitalized patients can be challenging. Using chart documentation as a surrogate for provider recognition, we recently found only 20% of patients with severe sepsis admitted to our hospital from the emergency department were recognized.[6] Given these challenges, there has been increasing interest in developing automated systems to improve the timeliness of sepsis detection.[7, 8, 9, 10] Systems described in the literature have varied considerably in triggering criteria, effector responses, and study settings. Of those examining the impact of automated surveillance and response in the nonintensive care unit (ICU) acute inpatient setting, results suggest an increase in the timeliness of diagnostic and therapeutic interventions,[10] but less impact on patient outcomes.[7] Whether these results reflect inadequacies in the criteria used to identify patients (parameters or their thresholds) or an ineffective response to the alert (magnitude or timeliness) is unclear.
Given the consequences of severe sepsis in hospitalized patients, as well as the introduction of vital sign (VS) and provider data in our electronic health record (EHR), we sought to develop and implement an electronic sepsis detection and response system to improve patient outcomes. This study describes the development, validation, and impact of that system.
METHODS
Setting and Data Sources
The University of Pennsylvania Health System (UPHS) includes 3 hospitals with a capacity of over 1500 beds and 70,000 annual admissions. All hospitals use the EHR Sunrise Clinical Manager version 5.5 (Allscripts, Chicago, IL). The study period began in October 2011, when VS and provider contact information became available electronically. Data were retrieved from the Penn Data Store, which includes professionally coded data as well as clinical data from our EHRs. The study received expedited approval and a Health Insurance Portability and Accountability Act waiver from our institutional review board.
Development of the Intervention
The early warning and response system (EWRS) for sepsis was designed to monitor laboratory values and VSs in real time in our inpatient EHR to detect patients at risk for clinical deterioration and development of severe sepsis. The development team was multidisciplinary, including informaticians, physicians, nurses, and data analysts from all 3 hospitals.
To identify at‐risk patients, we used established criteria for severe sepsis, including the systemic inflammatory response syndrome criteria (temperature <36°C or >38°C, heart rate >90 bpm, respiratory rate >20 breaths/min or PaCO2 <32 mm Hg, and total white blood cell count <4,000/mm³ or >12,000/mm³ or >10% bands) coupled with criteria suggesting organ dysfunction (cardiovascular dysfunction based on a systolic blood pressure <100 mm Hg, and hypoperfusion based on a serum lactate measure >2.2 mmol/L [the threshold for an abnormal result in our lab]).[11, 12]
To establish a threshold for triggering the system, we used a derivation cohort defined as patients admitted between October 1, 2011 and October 31, 2011 to any inpatient acute care service. Those <18 years old or admitted to hospice, research, and obstetrics services were excluded. We calculated a risk score for each patient, defined as the sum of criteria met at any single time during their visit. At any given point in time, we used the most recent value for each criterion, with a look‐back period of 24 hours for VSs and 48 hours for labs. The minimum and maximum number of criteria that a patient could meet at any single time were 0 and 6, respectively. We then categorized patients by the maximum number of criteria met and estimated the proportion of patients in each category who: (1) were transferred to an ICU during their hospital visit; (2) had a rapid response team (RRT) called during their visit; (3) died during their visit; (4) had a composite of 1, 2, or 3; or (5) were coded as sepsis at discharge (see Supporting Information in the online version of this article for further information). Once a threshold was chosen, we examined the time from first trigger to: (1) any ICU transfer; (2) any RRT; (3) death; or (4) a composite of 1, 2, or 3. We then estimated the screen positive rate, test characteristics, predictive values, and likelihood ratios of the specified threshold.
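The scoring logic described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the criterion names and data layout are assumptions.

```python
from datetime import datetime, timedelta

# Look-back windows described in the text: 24 h for vital signs, 48 h for labs.
VS_LOOKBACK = timedelta(hours=24)
LAB_LOOKBACK = timedelta(hours=48)

# Criterion names here are illustrative; vital-sign criteria get the shorter window.
VITAL_CRITERIA = {"temperature", "heart_rate", "respiratory_rate"}

def ewrs_score(observations, now):
    """Sum the number of severe-sepsis criteria met at time `now`.

    `observations` maps a criterion name to a list of (timestamp, met) pairs,
    where `met` is True if that measurement satisfied the criterion.
    Only the most recent value within the look-back window counts.
    """
    score = 0
    for name, readings in observations.items():
        window = VS_LOOKBACK if name in VITAL_CRITERIA else LAB_LOOKBACK
        recent = [(t, met) for t, met in readings if t <= now and now - t <= window]
        if recent:
            # The most recent in-window value for each criterion decides whether it counts.
            _, met = max(recent, key=lambda pair: pair[0])
            score += int(met)
    return score  # ranges 0 to 6; the study's alert fired at a threshold of >=4
```

A stale vital sign (older than 24 hours) contributes nothing, while a lab value remains eligible for 48 hours, mirroring the look-back rules in the text.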
The efferent response arm of the EWRS included the covering provider (usually an intern), the bedside nurse, and rapid response coordinators, who were engaged from the outset in developing the operational response to the alert. This team was required to perform a bedside evaluation within 30 minutes of the alert, and enact changes in management if warranted. The rapid response coordinator was required to complete a 3‐question follow‐up assessment in the EHR asking whether all 3 team members gathered at the bedside, the most likely condition triggering the EWRS, and whether management changed (see Supporting Figure 1 in the online version of this article). To minimize the number of triggers, once a patient triggered an alert, any additional alert triggers during the same hospital stay were censored.
Implementation of the EWRS
All inpatients on noncritical care services were screened continuously. Hospice, research, and obstetrics services were excluded. If a patient met the EWRS criteria threshold, an alert was sent to the covering provider and rapid response coordinator by text page. The bedside nurses, who do not carry text‐enabled devices, were alerted by pop‐up notification in the EHR (see Supporting Figure 2 in the online version of this article). The notification was linked to a task that required nurses to verify in the EHR the VSs triggering the EWRS, and adverse trends in VSs or labs (see Supporting Figure 3 in the online version of this article).
The Preimplementation (Silent) Period and EWRS Validation
The EWRS was initially activated for a preimplementation silent period (June 6, 2012–September 4, 2012) to both validate the tool and provide the baseline data to which the postimplementation period was compared. During this time, new admissions could trigger the alert, but notifications were not sent. We used admissions from the first 30 days of the preimplementation period to estimate the tool's screen positive rate, test characteristics, predictive values, and likelihood ratios.
The Postimplementation (Live) Period and Impact Analysis
The EWRS went live September 12, 2012, upon which new admissions triggering the alert would result in a notification and response. Unadjusted analyses using the χ2 test for dichotomous variables and the Wilcoxon rank sum test for continuous variables compared demographics and the proportion of clinical process and outcome measures for those admitted during the silent period (June 6, 2012–September 4, 2012) and a similar timeframe 1 year later when the intervention was live (June 6, 2013–September 4, 2013). To be included in either of the time periods, patients had to trigger the alert during the period and be discharged within 45 days of the end of the period. The pre‐ and post‐sepsis mortality index was also examined (see the Supporting Information in the online version of this article for a detailed description of study measures). Multivariable regression models estimated the impact of the EWRS on the process and outcome measures, adjusted for differences between the patients in the preimplementation and postimplementation periods with respect to age, gender, Charlson index on admission, admitting service, hospital, and admission month. Logistic regression models examined dichotomous variables. Continuous variables were log transformed and examined using linear regression models. Cox regression models explored time to ICU transfer from trigger. Among patients with sepsis, a logistic regression model was used to compare the odds of mortality between the silent and live periods, adjusted for expected mortality, both within each hospital and across all hospitals.
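As an illustration of the unadjusted comparisons of dichotomous variables, the Pearson χ2 statistic for a 2×2 table can be computed in closed form. This is a sketch without continuity correction; the study's actual analyses were performed in SAS.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]],
    e.g. alerted vs. not alerted in the pre- and post-implementation periods.

    Uses the closed form n*(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den
```

Applied to the reported alert counts (595 of 15,567 admissions preimplementation vs. 545 of 15,526 postimplementation), this yields a statistic near 2.1, consistent with the nonsignificant P value of 0.14 reported in Table 1.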
Because there is a risk of providers becoming overly reliant on automated systems and overlooking those not triggering the system, we also examined the discharge disposition and mortality outcomes of those in both study periods not identified by the EWRS.
The primary analysis examined the impact of the EWRS across UPHS; we also examined the EWRS impact at each of our hospitals. Last, we performed subgroup analyses examining the EWRS impact in those assigned an International Classification of Diseases, 9th Revision code for sepsis at discharge or death. All analyses were performed using SAS version 9.3 (SAS Institute Inc., Cary, NC).
RESULTS
In the derivation cohort, 4575 patients met the inclusion criteria. The proportion of those in each category (0–6) achieving our outcomes of interest are described in Supporting Table 1 in the online version of this article. We defined a positive trigger as a score ≥4, as this threshold identified a limited number of patients (3.9% [180/4575]) with a high proportion experiencing our composite outcome (25.6% [46/180]). The proportion of patients with an EWRS score ≥4 and their time to event by hospital and health system is described in Supporting Table 2 in the online version of this article. Those with a score ≥4 were almost 4 times as likely to be transferred to the ICU, almost 7 times as likely to experience an RRT, and almost 10 times as likely to die. The screen positive, sensitivity, specificity, and positive and negative predictive values and likelihood ratios using this threshold and our composite outcome in the derivation cohort were 6%, 16%, 97%, 26%, 94%, 5.3, and 0.9, respectively, and in our validation cohort were 6%, 17%, 97%, 28%, 95%, 5.7, and 0.9, respectively.
In the preimplementation period, 3.8% of admissions (595/15,567) triggered the alert, as compared to 3.5% (545/15,526) in the postimplementation period. Demographics were similar across periods, except that in the postimplementation period patients were slightly younger and had a lower Charlson Comorbidity Index at admission (Table 1). The distribution of alerts across medicine and surgery services were similar (Table 1).
| Hospitals A–C | Preimplementation | Postimplementation | P Value |
|---|---|---|---|
| No. of encounters | 15,567 | 15,526 | |
| No. of alerts | 595 (4%) | 545 (4%) | 0.14 |
| Age, y, median (IQR) | 62.0 (48.5–70.5) | 59.7 (46.1–69.6) | 0.04 |
| Female | 298 (50%) | 274 (50%) | 0.95 |
| Race | | | |
| White | 343 (58%) | 312 (57%) | 0.14 |
| Black | 207 (35%) | 171 (31%) | |
| Other | 23 (4%) | 31 (6%) | |
| Unknown | 22 (4%) | 31 (6%) | |
| Admission type | | | |
| Elective | 201 (34%) | 167 (31%) | 0.40 |
| ED | 300 (50%) | 278 (51%) | |
| Transfer | 94 (16%) | 99 (18%) | |
| BMI, kg/m², median (IQR) | 27.0 (23.0–32.0) | 26.0 (22.0–31.0) | 0.24 |
| Previous ICU admission | 137 (23%) | 127 (23%) | 0.91 |
| RRT before alert | 27 (5%) | 20 (4%) | 0.46 |
| Admission Charlson index, median (IQR) | 2.0 (1.0–4.0) | 2.0 (1.0–4.0) | 0.04 |
| Admitting service | | | |
| Medicine | 398 (67%) | 364 (67%) | 0.18 |
| Surgery | 173 (29%) | 169 (31%) | |
| Other | 24 (4%) | 12 (2%) | |
| Service where alert fired | | | |
| Medicine | 391 (66%) | 365 (67%) | 0.18 |
| Surgery | 175 (29%) | 164 (30%) | |
| Other | 29 (5%) | 15 (3%) | |
In our postimplementation period, 99% of coordinator pages and over three‐fourths of provider notifications were sent successfully. Almost three‐fourths of nurses reviewed the initial alert notification, and over 99% completed the electronic data verification and adverse trend review, with over half documenting adverse trends. Ninety‐five percent of the time the coordinators completed the follow‐up assessment. Over 90% of the time, the entire team evaluated the patient at bedside within 30 minutes. Almost half of the time, the team thought the patient had no critical illness. Over a third of the time, they thought the patient had sepsis, but reported over 90% of the time that they were aware of the diagnosis prior to the alert. (Supporting Table 3 in the online version of this article includes more details about the responses to the electronic notifications and follow‐up assessments.)
In unadjusted and adjusted analyses, ordering of antibiotics, intravenous fluid boluses, and lactate and blood cultures within 3 hours of the trigger increased significantly, as did ordering of blood products, chest radiographs, and cardiac monitoring within 6 hours of the trigger (Tables 2 and 3).
| Hospitals A–C | Preimplementation | Postimplementation | P Value |
|---|---|---|---|
| No. of alerts | 595 | 545 | |
| 500 mL IV bolus order <3 h after alert | 92 (15%) | 142 (26%) | <0.01 |
| IV/PO antibiotic order <3 h after alert | 75 (13%) | 123 (23%) | <0.01 |
| IV/PO sepsis antibiotic order <3 h after alert | 61 (10%) | 85 (16%) | <0.01 |
| Lactic acid order <3 h after alert | 57 (10%) | 128 (23%) | <0.01 |
| Blood culture order <3 h after alert | 68 (11%) | 99 (18%) | <0.01 |
| Blood gas order <6 h after alert | 53 (9%) | 59 (11%) | 0.28 |
| CBC or BMP <6 h after alert | 247 (42%) | 219 (40%) | 0.65 |
| Vasopressor <6 h after alert | 17 (3%) | 21 (4%) | 0.35 |
| Bronchodilator administration <6 h after alert | 71 (12%) | 64 (12%) | 0.92 |
| RBC, plasma, or platelet transfusion order <6 h after alert | 31 (5%) | 52 (10%) | <0.01 |
| Naloxone order <6 h after alert | 0 (0%) | 1 (0%) | 0.30 |
| AV node blocker order <6 h after alert | 35 (6%) | 35 (6%) | 0.70 |
| Loop diuretic order <6 h after alert | 35 (6%) | 28 (5%) | 0.58 |
| CXR <6 h after alert | 92 (15%) | 113 (21%) | 0.02 |
| CT head, chest, or ABD <6 h after alert | 29 (5%) | 34 (6%) | 0.31 |
| Cardiac monitoring (ECG or telemetry) <6 h after alert | 70 (12%) | 90 (17%) | 0.02 |
| | All alerted patients, unadjusted odds ratio | All alerted patients, adjusted odds ratio | Discharged with sepsis code, unadjusted odds ratio | Discharged with sepsis code, adjusted odds ratio |
|---|---|---|---|---|
| 500 mL IV bolus order <3 h after alert | 1.93 (1.44–2.58) | 1.93 (1.43–2.61) | 1.64 (1.11–2.43) | 1.65 (1.10–2.47) |
| IV/PO antibiotic order <3 h after alert | 2.02 (1.48–2.77) | 2.02 (1.46–2.78) | 1.99 (1.32–3.00) | 2.02 (1.32–3.09) |
| IV/PO sepsis antibiotic order <3 h after alert | 1.62 (1.14–2.30) | 1.57 (1.10–2.25) | 1.63 (1.05–2.53) | 1.65 (1.05–2.58) |
| Lactic acid order <3 h after alert | 2.90 (2.07–4.06) | 3.11 (2.19–4.41) | 2.41 (1.58–3.67) | 2.79 (1.79–4.34) |
| Blood culture <3 h after alert | 1.72 (1.23–2.40) | 1.76 (1.25–2.47) | 1.36 (0.87–2.10) | 1.40 (0.90–2.20) |
| Blood gas order <6 h after alert | 1.24 (0.84–1.83) | 1.32 (0.89–1.97) | 1.06 (0.63–1.77) | 1.13 (0.67–1.92) |
| BMP or CBC order <6 h after alert | 0.95 (0.75–1.20) | 0.96 (0.75–1.21) | 1.00 (0.70–1.44) | 1.04 (0.72–1.50) |
| Vasopressor order <6 h after alert | 1.36 (0.71–2.61) | 1.47 (0.76–2.83) | 1.32 (0.58–3.04) | 1.38 (0.59–3.25) |
| Bronchodilator administration <6 h after alert | 0.98 (0.69–1.41) | 1.02 (0.70–1.47) | 1.13 (0.64–1.99) | 1.17 (0.65–2.10) |
| Transfusion order <6 h after alert | 1.92 (1.21–3.04) | 1.95 (1.23–3.11) | 1.65 (0.91–3.01) | 1.68 (0.91–3.10) |
| AV node blocker order <6 h after alert | 1.10 (0.68–1.78) | 1.20 (0.72–2.00) | 0.38 (0.13–1.08) | 0.39 (0.12–1.20) |
| Loop diuretic order <6 h after alert | 0.87 (0.52–1.44) | 0.93 (0.56–1.57) | 1.63 (0.63–4.21) | 1.87 (0.70–5.00) |
| CXR <6 h after alert | 1.43 (1.06–1.94) | 1.47 (1.08–1.99) | 1.45 (0.94–2.24) | 1.56 (1.00–2.43) |
| CT <6 h after alert | 1.30 (0.78–2.16) | 1.30 (0.78–2.19) | 0.97 (0.52–1.82) | 0.94 (0.49–1.79) |
| Cardiac monitoring <6 h after alert | 1.48 (1.06–2.08) | 1.54 (1.09–2.16) | 1.32 (0.79–2.18) | 1.44 (0.86–2.41) |
Hospital and ICU length of stay were similar in the preimplementation and postimplementation periods. There was no difference in the proportion of patients transferred to the ICU following the alert; however, the proportion transferred within 6 hours of the alert increased, and the time to ICU transfer was halved (see Supporting Figure 4 in the online version of this article), but neither change was statistically significant in unadjusted analyses. Transfer to the ICU within 6 hours became statistically significant after adjustment. All mortality measures were lower in the postimplementation period, but none reached statistical significance. Discharge to home and sepsis documentation were both statistically higher in the postimplementation period, but discharge to home lost statistical significance after adjustment (Tables 4 and 5) (see Supporting Table 4 in the online version of this article).
| Hospitals A–C | Preimplementation | Postimplementation | P Value |
|---|---|---|---|
| No. of alerts | 595 | 545 | |
| Hospital LOS, d, median (IQR) | 10.1 (5.1–19.1) | 9.4 (5.2–18.9) | 0.92 |
| ICU LOS after alert, d, median (IQR) | 3.4 (1.7–7.4) | 3.6 (1.9–6.8) | 0.72 |
| ICU transfer <6 h after alert | 40 (7%) | 53 (10%) | 0.06 |
| ICU transfer <24 h after alert | 71 (12%) | 79 (14%) | 0.20 |
| ICU transfer any time after alert | 134 (23%) | 124 (23%) | 0.93 |
| Time to first ICU after alert, h, median (IQR) | 21.3 (4.4–63.9) | 11.0 (2.3–58.7) | 0.22 |
| RRT <6 h after alert | 13 (2%) | 9 (2%) | 0.51 |
| Mortality of all patients | 52 (9%) | 41 (8%) | 0.45 |
| Mortality 30 days after alert | 48 (8%) | 33 (6%) | 0.19 |
| Mortality of those transferred to ICU | 40 (30%) | 32 (26%) | 0.47 |
| Deceased or IP hospice | 94 (16%) | 72 (13%) | 0.22 |
| Discharge to home | 347 (58%) | 351 (64%) | 0.04 |
| Disposition location | | | |
| Home | 347 (58%) | 351 (64%) | 0.25 |
| SNF | 89 (15%) | 65 (12%) | |
| Rehab | 24 (4%) | 20 (4%) | |
| LTC | 8 (1%) | 9 (2%) | |
| Other hospital | 16 (3%) | 6 (1%) | |
| Expired | 52 (9%) | 41 (8%) | |
| Hospice IP | 42 (7%) | 31 (6%) | |
| Hospice other | 11 (2%) | 14 (3%) | |
| Other location | 6 (1%) | 8 (1%) | |
| Sepsis discharge diagnosis | 230 (39%) | 247 (45%) | 0.02 |
| Sepsis mortality index (O/E) | 1.37 | 1.06 | 0.18 |
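The sepsis O/E row above is an observed-to-expected mortality ratio. As a minimal sketch (the expected death counts come from the risk-adjustment model described in the Supporting Information, which is not reproduced here, and the counts in the test are hypothetical):

```python
def mortality_index(observed_deaths, expected_deaths):
    """Observed-to-expected (O/E) mortality ratio. Values above 1.0 mean
    more deaths occurred than the risk-adjustment model predicted."""
    return observed_deaths / expected_deaths
```

For instance, 52 observed deaths against a hypothetical 38 expected deaths would give an O/E near the preimplementation value of 1.37.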
| | All alerted patients, unadjusted estimate | All alerted patients, adjusted estimate | Discharged with sepsis code, unadjusted estimate | Discharged with sepsis code, adjusted estimate |
|---|---|---|---|---|
| Hospital LOS, d | 1.01 (0.92–1.11) | 1.02 (0.93–1.12) | 0.99 (0.85–1.15) | 1.00 (0.87–1.16) |
| ICU transfer | 1.49 (0.97–2.29) | 1.65 (1.07–2.55) | 1.61 (0.92–2.84) | 1.82 (1.02–3.25) |
| Time to first ICU transfer after alert, h | 1.17 (0.87–1.57) | 1.23 (0.92–1.66) | 1.21 (0.83–1.75) | 1.31 (0.90–1.90) |
| ICU LOS, d | 1.01 (0.77–1.31) | 0.99 (0.76–1.28) | 0.87 (0.62–1.21) | 0.88 (0.64–1.21) |
| RRT | 0.75 (0.32–1.77) | 0.84 (0.35–2.02) | 0.81 (0.29–2.27) | 0.82 (0.27–2.43) |
| Mortality | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) | 0.85 (0.55–1.30) | 0.98 (0.63–1.53) |
| Mortality within 30 days of alert | 0.73 (0.46–1.16) | 0.87 (0.54–1.40) | 0.59 (0.34–1.04) | 0.69 (0.38–1.26) |
| Mortality or inpatient hospice transfer | 0.82 (0.47–1.41) | 0.78 (0.44–1.41) | 0.67 (0.36–1.25) | 0.65 (0.33–1.29) |
| Discharge to home | 1.29 (1.02–1.64) | 1.18 (0.91–1.52) | 1.36 (0.95–1.95) | 1.22 (0.81–1.84) |
| Sepsis discharge diagnosis | 1.32 (1.04–1.67) | 1.43 (1.10–1.85) | NA | NA |
In a subanalysis of EWRS impact on patients documented with sepsis at discharge, unadjusted and adjusted changes in clinical process and outcome measures across the time periods were similar to that of the total population (see Supporting Tables 5 and 6 and Supporting Figure 5 in the online version of this article). The unadjusted composite outcome of mortality or inpatient hospice was statistically lower in the postimplementation period, but lost statistical significance after adjustment.
The disposition and mortality outcomes of those not triggering the alert were unchanged across the 2 periods (see Supporting Tables 7, 8, and 9 in the online version of this article).
DISCUSSION
This study demonstrated that a predictive tool can accurately identify non‐ICU inpatients at increased risk for deterioration and death. In addition, we demonstrated the feasibility of deploying our EHR to screen patients in real time for deterioration and to electronically trigger a timely, robust, multidisciplinary bedside clinical evaluation. Compared to the control (silent) period, the EWRS resulted in marked increases in early sepsis care, transfer to the ICU, and sepsis documentation, as well as suggestions of a decreased sepsis mortality index, decreased mortality, and increased discharge to home, although none of these latter 3 findings reached statistical significance.
Our study is unique in that it was implemented across a multihospital health system, which has identical EHRs, but diverse cultures, populations, staffing, and practice models. In addition, our study includes a preimplementation population similar to the postimplementation population (in terms of setting, month of admission, and adjustment for potential confounders).
Interestingly, patients identified by the EWRS who were subsequently transferred to an ICU had higher mortality rates (30% and 26% in the preimplementation and postimplementation periods, respectively, across UPHS) than those transferred to an ICU who were not identified by the EWRS (7% and 6% in the preimplementation and postimplementation periods, respectively, across UPHS) (Table 4) (see Supporting Table 7 in the online version of this article). This finding was robust to the study period, so is likely not related to the bedside evaluation prompted by the EWRS. It suggests the EWRS could help triage patients for appropriateness of ICU transfer, a particularly valuable role that should be explored further given the typical strains on ICU capacity,[13] and the mortality resulting from delays in patient transfers into ICUs.[14, 15]
Although we did not find a statistically significant mortality reduction, our study may have been underpowered to detect this outcome. Our study has other limitations. First, our preimplementation/postimplementation design may not fully account for secular changes in sepsis mortality. However, our comparison of similar time periods and our adjustment for observed demographic differences allow us to estimate with more certainty the change in sepsis care and mortality attributable to the intervention. Second, our study did not examine the effect of the EWRS on mortality after hospital discharge, where many such events occur. However, our capture of at least 45 hospital days on all study patients, as well as our inclusion of only those who died or were discharged during our study period, and our assessment of discharge disposition such as hospice, increase the chance that mortality reductions directly attributable to the EWRS were captured. Third, although the EWRS changed patient management, we did not assess the appropriateness of management changes. However, the impact of care changes was captured crudely by examining mortality rates and discharge disposition. Fourth, our study was limited to a single academic healthcare system, and our experience may not be generalizable to other healthcare systems with different EHRs and staff. However, the integration of our automated alert into a commercial EHR serving a diverse array of patient populations, clinical services, and service models throughout our healthcare system may improve the generalizability of our experience to other settings.
CONCLUSION
By leveraging readily available electronic data, an automated prediction tool identified at‐risk patients and mobilized care teams, resulting in more timely sepsis care, improved sepsis documentation, and a suggestion of reduced mortality. This alert may be scalable to other healthcare systems.
Acknowledgements
The authors thank Jennifer Barger, MS, BSN, RN; Patty Baroni, MSN, RN; Patrick J. Donnelly, MS, RN, CCRN; Mika Epps, MSN, RN; Allen L. Fasnacht, MSN, RN; Neil O. Fishman, MD; Kevin M. Fosnocht, MD; David F. Gaieski, MD; Tonya Johnson, MSN, RN, CCRN; Craig R. Kean, MS; Arash Kia, MD, MS; Matthew D. Mitchell, PhD; Stacie Neefe, BSN, RN; Nina J. Renzi, BSN, RN, CCRN; Alexander Roederer; Jean C. Romano, MSN, RN, NE‐BC; Heather Ross, BSN, RN, CCRN; William D. Schweickert, MD; Esme Singer, MD; and Kendal Williams, MD, MPH for their help in developing, testing, and operationalizing the EWRS examined in this study; their assistance in data acquisition; and for advice regarding data analysis. This study was previously presented as an oral abstract at the 2013 American Medical Informatics Association Meeting, November 16–20, 2013, Washington, DC.
Disclosures: Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources, grant UL1RR024134, which is now at the National Center for Advancing Translational Sciences, grant UL1TR000003. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors report no potential financial conflicts of interest relevant to this article.
1. Benchmarking the incidence and mortality of severe sepsis in the United States. Crit Care Med. 2013;41(5):1167–1174.
2. Surviving sepsis campaign: international guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med. 2013;41(2):580–637.
3. The Surviving Sepsis Campaign: results of an international guideline‐based performance improvement program targeting severe sepsis. Crit Care Med. 2010;38(2):367–374.
4. Early goal‐directed therapy in severe sepsis and septic shock revisited: concepts, controversies, and contemporary findings. Chest. 2006;130(5):1579–1595.
5. Early goal‐directed therapy in the treatment of severe sepsis and septic shock. N Engl J Med. 2001;345(19):1368–1377.
6. Severe sepsis cohorts derived from claims‐based strategies appear to be biased toward a more severely ill patient population. Crit Care Med. 2013;41(4):945–953.
7. A trial of a real‐time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013;8(5):236–242.
8. Bedside electronic capture of clinical observations and automated clinical alerts to improve compliance with an Early Warning Score protocol. Crit Care Resusc. 2011;13(2):83–88.
9. Prospective trial of real‐time electronic surveillance to expedite early care of severe sepsis. Ann Emerg Med. 2011;57(5):500–504.
10. Implementation of a real‐time computerized sepsis alert in nonintensive care unit patients. Crit Care Med. 2011;39(3):469–473.
11. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest. 1992;101(6):1644–1655.
12. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Crit Care Med. 2003;31(4):1250–1256.
13. Rationing critical care beds: a systematic review. Crit Care Med. 2004;32(7):1588–1597.
14. Delayed admission to intensive care unit for critically surgical patients is associated with increased mortality. Am J Surg. 2014;208:268–274.
15. Impact of delayed admission to intensive care units on mortality of critically ill patients: a cohort study. Crit Care. 2011;15(1):R28.
© 2014 Society of Hospital Medicine
Hospital Readmissions and Preventability
Hospital readmissions cost Medicare $15 to $17 billion per year.[1, 2] In 2010, the Hospital Readmission Reduction Program (HRRP), created by the Patient Protection and Affordable Care Act, authorized the Centers for Medicare and Medicaid Services (CMS) to penalize hospitals with higher‐than‐expected readmission rates for certain index conditions.[3] Other payers may follow suit, so hospitals and health systems nationwide are devoting significant resources to reducing readmissions.[4, 5, 6]
Implicit in these efforts are the assumptions that a significant proportion of readmissions are preventable, and that preventable readmissions can be identified. Unfortunately, estimates of preventability vary widely.[7, 8] In this article, we examine how preventable readmissions have been defined, measured, and calculated, and explore the associated implications for readmission reduction efforts.
THE MEDICARE READMISSION METRIC
The medical literature reveals substantial heterogeneity in how readmissions are assessed. Time periods range from 14 days to 4 years, and readmissions may be counted differently depending on whether they are to the same hospital or to any hospital, whether they are for the same (or a related) condition or for any condition, whether a patient is allowed to count only once during the follow‐up period, how mortality is treated, and whether observation stays are considered.[9]
Despite a lack of consensus in the literature, the approach adopted by CMS is endorsed by the National Quality Forum (NQF)[10] and has become the de facto standard for calculating readmission rates. CMS derives risk‐standardized readmission rates for acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN), using administrative claims data for each Medicare fee‐for‐service beneficiary 65 years or older.[11, 12, 13, 14] CMS counts the first readmission (but not subsequent ones) for any cause within 30 days of the index discharge, including readmissions to other facilities. Certain planned readmissions for revascularization are excluded, as are patients who left against medical advice, transferred to another acute‐care hospital, or died during the index admission. Admissions to psychiatric, rehabilitation, cancer specialty, and children's hospitals[12] are also excluded, as well as patients classified as observation status for either hospital stay.[15] Only administrative data are used in readmission calculations (ie, there are no chart reviews or interviews with healthcare personnel or patients). Details are published online and updated at least annually.[15]
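The counting rules above can be sketched in a few lines of code. This is a hypothetical simplification for illustration only: the encounter fields and structure are invented, and the actual CMS specification (published online and updated annually) adds risk standardization and further exclusions not shown here.

```python
from datetime import date, timedelta

def counts_as_readmission(index_discharge, encounters):
    """Sketch of the CMS-style 30-day readmission flag for one index stay.

    `encounters` is a list of dicts with hypothetical keys:
      'admit' (date), 'is_observation' (bool), 'is_planned_revasc' (bool).
    Only the first qualifying admission within 30 days counts; subsequent
    readmissions in the window are ignored.
    """
    window_end = index_discharge + timedelta(days=30)
    for enc in sorted(encounters, key=lambda e: e['admit']):
        if not (index_discharge <= enc['admit'] <= window_end):
            continue  # outside the 30-day window
        if enc['is_observation'] or enc['is_planned_revasc']:
            continue  # observation stays and planned revascularization excluded
        return True   # first unplanned inpatient admission within 30 days
    return False
```

A stay on day 10 would set the flag, while an observation-only visit on day 5 or an admission on day 40 would not.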
EFFECTS AND LIMITATIONS OF THE HRRP AND THE CMS READMISSION METRIC
Penalizing hospitals for higher‐than‐expected readmission rates based on the CMS metric has been successful in the sense that hospitals now feel more accountable for patient outcomes after discharge; they are implementing transitional care programs, improving communication, and building relationships with community programs.[4, 5, 16] Early data suggest a small decline in readmission rates of Medicare beneficiaries nationally.[17] Previously, such readmission rates were constant.[18]
Nevertheless, significant concerns with the current approach have surfaced.[19, 20, 21] First, why choose 30 days? This time horizon was believed to be long enough to identify readmissions attributable to an index admission and short enough to reflect hospital‐delivered care and transitions to the outpatient setting, and it allows for collaboration between hospitals and their communities to reduce readmissions.[3] However, some have argued that this time horizon has little scientific basis,[22] and that hospitals are unfairly held accountable for a timeframe when outcomes may largely be influenced by the quality of outpatient care or the development of new problems.[23, 24] Approximately one‐third of 30‐day readmissions occur within the first 7 days, and more than half (55.7%) occur within the first 14 days[22, 25]; such time frames may be more appropriate for hospital accountability.[26]
Second, spurred by the focus of CMS penalties, efforts to reduce readmissions have largely concerned patients admitted for HF, AMI, or PN, although these 3 medical conditions account for only 10% of Medicare hospitalizations.[18] Programs focused on a narrow patient population may not benefit other patients with high readmission rates, such as those with gastrointestinal or psychiatric problems,[2] or lead to improvements in the underlying processes of care that could benefit patients in additional ways. Indeed, research suggests that low readmission rates may not be related to other measures of hospital quality.[27, 28]
Third, public reporting and hospital penalties are based on 3‐year historical performance, in part to accumulate a large enough sample size for each diagnosis. Hospitals that seek real‐time performance monitoring are limited to tracking surrogate outcomes, such as readmissions back to their own facility.[29, 30] Moreover, because of the long performance time frame, hospitals that achieve rapid improvement may endure penalties precisely when they are attempting to sustain their achievements.
Fourth, the CMS approach utilizes a complex risk‐standardization methodology, which has only modest ability to predict readmissions and allow hospital comparisons.[9] There is no adjustment for community characteristics, even though practice patterns are significantly associated with readmission rates,[9, 31] and more than half of the variation in readmission rates across hospitals can be explained by characteristics of the community such as access to care.[32] Moreover, patient factors, such as race and socioeconomic status, are currently not included in an attempt to hold hospitals to similar standards regardless of their patient population. This is hotly contested, however, and critics note this policy penalizes hospitals for factors outside of their control, such as patients' ability to afford medications.[33] Indeed, the June 2013 Medicare Payment Advisory Committee (MedPAC) report to Congress recommended evaluating hospital performance against facilities with a like percentage of low‐income patients as a way to take into account socioeconomic status.[34]
Fifth, observation stays are excluded, so patients who remain in observation status during their index or subsequent hospitalization cannot be counted as a readmission. Prevalence of observation care has increased, raising concerns that inpatient admissions are being shifted to observation status, producing an artificial decline in readmissions.[35] Fortunately, recent population‐level data provide some reassuring evidence to the contrary.[36]
Finally, and perhaps most significantly, the current readmission metric does not consider preventability. Recent reviews have demonstrated that estimates of preventability vary widely in individual studies, ranging from 5% to 79%, depending on study methodology and setting.[7, 8] Across these studies, on average, only 23% of 30‐day readmissions appear to be avoidable.[8] Another way to consider the preventability of hospital readmissions is by noting that the most effective multimodal care‐transition interventions reduce readmission rates by only about 30%, and most interventions are much less effective.[26] The likely fact that only 23% to 30% of readmissions are preventable has profound implications for the anticipated results of hospital readmission reduction efforts. Interventions that are 75% effective in reducing preventable readmissions should be expected to produce only an 18% to 22% reduction in overall readmission rates.[37]
FOCUSING ON PREVENTABLE READMISSIONS
A greater focus on identifying and targeting preventable readmissions would offer a number of advantages over the present approach. First, it is more meaningful to compare hospitals based on their percentage of discharges resulting in a preventable readmission than on the basis of highly complex risk standardization procedures for selected conditions. Second, a focus on preventable readmissions more clearly identifies and permits hospitals to target opportunities for improvement. Third, if the focus were on preventable readmissions for a large number of conditions, the necessary sample size could be obtained over a shorter period of time. Overall, such a preventable readmissions metric could serve as a more agile and undiluted performance indicator, as opposed to the present 3‐year rolling average rate of all‐cause readmissions for certain conditions, the majority of which are probably not preventable.
DEFINING PREVENTABILITY
Defining a preventable readmission is critically important. However, neither a consensus definition nor a validated standard for assessing preventable hospital readmissions exists. Different conceptual frameworks and terms (eg, avoidable, potentially preventable, or urgent readmission) complicate the issue.[38, 39, 40]
Although the CMS measure does not address preventability, it is helpful to consider whether other readmission metrics incorporate this concept. The United Health Group's (UHG, formerly Pacificare) All‐Cause Readmission Index, University HealthSystem Consortium's 30‐Day Readmission Rate (all cause), and 3M Health Information Systems' (3M) Potentially Preventable Readmissions (PPR) are 3 commonly used measures.
Of these, only the 3M PPR metric includes the concept of preventability. 3M created a proprietary matrix of 98,000 readmission‐index admission All Patient Refined Diagnosis Related Group pairs based on the review of several physicians and the logical assumption that a readmission for a clinically related diagnosis is potentially preventable.[24, 41] Readmission and index admissions are considered clinically related if any of the following occur: (1) medical readmission for continuation or recurrence of an initial, or closely related, condition; (2) medical readmission for acute decompensation of a chronic condition that was not the reason for the index admission but was plausibly related to care during or immediately afterward (eg, readmission for diabetes in a patient whose index admission was AMI); (3) medical readmission for acute complication plausibly related to care during index admission; (4) readmission for surgical procedure for continuation or recurrence of initial problem (eg, readmission for appendectomy following admission for abdominal pain and fever); or (5) readmission for surgical procedure to address complication resulting from care during index admission.[24, 41] The readmission time frame is not standardized and may be set by the user. Though conceptually appealing in some ways, CMS and the NQF have expressed concern about this specific approach because of the uncertain reliability of the relatedness of the admission‐readmission diagnosis dyads.[3]
In the research literature, only a few studies have examined the 3M PPR or other preventability assessments that rely on the relatedness of diagnostic codes.[8] Using the 3M PPR, a study showed that 78% of readmissions were classified as potentially preventable,[42] which explains why the 3M PPR and all‐cause readmission metric may correlate highly.[43] Others have demonstrated that ratings of hospital performance on readmission rates vary by a moderate to large amount, depending on whether the 3M PPR, CMS, or UHG methodology is used.[43, 44] An algorithm called SQLape[45, 46] is used in Switzerland to benchmark hospitals and defines potentially avoidable readmissions as being related to index diagnoses or complications of those conditions. It has recently been tested in the United States in a single‐center study,[47] and a multihospital study is underway.
Aside from these algorithms using related diagnosis codes, most ratings of preventability have relied on subjective assessments made primarily through a review of hospital records, and approximately one‐third also included data from clinic visits or interviews with the treating medical team or patients/families.[8] Unfortunately, these reports provide insufficient detail on how to apply their preventability criteria to subsequent readmission reviews. Studies did, however, provide categories of preventability into which readmissions could be organized (see Supporting Information, Appendix Table 1, in the online version of this article for details from a subset of studies cited in van Walraven's reviews that illustrate this point).
Assessment of preventability by clinician review can be challenging. In general, such assessments have considered readmissions resulting from factors within the hospital's control to be avoidable (eg, providing appropriate discharge instructions, reconciling medications, arranging timely postdischarge follow‐up appointments), whereas readmissions resulting from factors not within the hospital's control are unavoidable (eg, patient socioeconomic status, social support, disease progression). However, readmissions resulting from patient behaviors or social reasons could potentially be classified as avoidable or unavoidable depending on the circumstances. For example, if a patient decides not to take a prescribed antibiotic and is readmitted with worsening infection, this could be classified as an unavoidable readmission from the hospital's perspective. Alternatively, if the physician prescribing the antibiotic was inattentive to the cost of the medication and the patient would have taken a less expensive medication had it been prescribed, this could be classified as an avoidable readmission. Differing interpretations of contextual factors may partially account for the variability in clinical assessments of preventability.
Indeed, despite the lack of consensus around a standard method of defining preventability, hospitals and health systems are moving forward to address the issue and reduce readmissions. A recent survey by America's Essential Hospitals (previously the National Association of Public Hospitals and Health Systems) indicated that: (1) reducing readmissions was a high priority for the majority (86%) of members, (2) most had established interdisciplinary teams to address the issue, and (3) over half had a formal process for determining which readmissions were potentially preventable. Of the survey respondents, just over one‐third rely on staff review of individual patient charts or patient and family interviews, and slightly less than one‐third rely on other mechanisms such as external consultants, criteria developed by other entities, or the Institute for Clinical Systems Improvement methodology.[48] Approximately one‐fifth make use of 3M's PPR product, and slightly fewer use the list of the Agency for Healthcare Research and Quality's ambulatory care sensitive conditions (ACSCs). These are medical conditions for which it is believed that good outpatient care could prevent the need for hospitalization (eg, asthma, congestive heart failure, diabetes) or for which early intervention minimizes complications.[49] Hospitalization rates for ACSCs may represent a good measure of excess hospitalization, with a focus on the quality of outpatient care.
RECOMMENDATIONS
We recommend that reporting of hospital readmission rates be based on preventable or potentially preventable readmissions. Although we acknowledge the challenges in doing so, the advantages are notable. At minimum, a preventable readmission rate would more accurately reflect the true gap in care and therefore hospitals' real opportunity for improvement, without being obscured by readmissions that are not preventable.
Because readmission rates are used for public reporting and financial penalties for hospitals, we favor a measure of preventability that reflects the readmissions that the hospital or hospital system has the ability to prevent. This would not penalize hospitals for factors that are under the control of others, namely patients and caregivers, community supports, or society at large. We further recommend that this measure apply to a broader composite of unplanned care, inclusive of both inpatient and observation stays, which have little distinction in patients' eyes, and both represent potentially unnecessary utilization of acute‐care resources.[50] Such a measure would require development, validation, and appropriate vetting before it is implemented.
The first step is for researchers and policy makers to agree on how a measure of preventable or potentially preventable readmissions could be defined. A common element of preventability assessment is to identify the degree to which the reasons for readmission are related to the diagnoses of the index hospitalization. To be reliable and scalable, this measure will need to be based on algorithms that relate the index and readmission diagnoses, most likely using claims data. Choosing common medical and surgical conditions and developing a consensus‐based list of related readmission diagnoses is an important first step. It would also be important to include some less common conditions, because they may reflect very different aspects of hospital care.
An approach based on a list of related diagnoses would represent potentially preventable rehospitalizations. Generally, clinical review is required to determine actual preventability, taking into account patient factors such as a high level of illness or functional impairment that leads to clinical decompensation in spite of excellent management.[51, 52] Clinical review, like a root cause analysis, also provides greater insight into hospital processes that may warrant improvement. Therefore, even if an administrative measure of potentially preventable readmissions is implemented, hospitals may wish to continue performing detailed clinical review of some readmissions for quality improvement purposes. When clinical review becomes more standardized,[53] a combined approach that uses administrative data plus clinical verification and arbitration may be feasible, as with hospital‐acquired infections.
Similar work to develop related sets of admission and readmission diagnoses has already been undertaken in development of the 3M PPR and SQLape measures.[41, 46] However, the 3M PPR is a proprietary system that has low specificity and a high false‐positive rate for identifying preventable readmissions when compared to clinical review.[42] Moreover, neither measure has yet achieved the consensus required for widespread adoption in the United States. What is needed is a nonproprietary listing of related admission and readmission diagnoses, developed with the engagement of relevant stakeholders, that goes through a period of public comment and vetting by a body such as the NQF.
Until a validated measure of potentially preventable readmission can be developed, how could the current approach evolve toward preventability? The most feasible, rapidly implementable change would be to alter the readmission time horizon from 30 days to 7 or 15 days. A 30‐day period holds hospitals accountable for complications of outpatient care or new problems that may develop weeks after discharge. Even though this may foster shared accountability and collaboration among hospitals and outpatient or community settings, research has demonstrated that early readmissions (eg, within 7–15 days of discharge) are more likely preventable.[54] Second, consideration of the socioeconomic status of hospital patients, as recommended by MedPAC,[34] would improve on the current model by comparing hospitals to like facilities when determining penalties for excess readmission rates. Finally, adjustment for community factors, such as practice patterns and access to care, would enable readmission metrics to better reflect factors under the hospital's control.[32]
CONCLUSION
Holding hospitals accountable for the quality of acute and transitional care is an important policy initiative that has accelerated many improvements in discharge planning and care coordination. Optimally, the policies, public reporting, and penalties should target preventable readmissions, which may represent as little as one‐quarter of all readmissions. By summarizing some of the issues in defining preventability, we hope to foster continued refinement of quality metrics used in this arena.
Acknowledgements
We thank Eduard Vasilevskis, MD, MPH, for feedback on an earlier draft of this article. This manuscript was informed by a special report titled Preventable Readmissions, written by Julia Lavenberg, Joel Betesh, David Goldmann, Craig Kean, and Kendal Williams of the Penn Medicine Center for Evidence‐based Practice. The review was performed at the request of the Penn Medicine Chief Medical Officer Patrick J. Brennan to inform the development of local readmission prevention metrics, and is available at
Disclosures
Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1TR000003. Dr. Kripalani receives support from the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number R01HL109388, and from the Centers for Medicare and Medicaid Services under awards 1C1CMS331006‐01 and 1C1CMS330979‐01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or Centers for Medicare and Medicaid Services.
- Physician Visits After Hospital Discharge: Implications for Reducing Readmissions. Washington, DC: National Institute for Health Care Reform; 2011. Report no. 6.
- Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
- Centers for Medicare and Medicaid Services, US Department of Health and Human Services. Medicare program: hospital inpatient prospective payment systems for acute care hospitals and the long‐term care hospital prospective payment system and FY 2012 rates. Fed Regist. 2011;76(160):51476–51846.
- Quality collaboratives and campaigns to reduce readmissions: what strategies are hospitals using? J Hosp Med. 2013;8:601–608.
- Contemporary data about hospital strategies to reduce unplanned readmissions: what has changed [research letter]? JAMA Intern Med. 2014;174(1):154–156.
- Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
- Comparing methods to calculate hospital‐specific rates of early death or urgent readmission. CMAJ. 2012;184(15):E810–E817.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
- National Quality Forum. Patient outcomes: all‐cause readmissions expedited review 2011. Available at: http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id
- Data shows reduction in Medicare hospital readmission rates during 2012. Medicare Medicaid Res Rev. 2013;3(2):E1–E11.
- Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
- Moving beyond readmission penalties: creating an ideal process to improve transitional care. J Hosp Med. 2013;8(2):102–109.
- A path forward on Medicare readmissions. N Engl J Med. 2013;368(13):1175–1177.
- American Hospital Association. TrendWatch: examining the drivers of readmissions and reducing unnecessary readmissions for better patient care. Washington, DC: American Hospital Association; 2011.
- Diagnoses and timing of 30‐day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355–363.
- Characteristics of hospitals receiving penalties under the hospital readmissions reduction program. JAMA. 2013;309(4):342–343.
- Identifying potentially preventable readmissions. Health Care Financ Rev. 2008;30(1):75–91.
- Use of hospital‐based acute care among patients recently discharged from the hospital. JAMA. 2013;309(4):364–371.
- Reducing hospital readmission rates: current strategies and future directions. Annu Rev Med. 2014;65:471–485.
- Relationship between hospital readmission and mortality rates for patients hospitalized with acute myocardial infarction, heart failure, or pneumonia. JAMA. 2013;309(6):587–593.
- Hospital performance measures and 30‐day readmission rates. J Gen Intern Med. 2013;28(3):377–385.
- Limitations of using same‐hospital readmission metrics. Int J Qual Health Care. 2013;25(6):633–639.
- Is same‐hospital readmission rate a good surrogate for all‐hospital readmission rate? Med Care. 2010;48(5):477–481.
- The relationship between hospital admission rates and rehospitalizations. N Engl J Med. 2011;365(24):2287–2295.
- Community factors and hospital readmission rates [published online April 9, 2014]. Health Serv Res. doi: 10.1111/1475-6773.12177.
- American Hospital Association. Hospital readmissions reduction program: factsheet. Available at: http://www.aha.org/content/13/fs‐readmissions.pdf. Published April 14, 2014. Accessed May 5, 2014.
- Medicare Payment Advisory Commission. Report to the congress: Medicare and the health care delivery system. Available at: http://www.medpac.gov/documents/Jun13_EntireReport.pdf. Published June 14, 2013. Accessed May 5, 2014.
- Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Aff (Millwood). 2012;31(6):1251–1259.
- Quality improvement of care transitions and the trend of composite hospital care. JAMA. 2014;311(10):1013–1014.
- When projecting required effectiveness of interventions for hospital readmission reduction, the percentage that is potentially avoidable must be considered. J Clin Epidemiol. 2013;66(6):688–690.
- Urgent readmission rates can be used to infer differences in avoidable readmission rates between hospitals. J Clin Epidemiol. 2012;65(10):1124–1130.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Measuring and preventing potentially avoidable hospital readmissions: a review of the literature. Hong Kong Med J. 2010;16(5):383–389.
- 3M Health Information Systems. Potentially preventable readmissions classification system methodology: overview. 3M Health Information Systems; May 2008. Report No.: GRP‐139. Available at: http://multimedia.3m.com/mws/mediawebserver?66666UuZjcFSLXTtNXMtmxMEEVuQEcuZgVs6EVs6E666666‐‐. Accessed June 8, 2014.
- Manual and automated methods for identifying potentially preventable readmissions: a comparison in a large healthcare system. BMC Med Inform Decis Mak. 2014;14:28.
- Comparing 2 methods of assessing 30‐day readmissions: what is the impact on hospital profiling in the Veterans Health Administration? Med Care. 2013;51(7):589–596.
- It's not six of one, half‐dozen the other: a comparative analysis of 3 rehospitalization measurement systems for Massachusetts. Academy Health Annual Research Meeting. Seattle, WA. 2011. Available at: http://www.academyhealth.org/files/2011/tuesday/boutwell.pdf. Accessed May 9, 2014.
- Validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. Med Care. 2006;44(11):972–981.
- Measuring potentially avoidable hospital readmissions. J Clin Epidemiol. 2002;55:573–587.
- Potentially avoidable 30‐day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632–638.
- National Association of Public Hospitals and Health Systems. NAPH members focus on reducing readmissions. Available at: www.naph.org. Published June 2011. Accessed October 19, 2011.
- Agency for Healthcare Research and Quality. AHRQ quality indicators: prevention quality indicators. Available at: http://www.qualityindicators.ahrq.gov/Modules/pqi_resources.aspx. Accessed February 11, 2014.
- Shifting the dialogue from hospital readmissions to unplanned care. Am J Manag Care. 2013;19(6):450–453.
- Post‐hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100–102.
- The hospital‐dependent patient. N Engl J Med. 2014;370(8):694–697.
- The hospital medicine reengineering network (HOMERuN): a learning organization focused on improving hospital care. Acad Med. 2014;89(3):415–420.
- Incidence of potentially avoidable urgent readmissions and their relation to all‐cause urgent readmissions. CMAJ. 2011;183(14):E1067–E1072.
Third, public reporting and hospital penalties are based on 3‐year historical performance, in part to accumulate a large enough sample size for each diagnosis. Hospitals that seek real‐time performance monitoring are limited to tracking surrogate outcomes, such as readmissions back to their own facility.[29, 30] Moreover, because of the long performance time frame, hospitals that achieve rapid improvement may endure penalties precisely when they are attempting to sustain their achievements.
Fourth, the CMS approach utilizes a complex risk‐standardization methodology, which has only modest ability to predict readmissions and allow hospital comparisons.[9] There is no adjustment for community characteristics, even though practice patterns are significantly associated with readmission rates,[9, 31] and more than half of the variation in readmission rates across hospitals can be explained by characteristics of the community such as access to care.[32] Moreover, patient factors, such as race and socioeconomic status, are currently not included, in an attempt to hold hospitals to similar standards regardless of their patient population. This is hotly contested, however, and critics note this policy penalizes hospitals for factors outside of their control, such as patients' ability to afford medications.[33] Indeed, the June 2013 Medicare Payment Advisory Commission (MedPAC) report to Congress recommended evaluating hospital performance against facilities with a similar percentage of low‐income patients as a way to take socioeconomic status into account.[34]
Fifth, observation stays are excluded, so patients who remain in observation status during their index or subsequent hospitalization cannot be counted as a readmission. Prevalence of observation care has increased, raising concerns that inpatient admissions are being shifted to observation status, producing an artificial decline in readmissions.[35] Fortunately, recent population‐level data provide some reassuring evidence to the contrary.[36]
Finally, and perhaps most significantly, the current readmission metric does not consider preventability. Recent reviews have demonstrated that estimates of preventability vary widely across individual studies, ranging from 5% to 79% depending on study methodology and setting.[7, 8] Across these studies, on average, only 23% of 30‐day readmissions appear to be avoidable.[8] Another way to gauge preventability is to note that even the most effective multimodal care‐transition interventions reduce readmission rates by only about 30%, and most interventions are much less effective.[26] If, as these figures suggest, only 23% to 30% of readmissions are preventable, the anticipated results of hospital readmission reduction efforts are bounded accordingly: an intervention that eliminated 75% of preventable readmissions would be expected to reduce overall readmission rates by only 18% to 22%.[37]
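The arithmetic behind this expectation can be made explicit: the achievable overall reduction is simply the product of the preventable fraction and the intervention's effectiveness against preventable readmissions. A minimal sketch:

```python
def expected_overall_reduction(preventable_fraction: float,
                               effectiveness: float) -> float:
    """Overall readmission-rate reduction when an intervention eliminates a
    given fraction of the *preventable* readmissions only."""
    return preventable_fraction * effectiveness

# With 23%-30% of readmissions preventable and an intervention that removes
# 75% of the preventable ones, the overall rate falls by roughly 17%-23%,
# bracketing the 18%-22% range cited in the text.
low = expected_overall_reduction(0.23, 0.75)   # 0.1725
high = expected_overall_reduction(0.30, 0.75)  # 0.225
```

The exact products here (about 17% and 23%) bracket the 18% to 22% figure cited above; small differences reflect rounding in the source estimates.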
FOCUSING ON PREVENTABLE READMISSIONS
A greater focus on identifying and targeting preventable readmissions would offer a number of advantages over the present approach. First, it is more meaningful to compare hospitals on the percentage of discharges resulting in a preventable readmission than on the basis of highly complex risk‐standardization procedures for selected conditions. Second, a focus on preventable readmissions more clearly identifies opportunities for improvement and permits hospitals to target them. Third, if the focus were on preventable readmissions for a large number of conditions, the necessary sample size could be accumulated over a shorter period of time. Overall, such a preventable readmissions metric could serve as a more agile and undiluted performance indicator than the present 3‐year rolling average rate of all‐cause readmissions for certain conditions, the majority of which are probably not preventable.
DEFINING PREVENTABILITY
Defining a preventable readmission is critically important. However, neither a consensus definition nor a validated standard for assessing preventable hospital readmissions exists. Different conceptual frameworks and terms (eg, avoidable, potentially preventable, or urgent readmission) complicate the issue.[38, 39, 40]
Although the CMS measure does not address preventability, it is helpful to consider whether other readmission metrics incorporate this concept. United Health Group's (UHG, formerly Pacificare) All‐Cause Readmission Index, the University HealthSystem Consortium's 30‐Day Readmission Rate (all cause), and 3M Health Information Systems' (3M) Potentially Preventable Readmissions (PPR) are 3 commonly used measures.
Of these, only the 3M PPR metric includes the concept of preventability. 3M created a proprietary matrix of 98,000 readmission‐index admission All Patient Refined Diagnosis Related Group (APR‐DRG) pairs, based on review by several physicians and the logical assumption that a readmission for a clinically related diagnosis is potentially preventable.[24, 41] A readmission and index admission are considered clinically related if any of the following occur: (1) medical readmission for continuation or recurrence of an initial, or closely related, condition; (2) medical readmission for acute decompensation of a chronic condition that was not the reason for the index admission but was plausibly related to care during or immediately after it (eg, readmission for diabetes in a patient whose index admission was for AMI); (3) medical readmission for an acute complication plausibly related to care during the index admission; (4) readmission for a surgical procedure to address continuation or recurrence of the initial problem (eg, readmission for appendectomy following admission for abdominal pain and fever); or (5) readmission for a surgical procedure to address a complication resulting from care during the index admission.[24, 41] The readmission time frame is not standardized and may be set by the user. Though conceptually appealing in some ways, CMS and the NQF have expressed concern about this specific approach because of the uncertain reliability of the relatedness of the admission‐readmission diagnosis dyads.[3]
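Mechanically, this kind of algorithm reduces to a lookup: is the (index diagnosis, readmission diagnosis) pair in a relatedness table? The actual 3M matrix is proprietary, so the table below is a hypothetical placeholder built only from the examples in the text:

```python
# Hypothetical relatedness table. The real 3M matrix pairs ~98,000 APR-DRG
# combinations and is proprietary; these two entries merely echo the
# worked examples given in the text (categories 2 and 4).
RELATED_PAIRS = {
    ("acute myocardial infarction", "diabetes decompensation"),
    ("abdominal pain and fever", "appendectomy"),
}

def is_potentially_preventable(index_dx: str, readmit_dx: str) -> bool:
    """Flag a readmission as potentially preventable when the diagnosis
    pair appears in the relatedness table."""
    return (index_dx, readmit_dx) in RELATED_PAIRS
```

The point of the sketch is the design, not the entries: preventability is asserted wholesale for any readmission whose diagnosis pair is in the table, which is exactly why the reliability of the pairs themselves is the crux of the CMS and NQF concern.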
In the research literature, only a few studies have examined the 3M PPR or other preventability assessments that rely on the relatedness of diagnostic codes.[8] Using the 3M PPR, one study classified 78% of readmissions as potentially preventable,[42] which may explain why the 3M PPR and all‐cause readmission metrics correlate highly.[43] Others have demonstrated that ratings of hospital performance on readmission rates vary by a moderate to large amount depending on whether the 3M PPR, CMS, or UHG methodology is used.[43, 44] An algorithm called SQLape[45, 46] is used in Switzerland to benchmark hospitals; it defines potentially avoidable readmissions as those related to index diagnoses or to complications of those conditions. It has recently been tested in the United States in a single‐center study,[47] and a multihospital study is underway.
Aside from these algorithms using related diagnosis codes, most ratings of preventability have relied on subjective assessments made primarily through a review of hospital records, and approximately one‐third also included data from clinic visits or interviews with the treating medical team or patients/families.[8] Unfortunately, these reports provide insufficient detail on how to apply their preventability criteria to subsequent readmission reviews. Studies did, however, provide categories of preventability into which readmissions could be organized (see Supporting Information, Appendix Table 1, in the online version of this article for details from a subset of studies cited in van Walraven's reviews that illustrate this point).
Assessment of preventability by clinician review can be challenging. In general, such assessments have considered readmissions resulting from factors within the hospital's control to be avoidable (eg, providing appropriate discharge instructions, reconciling medications, arranging timely postdischarge follow‐up appointments), whereas readmissions resulting from factors not within the hospital's control are unavoidable (eg, patient socioeconomic status, social support, disease progression). However, readmissions resulting from patient behaviors or social reasons could potentially be classified as avoidable or unavoidable depending on the circumstances. For example, if a patient decides not to take a prescribed antibiotic and is readmitted with worsening infection, this could be classified as an unavoidable readmission from the hospital's perspective. Alternatively, if the physician prescribing the antibiotic was inattentive to the cost of the medication and the patient would have taken a less expensive medication had it been prescribed, this could be classified as an avoidable readmission. Differing interpretations of contextual factors may partially account for the variability in clinical assessments of preventability.
Indeed, despite the lack of consensus on a standard method of defining preventability, hospitals and health systems are moving forward to address the issue and reduce readmissions. A recent survey by America's Essential Hospitals (previously the National Association of Public Hospitals and Health Systems) indicated that: (1) reducing readmissions was a high priority for the majority (86%) of members, (2) most had established interdisciplinary teams to address the issue, and (3) over half had a formal process for determining which readmissions were potentially preventable. Of the survey respondents, just over one‐third rely on staff review of individual patient charts or patient and family interviews, and slightly less than one‐third rely on other mechanisms such as external consultants, criteria developed by other entities, or the Institute for Clinical Systems Improvement methodology.[48] Approximately one‐fifth make use of 3M's PPR product, and slightly fewer use the Agency for Healthcare Research and Quality's list of ambulatory care sensitive conditions (ACSCs). These are medical conditions for which good outpatient care is believed to prevent the need for hospitalization (eg, asthma, congestive heart failure, diabetes) or for which early intervention minimizes complications.[49] Hospitalization rates for ACSCs may represent a good measure of excess hospitalization, with a focus on the quality of outpatient care.
RECOMMENDATIONS
We recommend that reporting of hospital readmission rates be based on preventable or potentially preventable readmissions. Although we acknowledge the challenges in doing so, the advantages are notable. At minimum, a preventable readmission rate would more accurately reflect the true gap in care and therefore hospitals' real opportunity for improvement, without being obscured by readmissions that are not preventable.
Because readmission rates are used for public reporting and financial penalties for hospitals, we favor a measure of preventability that reflects the readmissions that the hospital or hospital system has the ability to prevent. This would not penalize hospitals for factors that are under the control of others, namely patients and caregivers, community supports, or society at large. We further recommend that this measure apply to a broader composite of unplanned care, inclusive of both inpatient and observation stays, which have little distinction in patients' eyes, and both represent potentially unnecessary utilization of acute‐care resources.[50] Such a measure would require development, validation, and appropriate vetting before it is implemented.
The first step is for researchers and policy makers to agree on how a measure of preventable or potentially preventable readmissions could be defined. A common element of preventability assessment is to identify the degree to which the reasons for readmission are related to the diagnoses of the index hospitalization. To be reliable and scalable, this measure will need to be based on algorithms that relate the index and readmission diagnoses, most likely using claims data. Choosing common medical and surgical conditions and developing a consensus‐based list of related readmission diagnoses is an important first step. It would also be important to include some less common conditions, because they may reflect very different aspects of hospital care.
An approach based on a list of related diagnoses would represent potentially preventable rehospitalizations. Generally, clinical review is required to determine actual preventability, taking into account patient factors such as a high level of illness or functional impairment that leads to clinical decompensation in spite of excellent management.[51, 52] Clinical review, like a root cause analysis, also provides greater insight into hospital processes that may warrant improvement. Therefore, even if an administrative measure of potentially preventable readmissions is implemented, hospitals may wish to continue performing detailed clinical review of some readmissions for quality improvement purposes. When clinical review becomes more standardized,[53] a combined approach that uses administrative data plus clinical verification and arbitration may be feasible, as with hospital‐acquired infections.
Similar work to develop related sets of admission and readmission diagnoses has already been undertaken in development of the 3M PPR and SQLape measures.[41, 46] However, the 3M PPR is a proprietary system that has low specificity and a high false‐positive rate for identifying preventable readmissions when compared to clinical review.[42] Moreover, neither measure has yet achieved the consensus required for widespread adoption in the United States. What is needed is a nonproprietary listing of related admission and readmission diagnoses, developed with the engagement of relevant stakeholders, that goes through a period of public comment and vetting by a body such as the NQF.
Until a validated measure of potentially preventable readmissions can be developed, how could the current approach evolve toward preventability? The most feasible, rapidly implementable change would be to shorten the readmission time horizon from 30 days to 7 or 15 days. A 30‐day period holds hospitals accountable for complications of outpatient care and for new problems that may develop weeks after discharge. Even though this may foster shared accountability and collaboration among hospitals and outpatient or community settings, research has demonstrated that early readmissions (eg, within 7 to 15 days of discharge) are more likely to be preventable.[54] Second, consideration of the socioeconomic status of hospital patients, as recommended by MedPAC,[34] would improve on the current model by comparing hospitals to like facilities when determining penalties for excess readmission rates. Finally, adjustment for community factors, such as practice patterns and access to care, would enable readmission metrics to better reflect factors under the hospital's control.[32]
CONCLUSION
Holding hospitals accountable for the quality of acute and transitional care is an important policy initiative that has accelerated many improvements in discharge planning and care coordination. Optimally, the policies, public reporting, and penalties should target preventable readmissions, which may represent as little as one‐quarter of all readmissions. By summarizing some of the issues in defining preventability, we hope to foster continued refinement of quality metrics used in this arena.
Acknowledgements
We thank Eduard Vasilevskis, MD, MPH, for feedback on an earlier draft of this article. This manuscript was informed by a special report titled Preventable Readmissions, written by Julia Lavenberg, Joel Betesh, David Goldmann, Craig Kean, and Kendal Williams of the Penn Medicine Center for Evidence‐based Practice. The review was performed at the request of the Penn Medicine Chief Medical Officer Patrick J. Brennan to inform the development of local readmission prevention metrics, and is available at
Disclosures
Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1TR000003. Dr. Kripalani receives support from the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number R01HL109388, and from the Centers for Medicare and Medicaid Services under awards 1C1CMS331006‐01 and 1C1CMS330979‐01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or Centers for Medicare and Medicaid Services.
Implicit in these efforts are the assumptions that a significant proportion of readmissions are preventable, and that preventable readmissions can be identified. Unfortunately, estimates of preventability vary widely.[7, 8] In this article, we examine how preventable readmissions have been defined, measured, and calculated, and explore the associated implications for readmission reduction efforts.
THE MEDICARE READMISSION METRIC
The medical literature reveals substantial heterogeneity in how readmissions are assessed. Time periods range from 14 days to 4 years, and readmissions may be counted differently depending on whether they are to the same hospital or to any hospital, whether they are for the same (or a related) condition or for any condition, whether a patient is allowed to count only once during the follow‐up period, how mortality is treated, and whether observation stays are considered.[9]
Despite a lack of consensus in the literature, the approach adopted by CMS is endorsed by the National Quality Forum (NQF)[10] and has become the de facto standard for calculating readmission rates. CMS derives risk‐standardized readmission rates for acute myocardial infarction (AMI), heart failure (HF), and pneumonia (PN), using administrative claims data for each Medicare fee‐for‐service beneficiary 65 years or older.[11, 12, 13, 14] CMS counts the first readmission (but not subsequent ones) for any cause within 30 days of the index discharge, including readmissions to other facilities. Certain planned readmissions for revascularization are excluded, as are patients who left against medical advice, transferred to another acute‐care hospital, or died during the index admission. Admissions to psychiatric, rehabilitation, cancer specialty, and children's hospitals[12] are also excluded, as well as patients classified as observation status for either hospital stay.[15] Only administrative data are used in readmission calculations (ie, there are no chart reviews or interviews with healthcare personnel or patients). Details are published online and updated at least annually.[15]
EFFECTS AND LIMITATIONS OF THE HRRP AND THE CMS READMISSION METRIC
Penalizing hospitals for higher‐than‐expected readmission rates based on the CMS metric has been successful in the sense that hospitals now feel more accountable for patient outcomes after discharge; they are implementing transitional care programs, improving communication, and building relationships with community programs.[4, 5, 16] Early data suggest a small decline in readmission rates of Medicare beneficiaries nationally.[17] Previously, such readmission rates were constant.[18]
Nevertheless, significant concerns with the current approach have surfaced.[19, 20, 21] First, why choose 30 days? This time horizon was believed to be long enough to identify readmissions attributable to an index admission and short enough to reflect hospital‐delivered care and transitions to the outpatient setting, and it allows for collaboration between hospitals and their communities to reduce readmissions.[3] However, some have argued that this time horizon has little scientific basis,[22] and that hospitals are unfairly held accountable for a timeframe when outcomes may largely be influenced by the quality of outpatient care or the development of new problems.[23, 24] Approximately one‐third of 30‐day readmissions occur within the first 7 days, and more than half (55.7%) occur within the first 14 days[22, 25]; such time frames may be more appropriate for hospital accountability.[26]
Second, spurred by the focus of CMS penalties, efforts to reduce readmissions have largely concerned patients admitted for HF, AMI, or PN, although these 3 medical conditions account for only 10% of Medicare hospitalizations.[18] Programs focused on a narrow patient population may not benefit other patients with high readmission rates, such as those with gastrointestinal or psychiatric problems,[2] or lead to improvements in the underlying processes of care that could benefit patients in additional ways. Indeed, research suggests that low readmission rates may not be related to other measures of hospital quality.[27, 28]
Third, public reporting and hospital penalties are based on 3‐year historical performance, in part to accumulate a large enough sample size for each diagnosis. Hospitals that seek real‐time performance monitoring are limited to tracking surrogate outcomes, such as readmissions back to their own facility.[29, 30] Moreover, because of the long performance time frame, hospitals that achieve rapid improvement may endure penalties precisely when they are attempting to sustain their achievements.
Fourth, the CMS approach utilizes a complex risk‐standardization methodology, which has only modest ability to predict readmissions and allow hospital comparisons.[9] There is no adjustment for community characteristics, even though practice patterns are significantly associated with readmission rates,[9, 31] and more than half of the variation in readmission rates across hospitals can be explained by characteristics of the community such as access to care.[32] Moreover, patient factors, such as race and socioeconomic status, are currently not included in an attempt to hold hospitals to similar standards regardless of their patient population. This is hotly contested, however, and critics note this policy penalizes hospitals for factors outside of their control, such as patients' ability to afford medications.[33] Indeed, the June 2013 Medicare Payment Advisory Committee (MedPAC) report to Congress recommended evaluating hospital performance against facilities with a like percentage of low‐income patients as a way to take into account socioeconomic status.[34]
Fifth, observation stays are excluded, so patients who remain in observation status during their index or subsequent hospitalization cannot be counted as a readmission. Prevalence of observation care has increased, raising concerns that inpatient admissions are being shifted to observation status, producing an artificial decline in readmissions.[35] Fortunately, recent population‐level data provide some reassuring evidence to the contrary.[36]
Finally, and perhaps most significantly, the current readmission metric does not consider preventability. Recent reviews have demonstrated that estimates of preventability vary widely in individual studies, ranging from 5% to 79%, depending on study methodology and setting.[7, 8] Across these studies, on average, only 23% of 30‐day readmissions appear to be avoidable.[8] Another way to consider the preventability of hospital readmissions is by noting that the most effective multimodal care‐transition interventions reduce readmission rates by only about 30%, and most interventions are much less effective.[26] The likely fact that only 23% to 30% of readmissions are preventable has profound implications for the anticipated results of hospital readmission reduction efforts. Interventions that are 75% effective in reducing preventable readmissions should be expected to produce only an 18% to 22% reduction in overall readmission rates.[37]
FOCUSING ON PREVENTABLE READMISSIONS
A greater focus on identifying and targeting preventable readmissions would offer a number of advantages over the present approach. First, it is more meaningful to compare hospitals based on their percentage of discharges resulting in a preventable readmission, than on the basis of highly complex risk standardization procedures for selected conditions. Second, a focus on preventable readmissions more clearly identifies and permits hospitals to target opportunities for improvement. Third, if the focus were on preventable readmissions for a large number of conditions, the necessary sample size could be obtained over a shorter period of time. Overall, such a preventable readmissions metric could serve as a more agile and undiluted performance indicator, as opposed to the present 3‐year rolling average rate of all‐cause readmissions for certain conditions, the majority of which are probably not preventable.
DEFINING PREVENTABILITY
Defining a preventable readmission is critically important. However, neither a consensus definition nor a validated standard for assessing preventable hospital readmissions exists. Different conceptual frameworks and terms (eg, avoidable, potentially preventable, or urgent readmission) complicate the issue.[38, 39, 40]
Although the CMS measure does not address preventability, it is helpful to consider whether other readmission metrics incorporate this concept. The United Health Group's (UHG, formerly Pacificare) All‐Cause Readmission Index, University HealthSystem Consortium's 30‐Day Readmission Rate (all cause), and 3M Health Information Systems' (3M) Potentially Preventable Readmissions (PPR) are 3 commonly used measures.
Of these, only the 3M PPR metric includes the concept of preventability. 3M created a proprietary matrix of 98,000 readmission‐index admission All Patient Refined Diagnosis Related Group pairs based on the review of several physicians and the logical assumption that a readmission for a clinically related diagnosis is potentially preventable.[24, 41] Readmission and index admissions are considered clinically related if any of the following occur: (1) medical readmission for continuation or recurrence of an initial, or closely related, condition; (2) medical readmission for acute decompensation of a chronic condition that was not the reason for the index admission but was plausibly related to care during or immediately afterward (eg, readmission for diabetes in a patient whose index admission was AMI); (3) medical readmission for acute complication plausibly related to care during index admission; (4) readmission for surgical procedure for continuation or recurrence of initial problem (eg, readmission for appendectomy following admission for abdominal pain and fever); or (5) readmission for surgical procedure to address complication resulting from care during index admission.[24, 41] The readmission time frame is not standardized and may be set by the user. Though conceptually appealing in some ways, CMS and the NQF have expressed concern about this specific approach because of the uncertain reliability of the relatedness of the admission‐readmission diagnosis dyads.[3]
In the research literature, only a few studies have examined the 3M PPR or other preventability assessments that rely on the relatedness of diagnostic codes.[8] Using the 3M PPR, a study showed that 78% of readmissions were classified as potentially preventable,[42] which explains why the 3M PPR and all‐cause readmission metric may correlate highly.[43] Others have demonstrated that ratings of hospital performance on readmission rates vary by a moderate to large amount, depending on whether the 3M PPR, CMS, or UHG methodology is used.[43, 44] An algorithm called SQLape[45, 46] is used in Switzerland to benchmark hospitals and defines potentially avoidable readmissions as being related to index diagnoses or complications of those conditions. It has recently been tested in the United States in a single‐center study,[47] and a multihospital study is underway.
Aside from these algorithms using related diagnosis codes, most ratings of preventability have relied on subjective assessments made primarily through a review of hospital records, and approximately one‐third also included data from clinic visits or interviews with the treating medical team or patients/families.[8] Unfortunately, these reports provide insufficient detail on how to apply their preventability criteria to subsequent readmission reviews. Studies did, however, provide categories of preventability into which readmissions could be organized (see Supporting Information, Appendix Table 1, in the online version of this article for details from a subset of studies cited in van Walraven's reviews that illustrate this point).
Assessment of preventability by clinician review can be challenging. In general, such assessments have considered readmissions resulting from factors within the hospital's control to be avoidable (eg, providing appropriate discharge instructions, reconciling medications, arranging timely postdischarge follow‐up appointments), whereas readmissions resulting from factors not within the hospital's control are unavoidable (eg, patient socioeconomic status, social support, disease progression). However, readmissions resulting from patient behaviors or social reasons could potentially be classified as avoidable or unavoidable depending on the circumstances. For example, if a patient decides not to take a prescribed antibiotic and is readmitted with worsening infection, this could be classified as an unavoidable readmission from the hospital's perspective. Alternatively, if the physician prescribing the antibiotic was inattentive to the cost of the medication and the patient would have taken a less expensive medication had it been prescribed, this could be classified as an avoidable readmission. Differing interpretations of contextual factors may partially account for the variability in clinical assessments of preventability.
Despite the lack of consensus around a standard method of defining preventability, hospitals and health systems are moving forward to address the issue and reduce readmissions. A recent survey by America's Essential Hospitals (previously the National Association of Public Hospitals and Health Systems) indicated that: (1) reducing readmissions was a high priority for the majority (86%) of members, (2) most had established interdisciplinary teams to address the issue, and (3) over half had a formal process for determining which readmissions were potentially preventable. Just over one‐third of respondents relied on staff review of individual patient charts or patient and family interviews, and slightly less than one‐third relied on other mechanisms such as external consultants, criteria developed by other entities, or the Institute for Clinical Systems Improvement methodology.[48] Approximately one‐fifth used 3M's PPR product, and slightly fewer used the Agency for Healthcare Research and Quality's list of ambulatory care sensitive conditions (ACSCs). These are medical conditions for which good outpatient care is believed to prevent the need for hospitalization (eg, asthma, congestive heart failure, diabetes) or for which early intervention minimizes complications.[49] Hospitalization rates for ACSCs may represent a good measure of excess hospitalization, with a focus on the quality of outpatient care.
RECOMMENDATIONS
We recommend that reporting of hospital readmission rates be based on preventable or potentially preventable readmissions. Although we acknowledge the challenges in doing so, the advantages are notable. At minimum, a preventable readmission rate would more accurately reflect the true gap in care and therefore hospitals' real opportunity for improvement, without being obscured by readmissions that are not preventable.
Because readmission rates are used for public reporting and financial penalties for hospitals, we favor a measure of preventability that reflects the readmissions that the hospital or hospital system has the ability to prevent. This would not penalize hospitals for factors that are under the control of others, namely patients and caregivers, community supports, or society at large. We further recommend that this measure apply to a broader composite of unplanned care, inclusive of both inpatient and observation stays, which have little distinction in patients' eyes, and both represent potentially unnecessary utilization of acute‐care resources.[50] Such a measure would require development, validation, and appropriate vetting before it is implemented.
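The composite of unplanned care recommended above can be expressed as a single rate combining inpatient readmissions and observation stays. The function and figures below are purely illustrative and are not drawn from any published measure specification:

```python
def composite_revisit_rate(readmissions: int, observation_stays: int,
                           discharges: int) -> float:
    """Composite unplanned-care rate: inpatient readmissions plus
    observation stays, per index discharge."""
    return (readmissions + observation_stays) / discharges

# Hypothetical figures: 120 readmissions and 30 observation stays among
# 1,000 index discharges give a composite rate of 15%, higher than the
# 12% that an inpatient-only readmission metric would report.
rate = composite_revisit_rate(120, 30, 1000)
```

Because observation stays are excluded from the current readmission measure, a hospital could appear to improve while total unplanned acute-care use stays flat; a composite rate makes that shift visible.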
The first step is for researchers and policy makers to agree on how a measure of preventable or potentially preventable readmissions could be defined. A common element of preventability assessment is to identify the degree to which the reasons for readmission are related to the diagnoses of the index hospitalization. To be reliable and scalable, such a measure will need to be based on algorithms that relate the index and readmission diagnoses, most likely using claims data. Choosing common medical and surgical conditions and developing a consensus‐based list of related readmission diagnoses is a practical starting point. It would also be important to include some less common conditions, because they may reflect very different aspects of hospital care.
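A minimal sketch of the related-diagnosis approach described above. The condition labels and the mapping are invented for illustration only; they are not taken from the 3M PPR, SQLape, or any real grouper, and a real implementation would build such lists from claims diagnosis codes through the consensus process the text calls for:

```python
# Hypothetical mapping: index condition -> diagnoses considered "related"
# for readmission purposes. Entirely invented for this sketch.
RELATED_DIAGNOSES = {
    "heart_failure": {"heart_failure", "fluid_overload", "acute_kidney_injury"},
    "pneumonia": {"pneumonia", "respiratory_failure", "sepsis"},
}

def potentially_preventable(index_dx: str, readmission_dx: str) -> bool:
    """Flag a readmission as potentially preventable when its diagnosis
    appears on the related-diagnosis list for the index admission."""
    return readmission_dx in RELATED_DIAGNOSES.get(index_dx, set())
```

Under this scheme, a heart-failure patient readmitted with fluid overload would be flagged as potentially preventable, while a readmission for an unrelated problem would not. As the text notes, such a flag marks only potential preventability; determining actual preventability still requires clinical review.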
An approach based on a list of related diagnoses would represent potentially preventable rehospitalizations. Generally, clinical review is required to determine actual preventability, taking into account patient factors such as a high level of illness or functional impairment that leads to clinical decompensation in spite of excellent management.[51, 52] Clinical review, like a root cause analysis, also provides greater insight into hospital processes that may warrant improvement. Therefore, even if an administrative measure of potentially preventable readmissions is implemented, hospitals may wish to continue performing detailed clinical review of some readmissions for quality improvement purposes. When clinical review becomes more standardized,[53] a combined approach that uses administrative data plus clinical verification and arbitration may be feasible, as with hospital‐acquired infections.
Similar work to develop related sets of admission and readmission diagnoses has already been undertaken in development of the 3M PPR and SQLape measures.[41, 46] However, the 3M PPR is a proprietary system that has low specificity and a high false‐positive rate for identifying preventable readmissions when compared to clinical review.[42] Moreover, neither measure has yet achieved the consensus required for widespread adoption in the United States. What is needed is a nonproprietary listing of related admission and readmission diagnoses, developed with the engagement of relevant stakeholders, that goes through a period of public comment and vetting by a body such as the NQF.
Until a validated measure of potentially preventable readmission can be developed, how could the current approach evolve toward preventability? The most feasible, rapidly implementable change would be to alter the readmission time horizon from 30 days to 7 or 15 days. A 30‐day period holds hospitals accountable for complications of outpatient care or new problems that may develop weeks after discharge. Even though this may foster shared accountability and collaboration among hospitals and outpatient or community settings, research has demonstrated that early readmissions (eg, within 7 to 15 days of discharge) are more likely preventable.[54] Second, consideration of the socioeconomic status of hospital patients, as recommended by MedPAC,[34] would improve on the current model by comparing hospitals to like facilities when determining penalties for excess readmission rates. Finally, adjustment for community factors, such as practice patterns and access to care, would enable readmission metrics to better reflect factors under the hospital's control.[32]
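The effect of shortening the accountability window can be made concrete with invented days-to-readmission data (the cohort below is hypothetical, not from any study):

```python
def readmissions_within(days_to_revisit, window_days):
    """Count revisits occurring within `window_days` of discharge."""
    return sum(1 for d in days_to_revisit if d <= window_days)

# Invented cohort: days from discharge to each readmission.
cohort = [2, 5, 9, 14, 22, 28]
# A 7- or 15-day window retains the earlier readmissions, which the cited
# research suggests are more likely preventable, and excludes later
# readmissions that may reflect outpatient care or new problems.
```

With this cohort, a 30-day window counts all six readmissions, a 15-day window counts four, and a 7-day window counts two, illustrating how the choice of horizon shifts accountability toward events closer to the hospital's control.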
CONCLUSION
Holding hospitals accountable for the quality of acute and transitional care is an important policy initiative that has accelerated many improvements in discharge planning and care coordination. Optimally, the policies, public reporting, and penalties should target preventable readmissions, which may represent as little as one‐quarter of all readmissions. By summarizing some of the issues in defining preventability, we hope to foster continued refinement of quality metrics used in this arena.
Acknowledgements
We thank Eduard Vasilevskis, MD, MPH, for feedback on an earlier draft of this article. This manuscript was informed by a special report titled Preventable Readmissions, written by Julia Lavenberg, Joel Betesh, David Goldmann, Craig Kean, and Kendal Williams of the Penn Medicine Center for Evidence‐based Practice. The review was performed at the request of the Penn Medicine Chief Medical Officer Patrick J. Brennan to inform the development of local readmission prevention metrics, and is available at
Disclosures
Dr. Umscheid's contribution to this project was supported in part by the National Center for Research Resources and the National Center for Advancing Translational Sciences, National Institutes of Health, through grant UL1TR000003. Dr. Kripalani receives support from the National Heart, Lung, and Blood Institute of the National Institutes of Health under award number R01HL109388, and from the Centers for Medicare and Medicaid Services under awards 1C1CMS331006‐01 and 1C1CMS330979‐01. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or Centers for Medicare and Medicaid Services.
- Physician Visits After Hospital Discharge: Implications for Reducing Readmissions. Washington, DC: National Institute for Health Care Reform; 2011. Report no. 6.
- Rehospitalizations among patients in the Medicare fee‐for‐service program. N Engl J Med. 2009;360(14):1418–1428.
- Centers for Medicare and Medicaid Services, US Department of Health and Human Services. Medicare program: hospital inpatient prospective payment systems for acute care hospitals and the long‐term care hospital prospective payment system and FY 2012 rates. Fed Regist. 2011;76(160):51476–51846.
- Quality collaboratives and campaigns to reduce readmissions: what strategies are hospitals using? J Hosp Med. 2013;8:601–608.
- Contemporary data about hospital strategies to reduce unplanned readmissions: what has changed [research letter]? JAMA Intern Med. 2014;174(1):154–156.
- Interventions to reduce 30‐day rehospitalization: a systematic review. Ann Intern Med. 2011;155(8):520–528.
- Comparing methods to calculate hospital‐specific rates of early death or urgent readmission. CMAJ. 2012;184(15):E810–E817.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306(15):1688–1698.
- National Quality Forum. Patient outcomes: all‐cause readmissions expedited review 2011. Available at: http://www.qualityforum.org/WorkArea/linkit.aspx?LinkIdentifier=id60(7):607–614.
- Data shows reduction in Medicare hospital readmission rates during 2012. Medicare Medicaid Res Rev. 2013;3(2):E1–E11.
- Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366(15):1366–1369.
- Moving beyond readmission penalties: creating an ideal process to improve transitional care. J Hosp Med. 2013;8(2):102–109.
- A path forward on Medicare readmissions. N Engl J Med. 2013;368(13):1175–1177.
- American Hospital Association. TrendWatch: examining the drivers of readmissions and reducing unnecessary readmissions for better patient care. Washington, DC: American Hospital Association; 2011.
- Diagnoses and timing of 30‐day readmissions after hospitalization for heart failure, acute myocardial infarction, or pneumonia. JAMA. 2013;309(4):355–363.
- Characteristics of hospitals receiving penalties under the hospital readmissions reduction program. JAMA. 2013;309(4):342–343.
- Identifying potentially preventable readmissions. Health Care Financ Rev. 2008;30(1):75–91.
- Use of hospital‐based acute care among patients recently discharged from the hospital. JAMA. 2013;309(4):364–371.
- Reducing hospital readmission rates: current strategies and future directions. Annu Rev Med. 2014;65:471–485.
- Relationship between hospital readmission and mortality rates for patients hospitalized with acute myocardial infarction, heart failure, or pneumonia. JAMA. 2013;309(6):587–593.
- Hospital performance measures and 30‐day readmission rates. J Gen Intern Med. 2013;28(3):377–385.
- Limitations of using same‐hospital readmission metrics. Int J Qual Health Care. 2013;25(6):633–639.
- Is same‐hospital readmission rate a good surrogate for all‐hospital readmission rate? Med Care. 2010;48(5):477–481.
- The relationship between hospital admission rates and rehospitalizations. N Engl J Med. 2011;365(24):2287–2295.
- Community factors and hospital readmission rates [published online April 9, 2014]. Health Serv Res. doi: 10.1111/1475–6773.12177.
- American Hospital Association. Hospital readmissions reduction program: factsheet. Available at: http://www.aha.org/content/13/fs‐readmissions.pdf. Published April 14, 2014. Accessed May 5, 2014.
- Medicare Payment Advisory Commission. Report to the Congress: Medicare and the health care delivery system. Available at: http://www.medpac.gov/documents/Jun13_EntireReport.pdf. Published June 14, 2013. Accessed May 5, 2014.
- Sharp rise in Medicare enrollees being held in hospitals for observation raises concerns about causes and consequences. Health Aff (Millwood). 2012;31(6):1251–1259.
- Quality improvement of care transitions and the trend of composite hospital care. JAMA. 2014;311(10):1013–1014.
- When projecting required effectiveness of interventions for hospital readmission reduction, the percentage that is potentially avoidable must be considered. J Clin Epidemiol. 2013;66(6):688–690.
- Urgent readmission rates can be used to infer differences in avoidable readmission rates between hospitals. J Clin Epidemiol. 2012;65(10):1124–1130.
- Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183(7):E391–E402.
- Measuring and preventing potentially avoidable hospital readmissions: a review of the literature. Hong Kong Med J. 2010;16(5):383–389.
- 3M Health Information Systems. Potentially preventable readmissions classification system methodology: overview. 3M Health Information Systems; May 2008. Report No.: GRP‐139. Available at: http://multimedia.3m.com/mws/mediawebserver?66666UuZjcFSLXTtNXMtmxMEEVuQEcuZgVs6EVs6E666666‐‐. Accessed June 8, 2014.
- Manual and automated methods for identifying potentially preventable readmissions: a comparison in a large healthcare system. BMC Med Inform Decis Mak. 2014;14:28.
- Comparing 2 methods of assessing 30‐day readmissions: what is the impact on hospital profiling in the Veterans Health Administration? Med Care. 2013;51(7):589–596.
- It's not six of one, half‐dozen the other: a comparative analysis of 3 rehospitalization measurement systems for Massachusetts. Academy Health Annual Research Meeting. Seattle, WA; 2011. Available at: http://www.academyhealth.org/files/2011/tuesday/boutwell.pdf. Accessed May 9, 2014.
- Validation of the potentially avoidable hospital readmission rate as a routine indicator of the quality of hospital care. Med Care. 2006;44(11):972–981.
- Measuring potentially avoidable hospital readmissions. J Clin Epidemiol. 2002;55:573–587.
- Potentially avoidable 30‐day hospital readmissions in medical patients: derivation and validation of a prediction model. JAMA Intern Med. 2013;173(8):632–638.
- National Association of Public Hospitals and Health Systems. NAPH members focus on reducing readmissions. Available at: www.naph.org. Published June 2011. Accessed October 19, 2011.
- Agency for Healthcare Research and Quality. AHRQ quality indicators: prevention quality indicators. Available at: http://www.qualityindicators.ahrq.gov/Modules/pqi_resources.aspx. Accessed February 11, 2014.
- Shifting the dialogue from hospital readmissions to unplanned care. Am J Manag Care. 2013;19(6):450–453.
- Post‐hospital syndrome—an acquired, transient condition of generalized risk. N Engl J Med. 2013;368(2):100–102.
- The hospital‐dependent patient. N Engl J Med. 2014;370(8):694–697.
- The hospital medicine reengineering network (HOMERuN): a learning organization focused on improving hospital care. Acad Med. 2014;89(3):415–420.
- Incidence of potentially avoidable urgent readmissions and their relation to all‐cause urgent readmissions. CMAJ. 2011;183(14):E1067–E1072.
In the Literature: Research You Need to Know
Clinical question: Does the use of steroids and/or antivirals improve recovery in patients with newly diagnosed Bell's palsy?
Background: The American Academy of Neurology's last recommendation in 2001 stated that steroids were probably effective and antivirals possibly effective. The current review and recommendations looked at additional studies published since 2000.
Study design: Systematic review of MEDLINE and Cochrane Database of Systematic Reviews data published since June 2000.
Setting: Prospective controlled studies from Germany, Sweden, Scotland, Italy, South Korea, Japan, and Bangladesh.
Synopsis: The authors identified nine studies that fulfilled inclusion criteria. Two of these studies examined treatment with steroids alone and were judged to have the lowest risk for bias. Both studies enrolled patients within three days of symptom onset, continued treatment for 10 days, and demonstrated a significant increase in the probability of complete recovery in patients randomized to steroids (NNT 6-8). Two high-quality studies were identified that looked at the addition of antivirals to steroids. Neither study showed a statistically significant benefit.
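The reported NNT of 6 to 8 follows directly from the absolute difference in recovery rates between arms. The recovery probabilities below are invented for illustration and are not taken from the trials; only the NNT formula itself is standard:

```python
# Illustrative only: probabilities are hypothetical, not trial data.
def nnt(p_treated: float, p_control: float) -> float:
    """Number needed to treat: 1 / absolute difference in event rates."""
    return 1 / abs(p_treated - p_control)

# An absolute improvement of ~14 percentage points in complete recovery
# yields an NNT of about 7, consistent with the reported range of 6-8.
example = nnt(0.94, 0.80)
```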
Of note, the studies did not quantify the risk of harm from steroid use in patients with comorbidities, such as diabetes. Thus, the authors concluded that in some patients, it would be reasonable to consider limiting steroid use.
Bottom line: For patients with new-onset Bell’s palsy, steroids increase the probability of recovery of facial nerve function. Patients offered antivirals should be counseled that a benefit from antivirals has not been established and, if there is a benefit, it is modest at best.
Citation: Gronseth GS, Paduga R. Evidence-based guideline update: steroids and antivirals for Bell palsy: report of the Guideline Development Subcommittee of the American Academy of Neurology. Neurology. 2012;79(22):2209-2213.
Visit our website for more physician reviews of recent HM-relevant literature.
ITL: Physician Reviews of HM-Relevant Research
In This Edition
Literature At A Glance
A guide to this month’s studies
- Guidelines on steroids and antivirals to treat Bell’s palsy
- Probiotics to reduce Clostridium difficile-associated diarrhea
- Rates of hemorrhage from warfarin therapy higher in clinical practice
- Less experienced doctors incur higher treatment costs
- Pay-for-performance incentive reduces mortality in England
- No benefit in ultrafiltration to treat acute heart failure
- Hospitalized patients often receive too much acetaminophen
- Longer anticoagulation therapy beneficial after bioprosthetic aortic valve replacement
- Antimicrobial-coated catheters and risk of urinary tract infection
- Patient outcomes improve after in-hospital cardiac arrest
Updated Guidelines on Steroids and Antivirals in Bell’s Palsy
Clinical question: Does the use of steroids and/or antivirals improve recovery in patients with newly diagnosed Bell’s palsy?
Background: The American Academy of Neurology’s last recommendation in 2001 stated that steroids were probably effective and antivirals possibly effective. The current review and recommendations looked at additional studies published since 2000.
Study design: Systematic review of MEDLINE and Cochrane Database of Systematic Reviews data published since June 2000.
Setting: Prospective controlled studies from Germany, Sweden, Scotland, Italy, South Korea, Japan, and Bangladesh.
Synopsis: The authors identified nine studies that fulfilled inclusion criteria. Two of these studies examined treatment with steroids alone and were judged to have the lowest risk for bias. Both studies enrolled patients within three days of symptom onset, continued treatment for 10 days, and demonstrated a significant increase in the probability of complete recovery in patients randomized to steroids (NNT 6-8). Two high-quality studies were identified that looked at the addition of antivirals to steroids. Neither study showed a statistically significant benefit.
Of note, the studies did not quantify the risk of harm from steroid use in patients with comorbidities, such as diabetes. Thus, the authors concluded that in some patients, it would be reasonable to consider limiting steroid use.
Bottom line: For patients with new-onset Bell’s palsy, steroids increase the probability of recovery of facial nerve function. Patients offered antivirals should be counseled that a benefit from antivirals has not been established, and, if there is a benefit, it is modest at best.
Citation: Gronseth GS, Paduga R. Evidence-based guideline update: steroids and antivirals for Bell palsy: report of the Guideline Development Subcommittee of the American Academy of Neurology. Neurology. 2012;79(22):2209-2213.
Probiotic Prophylaxis Reduces Clostridium Difficile-Associated Diarrhea
Clinical question: Are probiotics a safe and efficacious therapy for the prevention of Clostridium difficile-associated diarrhea (CDAD)?
Background: CDAD is the most common cause of hospital-acquired infectious diarrhea in high-income countries. There has been a dramatic rise in the incidence and severity of CDAD since 2002. Previous studies suggested that probiotics might reduce the incidence of CDAD with few adverse events.
Study design: Systematic review and meta-analysis of the literature.
Setting: Randomized controlled trials from the U.S., Canada, Chile, China, United Kingdom, Turkey, Poland, and Sweden.
Synopsis: Through a systematic search, investigators identified 20 randomized controlled trials (3,818 participants) of a specified probiotic of any strain in adults or children treated with antibiotics. Probiotics reduced the incidence of CDAD by 66% (risk ratio 0.34, 95% CI 0.24 to 0.49). Subgroup analyses showed similar results in adults and children, with low and high doses, and with different probiotic species.
Of probiotic-treated patients, 9.3% experienced an adverse event compared with 12.6% of control patients (relative risk 0.82, 95% CI 0.65 to 1.05). There was no report of any serious adverse events attributable to probiotics.
One limitation is the considerable variability in the reported risk of CDAD in the control group (0% to 40%). The absolute benefit from probiotics will depend on the risk in patients who do not receive prophylaxis.
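To make that dependence on baseline risk concrete, here is a brief sketch applying the pooled risk ratio of 0.34 from the meta-analysis to several assumed baseline risks (the baseline values are illustrative, not study data):

```python
# Absolute benefit of probiotic prophylaxis at different baseline CDAD risks,
# applying the pooled risk ratio (RR 0.34) from the meta-analysis.
# Baseline risks are illustrative; control-group risk in the trials ranged 0%-40%.
RR = 0.34

def absolute_risk_reduction(baseline_risk: float, rr: float = RR) -> float:
    """ARR = baseline risk minus risk under treatment (baseline * RR)."""
    return baseline_risk * (1 - rr)

for baseline in (0.02, 0.10, 0.40):
    arr = absolute_risk_reduction(baseline)
    print(f"baseline {baseline:.0%}: ARR {arr:.1%}, NNT {1 / arr:.0f}")
```

The same relative reduction translates into a much larger absolute benefit, and a much smaller NNT, in high-risk settings.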
Bottom line: Moderate-quality evidence suggests that probiotic prophylaxis produces a large reduction in C. difficile-associated diarrhea without an increase in clinically important adverse events.
Citation: Johnston BC, Ma SSY, Goldenberg JZ, et al. Probiotics for the prevention of Clostridium difficile-associated diarrhea: a systematic review and meta-analysis. Ann Intern Med. 2012;157(12):878-888.
Rates of Hemorrhage from Warfarin Therapy Higher in Clinical Practice
Clinical question: What is the incidence of hemorrhage in a large population-based cohort of patients with atrial fibrillation who have started warfarin therapy?
Background: There is strong evidence that supports the use of warfarin to reduce the risk of stroke and systemic embolism in patients with atrial fibrillation. There are currently no large studies offering real-world, population-based estimates of hemorrhage rates among patients taking warfarin.
Study design: Retrospective cohort study.
Setting: Ontario.
Synopsis: This population-based cohort study included 125,195 residents of Ontario age ≥66 years with atrial fibrillation who started warfarin therapy between 1997 and 2008. Hemorrhage was defined as bleeding requiring an emergency department visit or hospital admission. The overall risk of hemorrhage was 3.8% per person-year, but it was 11.8% in the first 30 days of therapy. For subjects age >75 years, the overall risk was 4.6%, compared with 2.9% for those between 66 and 75 years.
Most hospital admissions involved gastrointestinal hemorrhages (63%). Almost 1 in 5 people (18%) with hospital admissions for hemorrhages died in the hospital or within seven days of discharge.
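For readers less familiar with person-time rates, the sketch below shows how a figure like "3.8% per person-year" is computed; the event and follow-up counts are hypothetical, not the study's raw data:

```python
# How a rate "per 100 person-years" is computed.
# The counts below are hypothetical, chosen only to reproduce the 3.8 figure.
def rate_per_100_person_years(events: int, person_years: float) -> float:
    """Events divided by total follow-up time, scaled to 100 person-years."""
    return 100.0 * events / person_years

# e.g., 38 hemorrhages observed over 1,000 person-years of warfarin exposure
print(rate_per_100_person_years(38, 1000.0))  # -> 3.8, i.e., 3.8% per person-year
```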
Bottom line: Rates of hemorrhage for older patients on warfarin therapy are significantly higher in clinical practice than the rates reported in clinical trials. The difference is likely due to the strict inclusion criteria, younger average age, and close monitoring of patients in clinical trials.
Citation: Gomes T, Mamdani MM, Holbrook AM, Paterson JM, Hellings C, Juurlink DN. Rates of hemorrhage during warfarin therapy for atrial fibrillation. CMAJ. 2013; Jan 21 [Epub ahead of print].
Less Experienced Doctors Incur Higher Treatment Costs
Clinical question: Which physician characteristics are associated with higher cost profiles?
Background: While both public and private insurers increasingly use physician cost profiles to identify physicians whose practice patterns account for more healthcare spending than other physicians, the individual physician characteristics associated with cost-profile performance are unknown.
Study design: Retrospective cohort study.
Setting: Four commercial health plans in Massachusetts.
Synopsis: Insurance claims from 1.13 million patients aged 18-65 years enrolled in one of four commercial health plans in Massachusetts in 2004 and 2005 were matched with the public records of 12,116 doctors, who were stratified into five groups according to years of experience (<10, 10-19, 20-29, 30-39, and ≥40 years).
A strong association was found between physician experience and cost profiles, with the most experienced doctors—40 or more years of experience—providing the least costly care. Costs increased with each successively less experienced group (by 2.5%, 6.5%, 10%, and 13.2% more, respectively, to treat the same condition). No association was found between cost profiles and other physician characteristics, such as having had malpractice claims or disciplinary actions, board certification status, and practice size.
Differences appear to be driven by high-cost outlier patients. While median costs were similar across experience levels, the costs of treating patients at the 95th percentile of cost were much higher among physicians with less experience.
Bottom line: Doctors in this study with the least experience incurred 13.2% greater costs than their most senior counterparts.
Citation: Mehrotra A, Reid RO, Adams JL, Friedberg MW, McGlynn EA, Hussey PS. Physicians with the least experience have higher cost profiles than do physicians with the most experience. Health Aff (Millwood). 2012;31(11):2453-2463.
Pay-For-Performance Incentive Reduces Mortality in England
Clinical question: Do pay-for-performance programs improve quality of care?
Background: Pay-for-performance programs are being widely adopted both internationally and in the U.S. There is, however, limited evidence that these programs improve patient outcomes, and most prior studies have shown modest or inconsistent improvements in quality of care.
Study design: Prospective cohort study.
Setting: National Health Service (NHS) hospitals in northwest England.
Synopsis: The Advanced Quality program, the first hospital-based pay-for-performance program in England, was introduced in October 2004 in all 24 NHS hospitals in northwest England that provide emergency care. The program used a “tournament” system in which only the top-performing hospitals received bonus payments. There was no penalty for poor performers.
The primary endpoint was 30-day in-hospital mortality among patients admitted for pneumonia, heart failure, or acute myocardial infarction. Over the three-year period studied (18 months before and 18 months after introduction of the program), risk-adjusted mortality for these three conditions decreased significantly, with an absolute reduction of 1.3% (95% CI 0.4 to 2.1; P=0.006). The largest reduction, for pneumonia, was significant (1.9%, 95% CI 0.9 to 3.0; P<0.001), with nonsignificant reductions for acute myocardial infarction (0.6%, 95% CI -0.4 to 1.7; P=0.23) and heart failure (0.6%, 95% CI -0.6 to 1.8; P=0.30).
Bottom line: The introduction of a pay-for-performance program for all National Health Service hospitals in one region of England was associated with a significant reduction in mortality.
Citation: Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med. 2012;367(19):1821-1828.
Ultrafiltration Shows No Benefit in Acute Heart Failure
Clinical question: Is ultrafiltration superior to pharmacotherapy in the treatment of patients with acute heart failure and cardiorenal syndrome?
Background: Venovenous ultrafiltration is an alternative to diuretic therapy in patients with acute decompensated heart failure and worsened renal function that could allow greater control of the rate of fluid removal and improve outcomes. Little is known about the efficacy and safety of ultrafiltration compared to standard pharmacological therapy.
Study design: Multicenter randomized controlled trial.
Setting: Fourteen clinical centers in the U.S. and Canada.
Synopsis: One hundred eighty-eight patients admitted to a hospital with acute decompensated heart failure and worsened renal function were randomized to stepped pharmacological therapy or ultrafiltration. Ultrafiltration was inferior to pharmacological therapy with respect to the prespecified primary composite endpoint, the change in serum creatinine level and body weight at 96 hours after enrollment (P=0.003). The difference was primarily due to an increase in serum creatinine in the ultrafiltration group (+0.23 vs. -0.04 mg/dL; P=0.003). There was no significant difference in weight loss at 96 hours (5.5 kg vs. 5.7 kg; P=0.58).
A higher percentage of patients in the ultrafiltration group had a serious adverse event over the 60-day follow-up period (72% vs. 57%, P=0.03). There was no significant difference in the composite rate of death or rehospitalization for heart failure in the ultrafiltration group compared to the pharmacologic-therapy group (38% vs. 35%; P=0.96).
Bottom line: Pharmacological therapy is superior to ultrafiltration in patients with acute decompensated heart failure and worsened renal function.
Citation: Bart BA, Goldsmith SR, Lee KL, et al. Ultrafiltration in decompensated heart failure with cardiorenal syndrome. N Engl J Med. 2012;367:2296-2304.
Hospitalized Patients Often Receive Too Much Acetaminophen
Clinical question: What are the prevalence and factors associated with supratherapeutic dosing of acetaminophen in hospitalized patients?
Background: Acetaminophen is a commonly used medication that at high doses can be associated with significant adverse events, including liver failure. Considerable efforts have been made in the outpatient setting to limit the risks associated with acetaminophen. Little research has examined acetaminophen exposure in the inpatient setting.
Study design: Retrospective cohort study.
Setting: Two academic tertiary-care hospitals in the U.S.
Synopsis: The authors reviewed the electronic medication administration records of all adult patients admitted to two academic hospitals from June 1, 2010, to Aug. 31, 2010. A total of 14,411 patients (60.7%) were prescribed acetaminophen, of whom 955 (6.6%) were prescribed more than 4 g per day (the maximum recommended daily dose) at least once. In addition, 22.3% of patients older than 65 years and 17.6% of patients with chronic liver disease exceeded their recommended limit of 3 g per day. Half of the supratherapeutic episodes involved doses exceeding 5 g per day, often for several days. In adjusted analyses, scheduled administration (rather than as needed), a diagnosis of osteoarthritis, and higher-strength tablets were all associated with a higher risk of exposure to supratherapeutic doses.
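A minimal sketch of the kind of screening rule this implies, using the thresholds the article reports (4 g/day generally; 3 g/day for age >65 or chronic liver disease); this is illustrative, not the authors' actual analysis code:

```python
# Illustrative dose-screening rule for supratherapeutic acetaminophen exposure.
# Thresholds follow the article; the function itself is a hypothetical sketch.
def daily_limit_mg(age: int, chronic_liver_disease: bool) -> int:
    """3,000 mg/day for age >65 or chronic liver disease; otherwise 4,000 mg/day."""
    return 3000 if (age > 65 or chronic_liver_disease) else 4000

def is_supratherapeutic(doses_mg: list[int], age: int,
                        chronic_liver_disease: bool = False) -> bool:
    """True if one calendar day's administered doses exceed the patient's limit."""
    return sum(doses_mg) > daily_limit_mg(age, chronic_liver_disease)

# Six 1,000-mg doses in a day exceed any limit; four 650-mg doses do not
print(is_supratherapeutic([1000] * 6, age=50))  # True  (6,000 mg > 4,000 mg)
print(is_supratherapeutic([650] * 4, age=70))   # False (2,600 mg < 3,000 mg)
```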
Bottom line: A significant proportion of hospitalized patients are exposed to supratherapeutic dosing of acetaminophen.
Citation: Zhou L, Maviglia SM, Mahoney LM, et al. Supra-therapeutic dosing of acetaminophen among hospitalized patients. Arch Intern Med. 2012;172(22):1721-1728.
Longer Anticoagulation Therapy after Bioprosthetic Aortic Valve Replacement Might Be Beneficial
Clinical question: How long should anticoagulation therapy with warfarin be continued after surgical bioprosthetic aortic valve replacement?
Background: Current guidelines recommend a three-month course of anticoagulation therapy after bioprosthetic aortic valve surgery. However, the appropriate duration of post-operative anticoagulation therapy has not been well established.
Study design: Retrospective cohort study.
Setting: Denmark.
Synopsis: Using data from the Danish National Registries, 4,075 subjects without atrial fibrillation who underwent bioprosthetic aortic valve implantation from 1997 to 2009 were identified. The association between different durations of warfarin therapy after aortic valve implantation and the combined end point of stroke, thromboembolic events, cardiovascular death, or bleeding episodes was examined.
The risk of adverse outcomes was substantially higher for patients not treated with warfarin than for treated patients. The estimated adverse event rate was 7 per 100 person-years for untreated patients versus 2.7 per 100 person-years for warfarin-treated patients (adjusted incidence rate ratio [IRR] 2.46, 95% CI 1.09 to 6.48). Patients not treated with warfarin were at higher risk of cardiovascular death within 30 to 89 days after surgery, with an event rate of 31.7 per 100 person-years versus 3.8 per 100 person-years (adjusted IRR 7.61, 95% CI 4.37 to 13.26). The difference in cardiovascular mortality remained significant from 90 to 179 days after surgery, with an event rate of 6.5 per 100 person-years versus 2.1 per 100 person-years (IRR 3.51, 95% CI 1.54 to 8.03).
Bottom line: Discontinuation of warfarin therapy within six months of bioprosthetic aortic valve replacement is associated with increased cardiovascular death.
Citation: Mérie C, Køber L, Skov Olsen P, et al. Association of warfarin therapy duration after bioprosthetic aortic valve replacement with risk of mortality, thromboembolic complications, and bleeding. JAMA. 2012;308(20):2118-2125.
Limited Evidence for Antimicrobial-Coated Catheters
Clinical question: Does the use of antimicrobial-coated catheters reduce the risk of catheter-associated urinary tract infection (UTI) compared to standard polytetrafluoroethylene (PTFE) catheters?
Background: UTIs associated with indwelling catheters are a major preventable cause of harm for hospitalized patients. Prior studies have shown that catheters made with antimicrobial coatings can reduce rates of bacteriuria, but their usefulness against symptomatic catheter-associated UTIs remains uncertain.
Study design: Multicenter randomized controlled trial.
Setting: Twenty-four hospitals in the United Kingdom.
Synopsis: A total of 7,102 patients older than 16 years undergoing urethral catheterization for an anticipated duration of <14 days were randomly allocated in a 1:1:1 ratio to receive a silver-alloy-coated catheter, a nitrofural-impregnated silicone catheter, or a standard PTFE-coated catheter. The primary outcome was the presence of patient-reported symptoms of UTI plus an antibiotic prescription for UTI. The incidence of symptomatic catheter-associated UTI up to six weeks after randomization did not differ significantly between groups, occurring in 12.6% of the PTFE control group, 12.5% of the silver-alloy group, and 10.6% of the nitrofural group. In secondary outcomes, the nitrofural catheter was associated with a slightly reduced incidence of culture-confirmed symptomatic UTI (absolute risk reduction of 1.4%) and a lower rate of bacteriuria, but also with greater patient-reported discomfort during use and removal.
Bottom line: Antimicrobial-coated catheters do not show a clinically significant benefit over standard PTFE catheters in preventing catheter-associated UTI.
Citation: Pickard R, Lam T, Maclennan G, et al. Antimicrobial catheters for reduction of symptomatic urinary tract infection in adults requiring short-term catheterisation in hospital: a multicentre randomized controlled trial. Lancet. 2012;380:1927-1935.
Outcomes Improve after In-Hospital Cardiac Arrest
Clinical question: Have outcomes after in-hospital cardiac arrest improved with recent advances in resuscitation care?
Background: Over the past decade, quality-improvement (QI) efforts in hospital resuscitation care have included use of mock cardiac arrests, defibrillation by nonmedical personnel, and participation in QI registries. It is unclear what effect these efforts have had on overall survival and neurologic recovery.
Study design: Retrospective cohort study.
Setting: Five hundred fifty-three hospitals in the U.S.
Synopsis: A total of 113,514 patients age >18 years with a cardiac arrest occurring between Jan. 1, 2000, and Nov. 19, 2009, were identified. Analyses were separated by initial rhythm (pulseless electrical activity [PEA]/asystole or ventricular fibrillation/tachycardia). Overall survival to discharge increased significantly to 22.3% in 2009 from 13.7% in 2000, with similar increases within each rhythm group. Rates of acute resuscitation survival (return of spontaneous circulation for at least 20 contiguous minutes after initial arrest) and post-resuscitation survival (survival to discharge among patients surviving acute resuscitation) also improved during the study period. Rates of clinically significant neurologic disability, as defined by cerebral performance scores >1, decreased over time for the overall cohort and the subset with ventricular fibrillation/tachycardia. The study was limited by including only hospitals motivated to participate in a QI registry.
Bottom line: From 2000 to 2009, survival after in-hospital cardiac arrest improved, and rates of clinically significant neurologic disability among survivors decreased.
Citation: Girotra S, Nallamothu B, Spertus J, et al. Trends in survival after in-hospital cardiac arrest. N Engl J Med. 2012;367:1912-1920.
In This Edition
Literature At A Glance
A guide to this month’s studies
- Guidelines on steroids and antivirals to treat Bell’s palsy
- Probiotics to reduce Clostridium difficile-associated diarrhea
- Rates of hemorrhage from warfarin therapy higher in clinical practice
- Less experienced doctors incur higher treatment costs
- Pay-for-performance incentive reduces mortality in England
- No benefit in ultrafiltration to treat acute heart failure
- Hospitalized patients often receive too much acetaminophen
- Longer anticoagulation therapy beneficial after bioprosthetic aortic valve replacement
- Antimicrobial-coated catheters and risk of urinary tract infection
- Patient outcomes improve after in-hospital cardiac arrest
Citation: Pickard R, Lam T, Maclennan G, et al. Antimicrobial catheters for reduction of symptomatic urinary tract infection in adults requiring short-term catheterisation in hospital: a multicentre randomized controlled trial. Lancet. 2012;380:1927-1935.
Outcomes Improve after In-Hospital Cardiac Arrest
Clinical question: Have outcomes after in-hospital cardiac arrest improved with recent advances in resuscitation care?
Background: Over the past decade, quality-improvement (QI) efforts in hospital resuscitation care have included use of mock cardiac arrests, defibrillation by nonmedical personnel, and participation in QI registries. It is unclear what effect these efforts have had on overall survival and neurologic recovery.
Study design: Retrospective cohort study.
Setting: Five hundred fifty-three hospitals in the U.S.
Synopsis: A total of 113,514 patients age >18 with a cardiac arrest occurring from Jan. 1, 2000, to Nov. 19, 2009, were identified. Analyses were separated by initial rhythm (PEA/asystole or ventricular fibrillation/tachycardia). Overall survival to discharge increased significantly to 22.3% in 2009 from 13.7% in 2000, with similar increases within each rhythm group. Rates of acute resuscitation survival (return of spontaneous circulation for at least 20 contiguous minutes after initial arrest) and post-resuscitation survival (survival to discharge among patients surviving acute resuscitation) also improved during the study period. Rates of clinically significant neurologic disability, as defined by cerebral performance scores >1, decreased over time for the overall cohort and the subset with ventricular fibrillation/tachycardia. The study was limited by including only hospitals motivated to participate in a QI registry.
Bottom line: From 2000 to 2009, survival after in-hospital cardiac arrest improved, and rates of clinically significant neurologic disability among survivors decreased.
Citation: Girotra S, Nallamothu B, Spertus J, et al. Trends in survival after in-hospital cardiac arrest. N Engl J Med. 2012;367:1912-1920.
In This Edition
Literature at a Glance
A guide to this month’s studies
- Guidelines on steroids and antivirals to treat Bell’s palsy
- Probiotics to reduce Clostridium difficile-associated diarrhea
- Rates of hemorrhage from warfarin therapy higher in clinical practice
- Less experienced doctors incur higher treatment costs
- Pay-for-performance incentive reduces mortality in England
- No benefit in ultrafiltration to treat acute heart failure
- Hospitalized patients often receive too much acetaminophen
- Longer anticoagulation therapy beneficial after bioprosthetic aortic valve replacement
- Antimicrobial-coated catheters and risk of urinary tract infection
- Patient outcomes improve after in-hospital cardiac arrest
Updated Guidelines on Steroids and Antivirals in Bell’s Palsy
Clinical question: Does the use of steroids and/or antivirals improve recovery in patients with newly diagnosed Bell’s palsy?
Background: The American Academy of Neurology’s last recommendation in 2001 stated that steroids were probably effective and antivirals possibly effective. The current review and recommendations looked at additional studies published since 2000.
Study design: Systematic review of MEDLINE and Cochrane Database of Systematic Reviews data published since June 2000.
Setting: Prospective controlled studies from Germany, Sweden, Scotland, Italy, South Korea, Japan, and Bangladesh.
Synopsis: The authors identified nine studies that fulfilled inclusion criteria. Two of these studies examined treatment with steroids alone and were judged to have the lowest risk for bias. Both studies enrolled patients within three days of symptom onset, continued treatment for 10 days, and demonstrated a significant increase in the probability of complete recovery in patients randomized to steroids (NNT 6-8). Two high-quality studies were identified that looked at the addition of antivirals to steroids. Neither study showed a statistically significant benefit.
Of note, the studies did not quantify the risk of harm from steroid use in patients with comorbidities, such as diabetes. Thus, the authors concluded that in some patients, it would be reasonable to consider limiting steroid use.
Bottom line: For patients with new-onset Bell’s palsy, steroids increase the probability of recovery of facial nerve function. Patients offered antivirals should be counseled that a benefit from antivirals has not been established, and, if there is a benefit, it is modest at best.
Citation: Gronseth GS, Paduga R. Evidence-based guideline update: steroids and antivirals for Bell palsy: report of the Guideline Development Subcommittee of the American Academy of Neurology. Neurology. 2012;79(22):2209-2213.
Probiotic Prophylaxis Reduces Clostridium Difficile-Associated Diarrhea
Clinical question: Are probiotics a safe and efficacious therapy for the prevention of Clostridium difficile-associated diarrhea (CDAD)?
Background: CDAD is the most common cause of hospital-acquired infectious diarrhea in high-income countries. There has been a dramatic rise in the incidence and severity of CDAD since 2002. Previous studies suggested that probiotics might reduce the incidence of CDAD with few adverse events.
Study design: Systematic review and meta-analysis of the literature.
Setting: Randomized controlled trials from the U.S., Canada, Chile, China, United Kingdom, Turkey, Poland, and Sweden.
Synopsis: Using a systematic search, investigators identified 20 randomized controlled trials (3,818 participants) of any probiotic strain given to adult or pediatric subjects treated with antibiotics. Probiotics reduced the incidence of CDAD by 66% (risk ratio 0.34, 95% CI 0.24 to 0.49). Subgroup analyses showed similar results in both adults and children, with both lower and higher doses, and with different probiotic species.
Of probiotic-treated patients, 9.3% experienced an adverse event compared with 12.6% of control patients (relative risk 0.82, 95% CI 0.65 to 1.05). There was no report of any serious adverse events attributable to probiotics.
One limitation is the considerable variability in the reported risk of CDAD in the control group (0% to 40%). The absolute benefit from probiotics will depend on the risk in patients who do not receive prophylaxis.
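To make that dependence on baseline risk concrete, here is a minimal sketch (a hypothetical helper, not part of the published analysis) converting the pooled risk ratio into a number needed to treat at different control-group risks:

```python
def nnt(baseline_risk, risk_ratio):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = baseline_risk * (1 - risk_ratio)  # absolute risk reduction
    return 1 / arr

# Pooled risk ratio for CDAD reported by the meta-analysis
rr = 0.34

# At a 5% control-group risk, roughly 30 patients must receive
# probiotics to prevent one case of CDAD; at 20%, only about 8.
print(round(nnt(0.05, rr)))  # 30
print(round(nnt(0.20, rr)))  # 8
```

The 0% to 40% spread in control-group risk therefore translates into a wide range of plausible NNTs across settings.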
Bottom line: Moderate-quality evidence suggests that probiotic prophylaxis results in a large reduction in C. diff-associated diarrhea without an increase in clinically important adverse events.
Citation: Johnston BC, Ma SSY, Goldenberg JZ, et al. Probiotics for the prevention of Clostridium difficile-associated diarrhea: a systematic review and meta-analysis. Ann Intern Med. 2012;157(12):878-888.
Rates of Hemorrhage from Warfarin Therapy Higher in Clinical Practice
Clinical question: What is the incidence of hemorrhage in a large population-based cohort of patients with atrial fibrillation who have started warfarin therapy?
Background: There is strong evidence that supports the use of warfarin to reduce the risk of stroke and systemic embolism in patients with atrial fibrillation. There are currently no large studies offering real-world, population-based estimates of hemorrhage rates among patients taking warfarin.
Study design: Retrospective cohort study.
Setting: Ontario.
Synopsis: This population-based cohort study included 125,195 residents of Ontario age ≥66 years with atrial fibrillation who started warfarin therapy between 1997 and 2008. Hemorrhage was defined as bleeding requiring an emergency department visit or hospital admission. The overall risk of hemorrhage was 3.8% per person-year, but it was 11.8% in the first 30 days of therapy. For subjects older than 75 years, the overall risk was 4.6%, compared with 2.9% for those between 66 and 75 years.
Most hospital admissions involved gastrointestinal hemorrhages (63%). Almost 1 in 5 people (18%) with hospital admissions for hemorrhages died in the hospital or within seven days of discharge.
Bottom line: Rates of hemorrhage for older patients on warfarin therapy are significantly higher in clinical practice than the rates reported in clinical trials. The difference is likely due to the strict inclusion criteria, younger average age, and close monitoring of patients in clinical trials.
Citation: Gomes T, Mamdani MM, Holbrook AM, Paterson JM, Hellings C, Juurlink DN. Rates of hemorrhage during warfarin therapy for atrial fibrillation. CMAJ. 2013; Jan 21 [Epub ahead of print].
Less Experienced Doctors Incur Higher Treatment Costs
Clinical question: Which physician characteristics are associated with higher cost profiles?
Background: While both public and private insurers increasingly use physician cost profiles to identify physicians whose practice patterns account for more healthcare spending than other physicians, the individual physician characteristics associated with cost-profile performance are unknown.
Study design: Retrospective cohort study.
Setting: Four commercial health plans in Massachusetts.
Synopsis: Insurance claims records of 1.13 million patients aged 18-65 years who were enrolled in one of four commercial health plans in Massachusetts in 2004 and 2005 were matched with the public records of 12,116 doctors, who were stratified into five groups according to years of experience (<10, 10-19, 20-29, 30-39, and ≥40 years).
A strong association was found between physician experience and cost profiles, with the most experienced doctors—40 or more years of experience—providing the least costly care. Costs increased with each successively less experienced group (by 2.5%, 6.5%, 10%, and 13.2% more, respectively, to treat the same condition). No association was found between cost profiles and other physician characteristics, such as having had malpractice claims or disciplinary actions, board certification status, and practice size.
Differences appear to be driven by high-cost outlier patients. While median costs were similar between physicians with different levels of experience, the costs of treating patients at the 95th percentile of cost were much higher among physicians with less experience.
Bottom line: Doctors in this study with the least experience incurred 13.2% greater costs than their most senior counterparts.
Citation: Mehrotra A, Reid RO, Adams JL, Friedberg MW, McGlynn EA, Hussey PS. Physicians with the least experience have higher cost profiles than do physicians with the most experience. Health Aff (Millwood). 2012;31(11):2453-2463.
Pay-For-Performance Incentive Reduces Mortality in England
Clinical question: Do pay-for-performance programs improve quality of care?
Background: Pay-for-performance programs are being widely adopted both internationally and in the U.S. There is, however, limited evidence that these programs improve patient outcomes, and most prior studies have shown modest or inconsistent improvements in quality of care.
Study design: Prospective cohort study.
Setting: National Health Service (NHS) hospitals in northwest England.
Synopsis: The Advanced Quality program, the first hospital-based pay-for-performance program in England, was introduced in October 2004 in all 24 NHS hospitals in northwest England that provide emergency care. The program used a “tournament” system in which only the top-performing hospitals received bonus payments. There was no penalty for poor performers.
The primary endpoint was 30-day in-hospital mortality among patients admitted for pneumonia, heart failure, or acute myocardial infarction. Over the three-year period studied (18 months before and 18 months after introduction of the program), the risk-adjusted mortality for these three conditions decreased significantly, with an absolute reduction of 1.3% (95% CI 0.4 to 2.1%; P=0.006). The largest reduction, for pneumonia, was significant (1.9%, 95% CI 0.9 to 3.0; P<0.001), with nonsignificant reductions for acute myocardial infarction (0.6%, 95% CI -0.4 to 1.7; P=0.23) and heart failure (0.6%, 95% CI -0.6 to 1.8; P=0.30).
Bottom line: The introduction of a pay-for-performance program for all National Health Service hospitals in one region of England was associated with a significant reduction in mortality.
Citation: Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med. 2012;367(19):1821-1828.
Ultrafiltration Shows No Benefit in Acute Heart Failure
Clinical question: Is ultrafiltration superior to pharmacotherapy in the treatment of patients with acute heart failure and cardiorenal syndrome?
Background: Venovenous ultrafiltration is an alternative to diuretic therapy in patients with acute decompensated heart failure and worsened renal function that could allow greater control of the rate of fluid removal and improve outcomes. Little is known about the efficacy and safety of ultrafiltration compared to standard pharmacological therapy.
Study design: Multicenter randomized controlled trial.
Setting: Fourteen clinical centers in the U.S. and Canada.
Synopsis: One hundred eighty-eight patients admitted to a hospital with acute decompensated heart failure and worsened renal function were randomized to stepped pharmacological therapy or ultrafiltration. Ultrafiltration was inferior to pharmacological therapy with respect to the pre-specified primary composite endpoint, the change in serum creatinine level and body weight at 96 hours after enrollment (P=0.003). This difference was primarily due to an increase in the serum creatinine level in the ultrafiltration group (0.23 vs. -0.04 mg/dL; P=0.003). There was no significant difference in weight loss at 96 hours (loss of 5.5 kg vs. 5.7 kg; P=0.58).
A higher percentage of patients in the ultrafiltration group had a serious adverse event over the 60-day follow-up period (72% vs. 57%, P=0.03). There was no significant difference in the composite rate of death or rehospitalization for heart failure in the ultrafiltration group compared to the pharmacologic-therapy group (38% vs. 35%; P=0.96).
Bottom line: Pharmacological therapy is superior to ultrafiltration in patients with acute decompensated heart failure and worsened renal function.
Citation: Bart BA, Goldsmith SR, Lee KL, et al. Ultrafiltration in decompensated heart failure with cardiorenal syndrome. N Engl J Med. 2012;367:2296-2304.
Hospitalized Patients Often Receive Too Much Acetaminophen
Clinical question: What are the prevalence and factors associated with supratherapeutic dosing of acetaminophen in hospitalized patients?
Background: Acetaminophen is a commonly used medication that at high doses can be associated with significant adverse events, including liver failure. Considerable efforts have been made in the outpatient setting to limit the risks associated with acetaminophen. Little research has examined acetaminophen exposure in the inpatient setting.
Study design: Retrospective cohort study.
Setting: Two academic tertiary-care hospitals in the U.S.
Synopsis: The authors reviewed the electronic medication administration records of all adult patients admitted to two academic hospitals from June 1, 2010, to Aug. 31, 2010. A total of 14,411 patients (60.7%) were prescribed acetaminophen, of whom 955 (6.6%) were prescribed more than 4 g per day (the maximum recommended daily dose) at least once. In addition, 22.3% of patients older than 65 years and 17.6% of patients with chronic liver disease exceeded the recommended limit of 3 g per day. Half of the supratherapeutic episodes involved doses exceeding 5 g per day, often for several days. In adjusted analyses, scheduled administration (rather than as needed), a diagnosis of osteoarthritis, and higher-strength tablets were all associated with a higher risk of exposure to supratherapeutic doses.
Bottom line: A significant proportion of hospitalized patients are exposed to supratherapeutic dosing of acetaminophen.
Citation: Zhou L, Maviglia SM, Mahoney LM, et al. Supra-therapeutic dosing of acetaminophen among hospitalized patients. Arch Intern Med. 2012;172(22):1721-1728.
Longer Anticoagulation Therapy after Bioprosthetic Aortic Valve Replacement Might Be Beneficial
Clinical question: How long should anticoagulation therapy with warfarin be continued after surgical bioprosthetic aortic valve replacement?
Background: Current guidelines recommend a three-month course of anticoagulation therapy after bioprosthetic aortic valve surgery. However, the appropriate duration of post-operative anticoagulation therapy has not been well established.
Study design: Retrospective cohort study.
Setting: Denmark.
Synopsis: Using data from the Danish National Registries, 4,075 subjects without atrial fibrillation who underwent bioprosthetic aortic valve implantation from 1997 to 2009 were identified. The association between different durations of warfarin therapy after aortic valve implantation and the combined end point of stroke, thromboembolic events, cardiovascular death, or bleeding episodes was examined.
The risk of adverse outcomes was substantially higher for patients not treated with warfarin compared to treated patients. The estimated adverse event rate was 7 per 100 person-years for untreated patients versus 2.7 per 100 person-years for warfarin-treated patients (adjusted incidence rate ratio [IRR] 2.46, 95% CI 1.09 to 6.48). Patients not treated with warfarin were at higher risk of cardiovascular death within 30 to 89 days after surgery, with an event rate of 31.7 per 100 person-years versus 3.8 per 100 person-years (adjusted IRR 7.61, 95% CI 4.37 to 13.26). The difference in cardiovascular mortality continued to be significant from 90 to 179 days after surgery, with an event rate of 6.5 per 100 person-years versus 2.1 per 100 person-years (IRR 3.51, 95% CI 1.54 to 8.03).
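As a sanity check on the reported ratios, the crude (unadjusted) incidence rate ratios can be recovered directly from the event rates; they differ somewhat from the adjusted IRRs because the latter account for patient characteristics. A small illustrative sketch:

```python
# Event rates per 100 person-years (untreated vs. warfarin-treated),
# taken from the study's reported figures
rates = {
    "overall":     (7.0, 2.7),
    "30-89 days":  (31.7, 3.8),
    "90-179 days": (6.5, 2.1),
}

for period, (untreated, treated) in rates.items():
    crude_irr = untreated / treated  # simple ratio of the two rates
    print(f"{period}: crude IRR = {crude_irr:.2f}")
# Crude values (2.59, 8.34, 3.10) sit near the reported
# adjusted IRRs of 2.46, 7.61, and 3.51.
```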
Bottom line: Discontinuation of warfarin therapy within six months of bioprosthetic aortic valve replacement is associated with increased cardiovascular death.
Citation: Mérie C, Køber L, Skov Olsen P, et al. Association of warfarin therapy duration after bioprosthetic aortic valve replacement with risk of mortality, thromboembolic complications, and bleeding. JAMA. 2012;308(20):2118-2125.
Limited Evidence for Antimicrobial-Coated Catheters
Clinical question: Does the use of antimicrobial-coated catheters reduce the risk of catheter-associated urinary tract infection (UTI) compared to standard polytetrafluoroethylene (PTFE) catheters?
Background: UTIs associated with indwelling catheters are a major preventable cause of harm for hospitalized patients. Prior studies have shown that catheters made with antimicrobial coatings can reduce rates of bacteriuria, but their usefulness against symptomatic catheter-associated UTIs remains uncertain.
Study design: Multicenter randomized controlled trial.
Setting: Twenty-four hospitals in the United Kingdom.
Synopsis: A total of 7,102 patients older than 16 years undergoing urethral catheterization for an anticipated duration of <14 days were randomly allocated in a 1:1:1 ratio to receive a silver-alloy-coated catheter, a nitrofural-impregnated silicone catheter, or a standard PTFE-coated catheter. The primary outcome was defined as the presence of patient-reported symptoms of UTI plus an antibiotic prescription for UTI. The incidence of symptomatic catheter-associated UTI up to six weeks after randomization did not differ significantly between groups, occurring in 12.6% of the PTFE control group, 12.5% of the silver alloy group, and 10.6% of the nitrofural group. In secondary outcomes, the nitrofural catheter was associated with a slightly reduced incidence of culture-confirmed symptomatic UTI (absolute risk reduction of 1.4%) and a lower rate of bacteriuria, but it also caused greater patient-reported discomfort during use and removal.
Bottom line: Antimicrobial-coated catheters do not show a clinically significant benefit over standard PTFE catheters in preventing catheter-associated UTI.
Citation: Pickard R, Lam T, Maclennan G, et al. Antimicrobial catheters for reduction of symptomatic urinary tract infection in adults requiring short-term catheterisation in hospital: a multicentre randomized controlled trial. Lancet. 2012;380:1927-1935.
Outcomes Improve after In-Hospital Cardiac Arrest
Clinical question: Have outcomes after in-hospital cardiac arrest improved with recent advances in resuscitation care?
Background: Over the past decade, quality-improvement (QI) efforts in hospital resuscitation care have included use of mock cardiac arrests, defibrillation by nonmedical personnel, and participation in QI registries. It is unclear what effect these efforts have had on overall survival and neurologic recovery.
Study design: Retrospective cohort study.
Setting: Five hundred fifty-three hospitals in the U.S.
Synopsis: A total of 113,514 patients older than 18 years with a cardiac arrest occurring from Jan. 1, 2000, to Nov. 19, 2009, were identified. Analyses were separated by initial rhythm (PEA/asystole or ventricular fibrillation/tachycardia). Overall survival to discharge increased significantly, to 22.3% in 2009 from 13.7% in 2000, with similar increases within each rhythm group. Rates of acute resuscitation survival (return of spontaneous circulation for at least 20 contiguous minutes after the initial arrest) and post-resuscitation survival (survival to discharge among patients surviving acute resuscitation) also improved during the study period. Rates of clinically significant neurologic disability, defined as cerebral performance scores >1, decreased over time for the overall cohort and for the subset with ventricular fibrillation/tachycardia. The study was limited by including only hospitals motivated to participate in a QI registry.
Bottom line: From 2000 to 2009, survival after in-hospital cardiac arrest improved, and rates of clinically significant neurologic disability among survivors decreased.
Citation: Girotra S, Nallamothu B, Spertus J, et al. Trends in survival after in-hospital cardiac arrest. N Engl J Med. 2012;367:1912-1920.
In the Literature: HM-Related Research You Need to Know
In This Edition
Literature at a Glance
A guide to this month’s studies
- Eplerenone and heart failure mortality
- Fidaxomicin for C. difficile diarrhea
- Guidelines for intensive insulin therapy
- Benefits of hospitalist comanagement
- Peritoneal dialysis versus hemodialysis
- Pneumococcal urinary antigen to guide CAP treatment
- Race and readmission rate
- Factors associated with readmission
- Unplanned transfers to the ICU
Eplerenone Improves Mortality in Patients with Systolic Heart Failure and Mild Symptoms
Clinical question: Does the selective mineralocorticoid antagonist eplerenone improve outcomes in patients with chronic heart failure and mild symptoms?
Background: In prior studies of mineralocorticoid antagonists in systolic heart failure, spironolactone reduced mortality in patients with moderate to severe heart failure symptoms, and eplerenone reduced mortality in patients with acute myocardial infarction complicated by left ventricular dysfunction. The use of eplerenone in patients with systolic heart failure and mild symptoms has not previously been examined.
Study design: Randomized, double-blind, multicenter, placebo-controlled trial.
Setting: Two hundred seventy-eight centers in 29 countries.
Synopsis: The study authors randomized 2,737 patients with New York Heart Association Class II heart failure and an ejection fraction of no more than 35% to either eplerenone (up to 50 mg daily) or placebo, in addition to recommended therapy. Patients with baseline potassium levels >5 mmol/L or an estimated GFR <30 mL/min/1.73 m² were excluded. The primary outcome was a composite of death from cardiovascular causes or hospitalization for heart failure.
The trial was stopped early, after a median follow-up period of 21 months, when an interim analysis showed significant benefit with eplerenone. The primary outcome occurred in 18.3% of patients in the eplerenone group and 25.9% in the placebo group (hazard ratio [HR], 0.63; 95% CI, 0.54 to 0.74; P<0.001). All-cause mortality was 12.5% in the eplerenone group and 15.5% in the placebo group (HR 0.76; 95% CI, 0.62 to 0.93; P=0.008). A serum potassium level exceeding 5.5 mmol/L occurred in 11.8% of patients in the eplerenone group and 7.2% of those in the placebo group (P<0.001).
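The absolute event rates above also translate into an approximate number needed to treat over the trial's median follow-up (a crude back-of-the-envelope calculation from the reported percentages, not an analysis performed by the trial):

```python
import math

def nnt_from_rates(control_rate, treatment_rate):
    """Crude NNT = 1 / absolute risk reduction, rounded up."""
    return math.ceil(1 / (control_rate - treatment_rate))

# Primary composite endpoint: 25.9% placebo vs. 18.3% eplerenone
print(nnt_from_rates(0.259, 0.183))  # 14
# All-cause mortality: 15.5% placebo vs. 12.5% eplerenone
print(nnt_from_rates(0.155, 0.125))  # 34
```

In other words, on these figures roughly 14 patients would need eplerenone over the follow-up period to prevent one primary-endpoint event.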
Bottom line: Eplerenone reduces both the risk of death and the risk of hospitalization in patients with systolic heart failure and mild symptoms.
Citation: Zannad F, McMurray JJ, Krum H, et al. Eplerenone in patients with systolic heart failure and mild symptoms. N Engl J Med. 2011;364(1):11-21.
Fidaxomicin Noninferior to Vancomycin for C. Difficile Treatment
Clinical question: What is the safety and efficacy of fidaxomicin compared to vancomycin in the treatment of patients with C. difficile infection?
Background: Fidaxomicin, a new macrocyclic antibiotic, has shown high efficacy in vitro against C. diff, minimal systemic absorption, and a narrow-spectrum profile. In previously published Phase 2 trials of fidaxomicin for the treatment of C. diff, it has been associated with good clinical response and low recurrence rates.
Study design: Prospective, multicenter, double-blind, randomized trial.
Setting: Fifty-two sites in the United States and 15 in Canada.
Synopsis: The study included 629 adults with acute symptoms of C. diff and a positive stool toxin test. The patients were randomly assigned to 200-mg twice-daily fidaxomicin or 125-mg four-times-daily vancomycin for a course of 10 days. The primary endpoint was rate of clinical cure (resolution of diarrhea), and secondary endpoints were recurrence of C. diff and global cure (clinical cure and lack of relapse within four weeks of cessation of therapy).
The rate of clinical cure associated with fidaxomicin was noninferior to that associated with vancomycin (88.2% vs. 85.8%, respectively). Patients receiving fidaxomicin had a lower rate of relapse than those receiving vancomycin (15.4% vs. 25.3%, respectively; P=0.005) and a higher global cure rate (74.6% vs. 61.1%; P=0.006). In subgroup analysis, the lower rate of recurrence was seen in patients infected with strains other than the North American pulsed-field type 1 strain (NAP1/BI/027); in patients with the NAP1/BI/027 strain, the recurrence rate was similar for both drugs. There was no difference in adverse event rates.
Bottom line: Clinical cure rates of C. diff with fidaxomicin are noninferior to those with vancomycin; however, fidaxomicin is associated with a significantly lower rate of recurrence among those infected with the non-NAP1/BI/027 strain.
Citation: Louie TJ, Miller MA, Mullane KM, et al. Fidaxomicin versus vancomycin for Clostridium difficile infection. N Engl J Med. 2011;364(5):422-431.
ACP Guideline Discourages Use of Intensive Insulin Therapy in Hospitalized Patients
Clinical question: Does the use of intensive insulin therapy (IIT) to achieve tight glycemic control in hospitalized patients (whether in the SICU, MICU, or on the general medicine floor) improve important health outcomes?
Background: Hyperglycemia is a common finding in hospitalized patients and is associated with prolonged length of stay (LOS), death, and worsening health outcomes. Despite this, prospective studies have yet to provide consistent evidence that using IIT to achieve strict glycemic control (80 mg/dL-110 mg/dL) improves outcomes in hospitalized patients.
Study design: Systematic review of MEDLINE and the Cochrane Database of Systematic Reviews from 1950 to January 2010.
Setting: Trials included subjects with myocardial infarction, stroke, and brain injury, as well as those in perioperative settings and ICUs.
Synopsis: The review informing this guideline meta-analyzed 21 trials and found that IIT did not improve short-term mortality, long-term mortality, infection rates, LOS, or the need for renal replacement therapy. Furthermore, IIT was associated with a sixfold increase in risk for severe hypoglycemia in all hospital settings.
Based on these findings, the American College of Physicians (ACP) issued three recommendations:
- To not use IIT to strictly control blood glucose in non-SICU/non-MICU patients with or without diabetes (strong recommendation, moderate-quality evidence);
- To not use IIT to normalize blood glucose in SICU or MICU patients with or without diabetes (strong recommendation, high-quality evidence); and
- To consider a target blood glucose level of 140 to 200 mg/dL if insulin therapy is used in SICU or MICU patients (weak recommendation, moderate-quality evidence).
Bottom line: The ACP recommends against using IIT to strictly control blood glucose (80-110 mg/dL) in hospitalized patients, whether in the SICU, MICU, or on the general medicine floor.
Citation: Qaseem A, Humphrey LL, Chou R, Snow V, Shekelle P. Use of intensive insulin therapy for the management of glycemic control in hospitalized patients: a clinical practice guideline from the American College of Physicians. Ann Intern Med. 2011;154(4):260-267.
Limited Benefits Seen with Hospitalist-Neurosurgeon Comanagement
Clinical question: Does hospitalist-neurosurgeon comanagement improve patient outcomes?
Background: The shared management of surgical patients between surgeons and hospitalists is increasingly common despite limited data supporting its effectiveness in reducing costs or improving patient outcomes.
Study design: Single-center, retrospective study.
Setting: Tertiary-care academic medical center.
Synopsis: Data were collected on the 7,596 patients who were admitted to the neurosurgical service of the University of California San Francisco Medical Center from June 1, 2005, to December 31, 2008. The study looked at 4,203 patients (55.3%) admitted before July 1, 2007, when hospitalist comanagement was implemented, and 3,393 patients (44.7%) after comanagement began. Of those admitted during the post-implementation period, 988 (29.1%) were comanaged.
After adjusting for patient characteristics and background trends, and accounting for clustering at the physician level, no differences were found in patient mortality rate, readmissions, or LOS after implementation of comanagement. No consistent improvements were seen in patient satisfaction.
However, physician and staff perceptions of safety and quality of care were significantly better after comanagement. There was a moderate decrease in adjusted hospital costs after implementation (adjusted cost ratio 0.94, range 0.88-1.00) equivalent to a cost savings of about $1,439 per hospitalization.
Bottom line: The implementation of a hospitalist-neurosurgery comanagement service did not improve patient outcomes or satisfaction, but it did appear to improve providers’ perception of care quality and reduce hospital costs.
Citation: Auerbach AD, Wachter RM, Cheng HQ, et al. Comanagement of surgical patients between neurosurgeons and hospitalists. Arch Intern Med. 2010;170(22):2004-2010.
Comparable Mortality Between Hemodialysis and Peritoneal Dialysis
Clinical question: What effect does the initial dialysis modality used have on mortality for patients with end-stage renal disease (ESRD)?
Background: Despite the substantially lower annual per-person costs of peritoneal dialysis (PD) as compared with hemodialysis (HD), only 7% of dialysis patients were treated with PD in 2008. It is unknown whether there are differences in mortality between those using PD and HD when examined in contemporary cohorts.
Study design: Retrospective cohort study.
Setting: National cohort.
Synopsis: Data for patients with incident ESRD over a nine-year period were obtained from the U.S. Renal Data Systems (USRDS), a national registry of all patients with ESRD. Initial dialysis modality was defined as the dialysis modality used 90 days after initiation of dialysis. Patients were divided into three three-year cohorts (1996-1998, 1999-2001, and 2002-2004) based on the date dialysis was initiated and followed for up to five years.
A substantial and consistent reduction in mortality was seen for PD patients across the three time periods. No such improvements were observed across the time periods for the HD patients. PD patients were, on average, younger, healthier, and more likely to be white. In an analysis of the most recent cohort adjusting for these factors, there was no significant difference in the risk of death between HD and PD patients. The median life expectancy of HD and PD patients was 38.4 and 36.6 months, respectively.
Limitations of the study include a lack of randomization and failure to consider switches from one dialysis modality to the other.
Bottom line: Patients beginning their renal replacement therapy with PD had similar mortality after five years compared to patients using in-center HD.
Citation: Mehrotra R, Chiu YW, Kalantar-Zadeh K, Bargman J, Vonesh E. Similar outcomes with hemodialysis and peritoneal dialysis in patients with end-stage renal disease. Arch Intern Med. 2011;171(2):110-118.
Pneumococcal Urinary Antigen Test Might Guide Community-Acquired Pneumonia Treatment
Clinical question: What is the diagnostic accuracy and clinical utility of pneumococcal urinary antigen testing in adult patients hospitalized with community-acquired pneumonia (CAP)?
Background: Although CAP is common, our ability to determine its etiology is limited, and empirical broad-spectrum antibiotic therapy is the norm. Pneumococcal urinary antigen testing could allow for the more frequent use of narrow-spectrum pathogen-focused antibiotic therapy.
Study design: Prospective cohort study.
Setting: University-affiliated hospital in Spain.
Synopsis: This study included consecutive adult patients hospitalized with CAP from February 2007 though January 2008. A total of 464 patients with 474 episodes of CAP were included. Pneumococcal urinary antigen testing was performed in 383 (80.8%) episodes of CAP. Streptococcus pneumoniae was felt to be the causative pathogen in 171 cases (36.1%). It was detected exclusively by urinary antigen test in 75 of those cases (43.8%).
For the urine antigen test, specificity was 96% (95% CI, 86.5 to 99.5), and the positive predictive value was 96.5% (95% CI, 87.9 to 99.5). The results of the test led clinicians to reduce the spectrum of antibiotics in 41 patients, and pneumonia was cured in all 41 of these patients. Treatment was not modified despite positive antigen test results in 89 patients.
Limitations of this study include a lack of complete microbiological data for all patients. The study also highlighted the difficulty in changing clinicians’ prescribing patterns, even when test results indicate the need for treatment modification.
Bottom line: A positive pneumococcal urinary antigen test result in adult patients hospitalized with CAP can help clinicians narrow antimicrobial therapy with good clinical outcomes.
Citation: Sordé R, Falcó V, Lowak M, et al. Current and potential usefulness of pneumococcal urinary antigen detection in hospitalized patients with community-acquired pneumonia to guide antimicrobial therapy. Arch Intern Med. 2011;171(2):166-172.
Racial Disparities Detected in Hospital Readmission Rates
Clinical question: Do black patients have higher odds of readmission than white patients, and, if so, are these disparities related to where black patients receive care?
Background: Racial disparities in healthcare are well documented. Understanding and eliminating those disparities remains a national priority. Reducing hospital readmissions also is a policy focus, as it represents an opportunity to improve quality while reducing costs. Whether there are racial disparities in hospital readmissions at the national level is unknown.
Study design: Retrospective cohort study.
Setting: Medicare fee-for-service beneficiaries from 2006 to 2008.
Synopsis: Medicare discharge data for more than 3 million Medicare fee-for-service beneficiaries aged 65 years or older discharged from January 1, 2006, to November 30, 2008, with the primary discharge diagnosis of acute myocardial infarction (MI), congestive heart failure, or pneumonia were used to calculate risk-adjusted odds of readmission within 30 days of discharge. Hospitals in the highest decile of proportion of black patients were categorized as minority-serving.
Overall, black patients had 13% higher odds of all-cause 30-day readmission than white patients (24.8% vs. 22.6%, OR 1.13, 95% CI, 1.11-1.14), and patients discharged from minority-serving hospitals had 23% higher odds of readmission than patients from non-minority-serving hospitals (25.5% vs. 22.0%, OR 1.23, 95% CI, 1.20-1.27). Among those with acute MI, black patients had 13% higher odds of readmission (OR 1.13, 95% CI, 1.10-1.16), irrespective of the site of care, while patients from minority-serving hospitals had 22% higher odds of readmissions (OR 1.22, 95% CI, 1.17-1.27), even after adjusting for patient race. Similar disparities were seen for CHF and pneumonia. Results were unchanged after adjusting for hospital characteristics, including markers of caring for poor patients.
Bottom line: Compared with white patients, elderly black Medicare patients have a higher 30-day hospital readmission rate for MI, CHF, and pneumonia that is not fully explained by the higher readmission rates seen among hospitals that disproportionately care for black patients.
Citation: Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681.
Easily Identifiable Clinical and Demographic Factors Associated with Hospital Readmission
Clinical question: Which clinical, operational, or demographic factors are associated with 30-day readmission for general medicine patients?
Background: While a few clinical risk factors for hospital readmission have been well defined in subgroups of inpatients, there are still limited data regarding readmission risk that might be associated with a broad range of operational, demographic, and clinical factors in a heterogeneous population of general medicine patients.
Study design: Retrospective observational study.
Setting: Single academic medical center.
Synopsis: The study examined more than 10,300 consecutive admissions (6,805 patients) discharged over a two-year period from 2006 to 2008 from the general medicine service of an urban academic medical center. The 30-day readmission rate was 17.0%.
In multivariate analysis, factors associated with readmission included black race (OR 1.43, 95% CI, 1.24-1.65), inpatient use of narcotics (OR 1.33, 95% CI, 1.16-1.53) and corticosteroids (OR 1.24, 95% CI, 1.09-1.42), and the disease states of cancer (with metastasis 1.61, 95% CI, 1.33-1.95; without metastasis 1.95, 95% CI 1.54-2.47), renal failure (OR 1.19, 95% CI 1.05-1.36), congestive heart failure (OR 1.30, 95% CI, 1.09-1.56), and weight loss (OR 1.26, 95% CI, 1.09-1.47). Medicaid payor status (OR 1.15, 95% CI, 0.97-1.36) had a trend toward readmission. None of the operational factors were significantly associated with readmission, including discharge to skilled nursing facility or weekend discharge.
A major limitation of the study was its inability to capture readmissions to hospitals other than the study hospital, which, based on prior studies, could have accounted for nearly a quarter of readmissions.
Bottom line: Readmission of general medicine patients within 30 days is common and associated with several easily identifiable clinical and nonclinical factors.
Citation: Allaudeen N, Vidyarthi A, Maselli J, Auerbach A. Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54-60.
Unplanned Medical ICU Transfers Tied to Preventable Errors
Clinical question: What fraction of unplanned medical ICU (MICU) transfers result from errors in care and why do they occur?
Background: Prior studies have suggested that 14% to 28% of patients admitted to the MICU are unplanned transfers. It is not known what fraction of these transfers result from errors in care, and whether these transfers could be prevented.
Study design: Retrospective cohort study.
Setting: University-affiliated academic medical center.
Synopsis: All unplanned transfers to the MICU from June 1, 2005, to May 30, 2006, were included in the study. Three independent observers, all hospitalists for more than three years, reviewed patient records to determine the cause of unplanned transfers according to a taxonomy the researchers developed for classifying the transfers. They also determined whether the transfer could have been prevented.
Of the 4,468 general medicine admissions during the study period, 152 met inclusion criteria for an unplanned MICU transfer. Errors in care were judged to account for 19% (n=29) of unplanned transfers, 15 of which were due to incorrect triage at admission and 14 to iatrogenic errors, such as opiate overdose during pain treatment or delayed treatment. All 15 triage errors were considered preventable. Of the iatrogenic errors, eight were considered preventable through an earlier intervention. Overall, 23 errors (15%) were thought to be preventable. Observer agreement was moderate to almost perfect (κ0.55-0.90).
Bottom line: Nearly 1 in 7 unplanned transfers to the medical ICU are associated with preventable errors in care, with the most common error being inappropriate admission triage.
Citation: Bapoje SR, Gaudiani JL, Narayanan V, Albert RK. Unplanned transfers to a medical intensive care unit: causes and relationship to preventable errors in care. J Hosp Med. 2011;6(2):68-72. TH
In This Edition
Literature at a Glance
A guide to this month’s studies
- Eplerenone and heart failure mortality
- Fidaxomicin for C. difficile diarrhea
- Guidelines for intensive insulin therapy
- Benefits of hospitalist comanagement
- Peritoneal dialysis versus hemodialysis
- Pneumococcal urinary antigen to guide CAP treatment
- Race and readmission rate
- Factors associated with readmission
- Unplanned transfers to the ICU
Eplerenone Reduces Mortality in Patients with Systolic Heart Failure and Mild Symptoms
Clinical question: Does the selective mineralocorticoid antagonist eplerenone improve outcomes in patients with chronic heart failure and mild symptoms?
Background: In prior studies of mineralocorticoid antagonists in systolic heart failure, spironolactone reduced mortality in patients with moderate to severe heart failure symptoms, and eplerenone reduced mortality in patients with acute myocardial infarction complicated by left ventricular dysfunction. The use of eplerenone in patients with systolic heart failure and mild symptoms has not previously been examined.
Study design: Randomized, double-blind, multicenter, placebo-controlled trial.
Setting: Two hundred seventy-eight centers in 29 countries.
Synopsis: The study authors randomized 2,737 patients with New York Heart Association Class II heart failure and an ejection fraction of no more than 35% to either eplerenone (up to 50 mg daily) or placebo, in addition to recommended therapy. Patients with baseline potassium levels >5 mmol/L or estimated GFR <30 mL/min/1.73 m² were excluded. The primary outcome was a composite of death from cardiovascular causes or hospitalization for heart failure.
The trial was stopped early, after a median follow-up period of 21 months, when an interim analysis showed significant benefit with eplerenone. The primary outcome occurred in 18.3% of patients in the eplerenone group and 25.9% in the placebo group (hazard ratio [HR], 0.63; 95% CI, 0.54 to 0.74; P<0.001). All-cause mortality was 12.5% in the eplerenone group and 15.5% in the placebo group (HR 0.76; 95% CI, 0.62 to 0.93; P=0.008). A serum potassium level exceeding 5.5 mmol/L occurred in 11.8% of patients in the eplerenone group and 7.2% of those in the placebo group (P<0.001).
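The absolute impact can be estimated from the event rates quoted above. A minimal sketch, using only the percentages reported here (the trial's primary analysis uses hazard ratios, so this back-of-the-envelope calculation is an approximation):

```python
def arr_and_nnt(control_rate, treatment_rate):
    """Absolute risk reduction and number needed to treat from raw event rates."""
    arr = control_rate - treatment_rate
    nnt = 1 / arr
    return arr, nnt

# Primary outcome: 25.9% with placebo vs. 18.3% with eplerenone
arr, nnt = arr_and_nnt(0.259, 0.183)
print(f"ARR = {arr:.1%}, NNT ≈ {nnt:.0f}")  # prints "ARR = 7.6%, NNT ≈ 13"
```

Roughly 13 patients would need to be treated over the 21-month median follow-up to prevent one primary-outcome event, under the simplifying assumption that the quoted percentages behave as simple risks.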
Bottom line: Eplerenone reduces both the risk of death and the risk of hospitalization in patients with systolic heart failure and mild symptoms.
Citation: Zannad F, McMurray JJ, Krum H, et al. Eplerenone in patients with systolic heart failure and mild symptoms. N Engl J Med. 2011;364(1):11-21.
Fidaxomicin Noninferior to Vancomycin for C. Difficile Treatment
Clinical question: What is the safety and efficacy of fidaxomicin compared to vancomycin in the treatment of patients with C. difficile infection?
Background: Fidaxomicin, a new macrocyclic antibiotic, has shown high efficacy in vitro against C. diff, minimal systemic absorption, and a narrow-spectrum profile. In previously published Phase 2 trials of fidaxomicin for the treatment of C. diff, it has been associated with good clinical response and low recurrence rates.
Study design: Prospective, multicenter, double-blind, randomized trial.
Setting: Fifty-two sites in the United States and 15 in Canada.
Synopsis: The study included 629 adults with acute symptoms of C. diff and a positive stool toxin test. The patients were randomly assigned to 200-mg twice-daily fidaxomicin or 125-mg four-times-daily vancomycin for a course of 10 days. The primary endpoint was rate of clinical cure (resolution of diarrhea), and secondary endpoints were recurrence of C. diff and global cure (clinical cure and lack of relapse within four weeks of cessation of therapy).
The rate of clinical cure associated with fidaxomicin was noninferior to that associated with vancomycin (88.2% vs. 85.8%, respectively). Patients receiving fidaxomicin had a lower rate of relapse than those receiving vancomycin (15.4% vs. 25.3%, respectively, P=0.005) and a higher global cure rate (74.6% vs. 61.1%, P=0.006). In subgroup analysis, the lower recurrence rate was seen only in patients infected with strains other than the North American pulsed-field type 1 (NAP1/BI/027) strain; among patients with the NAP1/BI/027 strain, recurrence rates were similar for the two drugs. There was no difference in adverse event rates.
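The noninferiority claim rests on the confidence interval for the difference in cure rates staying above a prespecified margin. A sketch of that logic, in which the per-arm sample sizes (~300 each) and the 10-percentage-point margin are assumptions for illustration, not figures from the trial report:

```python
import math

# Cure rates are from the trial; per-arm n and the margin are assumed
p_fidax, p_vanco, n_per_arm = 0.882, 0.858, 300
diff = p_fidax - p_vanco
# Wald standard error for a difference of two independent proportions
se = math.sqrt(p_fidax * (1 - p_fidax) / n_per_arm +
               p_vanco * (1 - p_vanco) / n_per_arm)
lower = diff - 1.96 * se  # lower bound of the 95% CI for the difference
margin = -0.10            # assumed noninferiority margin
print("noninferior" if lower > margin else "inconclusive")
```

Here the lower bound (about -3 percentage points) clears the assumed -10-point margin; this is the shape of the reasoning behind a published noninferiority conclusion.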
Bottom line: Clinical cure rates of C. diff with fidaxomicin are noninferior to those with vancomycin; however, fidaxomicin is associated with a significantly lower rate of recurrence among those infected with the non-NAP1/BI/027 strain.
Citation: Louie TJ, Miller MA, Mullane KM, et al. Fidaxomicin versus vancomycin for Clostridium difficile infection. N Engl J Med. 2011;364(5):422-431.
ACP Guideline Discourages Use of Intensive Insulin Therapy in Hospitalized Patients
Clinical question: Does the use of intensive insulin therapy (IIT) to achieve tight glycemic control in hospitalized patients (whether in the SICU, MICU, or on the general medicine floor) improve important health outcomes?
Background: Hyperglycemia is a common finding in hospitalized patients and is associated with prolonged length of stay (LOS), death, and worsening health outcomes. Despite this, prospective studies have yet to provide consistent evidence that using IIT to achieve strict glycemic control (80 mg/dL-110 mg/dL) improves outcomes in hospitalized patients.
Study design: Systematic review of MEDLINE and the Cochrane Database of Systematic Reviews from 1950 to January 2010.
Setting: Trials included subjects with myocardial infarction, stroke, and brain injury, as well as those in perioperative settings and ICUs.
Synopsis: The review informing this guideline meta-analyzed 21 trials and found that IIT did not improve short-term mortality, long-term mortality, infection rates, LOS, or the need for renal replacement therapy. Furthermore, IIT was associated with a sixfold increase in risk for severe hypoglycemia in all hospital settings.
Based on these findings, the American College of Physicians (ACP) issued three recommendations:
- To not use IIT to strictly control blood glucose in non-SICU/non-MICU patients with or without diabetes (strong recommendation, moderate-quality evidence);
- To not use IIT to normalize blood glucose in SICU or MICU patients with or without diabetes (strong recommendation, high-quality evidence); and
- To consider a target blood glucose level of 140 mg/dL to 200 mg/dL if insulin therapy is used in SICU or MICU patients (weak recommendation, moderate-quality evidence).
Bottom line: The ACP recommends against using IIT to strictly control blood glucose (80 mg/dL-110 mg/dL) in hospitalized patients, whether in the SICU, MICU, or on the general medicine floor.
Citation: Qaseem A, Humphrey LL, Chou R, Snow V, Shekelle P. Use of intensive insulin therapy for the management of glycemic control in hospitalized patients: a clinical practice guideline from the American College of Physicians. Ann Intern Med. 2011;154(4):260-267.
Limited Benefits Seen with Hospitalist-Neurosurgeon Comanagement
Clinical question: Does hospitalist-neurosurgeon comanagement improve patient outcomes?
Background: The shared management of surgical patients between surgeons and hospitalists is increasingly common despite limited data supporting its effectiveness in reducing costs or improving patient outcomes.
Study design: Single-center, retrospective study.
Setting: Tertiary-care academic medical center.
Synopsis: Data were collected on the 7,596 patients who were admitted to the neurosurgical service of the University of California San Francisco Medical Center from June 1, 2005, to December 31, 2008. The study looked at 4,203 patients (55.3%) admitted before July 1, 2007, when hospitalist comanagement was implemented, and 3,393 patients (44.7%) after comanagement began. Of those admitted during the post-implementation period, 988 (29.1%) were comanaged.
After adjusting for patient characteristics and background trends, and accounting for clustering at the physician level, no differences were found in patient mortality rate, readmissions, or LOS after implementation of comanagement. No consistent improvements were seen in patient satisfaction.
However, physician and staff perceptions of safety and quality of care were significantly better after comanagement. There was a moderate decrease in adjusted hospital costs after implementation (adjusted cost ratio 0.94, range 0.88-1.00) equivalent to a cost savings of about $1,439 per hospitalization.
Bottom line: The implementation of a hospitalist-neurosurgery comanagement service did not improve patient outcomes or satisfaction, but it did appear to improve providers’ perception of care quality and reduce hospital costs.
Citation: Auerbach AD, Wachter RM, Cheng HQ, et al. Comanagement of surgical patients between neurosurgeons and hospitalists. Arch Intern Med. 2010;170(22):2004-2010.
Comparable Mortality Between Hemodialysis and Peritoneal Dialysis
Clinical question: What effect does the initial dialysis modality used have on mortality for patients with end-stage renal disease (ESRD)?
Background: Despite the substantially lower annual per-person costs of peritoneal dialysis (PD) as compared with hemodialysis (HD), only 7% of dialysis patients were treated with PD in 2008. It is unknown whether there are differences in mortality between those using PD and HD when examined in contemporary cohorts.
Study design: Retrospective cohort study.
Setting: National cohort.
Synopsis: Data for patients with incident ESRD over a nine-year period were obtained from the U.S. Renal Data Systems (USRDS), a national registry of all patients with ESRD. Initial dialysis modality was defined as the dialysis modality used 90 days after initiation of dialysis. Patients were divided into three three-year cohorts (1996-1998, 1999-2001, and 2002-2004) based on the date dialysis was initiated and followed for up to five years.
A substantial and consistent reduction in mortality was seen for PD patients across the three time periods. No such improvements were observed across the time periods for the HD patients. PD patients were, on average, younger, healthier, and more likely to be white. In an analysis of the most recent cohort adjusting for these factors, there was no significant difference in the risk of death between HD and PD patients. The median life expectancy of HD and PD patients was 38.4 and 36.6 months, respectively.
Limitations of the study include a lack of randomization and failure to consider switches from one dialysis modality to the other.
Bottom line: Patients beginning their renal replacement therapy with PD had similar mortality after five years compared to patients using in-center HD.
Citation: Mehrotra R, Chiu YW, Kalantar-Zadeh K, Bargman J, Vonesh E. Similar outcomes with hemodialysis and peritoneal dialysis in patients with end-stage renal disease. Arch Intern Med. 2011;171(2):110-118.
Pneumococcal Urinary Antigen Test Might Guide Community-Acquired Pneumonia Treatment
Clinical question: What is the diagnostic accuracy and clinical utility of pneumococcal urinary antigen testing in adult patients hospitalized with community-acquired pneumonia (CAP)?
Background: Although CAP is common, our ability to determine its etiology is limited, and empirical broad-spectrum antibiotic therapy is the norm. Pneumococcal urinary antigen testing could allow for the more frequent use of narrow-spectrum pathogen-focused antibiotic therapy.
Study design: Prospective cohort study.
Setting: University-affiliated hospital in Spain.
Synopsis: This study included consecutive adult patients hospitalized with CAP from February 2007 through January 2008. A total of 464 patients with 474 episodes of CAP were included. Pneumococcal urinary antigen testing was performed in 383 (80.8%) episodes of CAP. Streptococcus pneumoniae was felt to be the causative pathogen in 171 cases (36.1%). It was detected exclusively by urinary antigen test in 75 of those cases (43.8%).
For the urine antigen test, specificity was 96% (95% CI, 86.5 to 99.5), and the positive predictive value was 96.5% (95% CI, 87.9 to 99.5). The results of the test led clinicians to reduce the spectrum of antibiotics in 41 patients, and pneumonia was cured in all 41 of these patients. Treatment was not modified despite positive antigen test results in 89 patients.
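Predictive values depend on disease prevalence as well as test characteristics. A sketch of that relationship via Bayes' rule, in which the sensitivity of 0.70 is a hypothetical placeholder (the summary above reports only specificity and PPV) and the prevalence matches the 36.1% of episodes attributed to S. pneumoniae:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Specificity 0.96 is from the study; sensitivity 0.70 is assumed
print(round(ppv(0.70, 0.96, 0.361), 3))  # prints 0.908
```

Even with modest assumed sensitivity, the high specificity keeps the PPV high at this prevalence; in a low-prevalence population, the same test would yield far more false positives per true positive.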
Limitations of this study include a lack of complete microbiological data for all patients. The study also highlighted the difficulty in changing clinicians’ prescribing patterns, even when test results indicate the need for treatment modification.
Bottom line: A positive pneumococcal urinary antigen test result in adult patients hospitalized with CAP can help clinicians narrow antimicrobial therapy with good clinical outcomes.
Citation: Sordé R, Falcó V, Lowak M, et al. Current and potential usefulness of pneumococcal urinary antigen detection in hospitalized patients with community-acquired pneumonia to guide antimicrobial therapy. Arch Intern Med. 2011;171(2):166-172.
Racial Disparities Detected in Hospital Readmission Rates
Clinical question: Do black patients have higher odds of readmission than white patients, and, if so, are these disparities related to where black patients receive care?
Background: Racial disparities in healthcare are well documented. Understanding and eliminating those disparities remains a national priority. Reducing hospital readmissions also is a policy focus, as it represents an opportunity to improve quality while reducing costs. Whether there are racial disparities in hospital readmissions at the national level is unknown.
Study design: Retrospective cohort study.
Setting: Medicare fee-for-service beneficiaries from 2006 to 2008.
Synopsis: Medicare discharge data for more than 3 million Medicare fee-for-service beneficiaries aged 65 years or older discharged from January 1, 2006, to November 30, 2008, with the primary discharge diagnosis of acute myocardial infarction (MI), congestive heart failure, or pneumonia were used to calculate risk-adjusted odds of readmission within 30 days of discharge. Hospitals in the highest decile of proportion of black patients were categorized as minority-serving.
Overall, black patients had 13% higher odds of all-cause 30-day readmission than white patients (24.8% vs. 22.6%, OR 1.13, 95% CI, 1.11-1.14), and patients discharged from minority-serving hospitals had 23% higher odds of readmission than patients from non-minority-serving hospitals (25.5% vs. 22.0%, OR 1.23, 95% CI, 1.20-1.27). Among those with acute MI, black patients had 13% higher odds of readmission (OR 1.13, 95% CI, 1.10-1.16), irrespective of the site of care, while patients from minority-serving hospitals had 22% higher odds of readmissions (OR 1.22, 95% CI, 1.17-1.27), even after adjusting for patient race. Similar disparities were seen for CHF and pneumonia. Results were unchanged after adjusting for hospital characteristics, including markers of caring for poor patients.
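As a consistency check on these figures, the crude odds ratio can be recomputed from the quoted readmission rates (the published estimates are risk-adjusted, so exact agreement is not guaranteed):

```python
def odds_ratio(rate_a, rate_b):
    """Crude odds ratio from two event rates expressed as proportions."""
    odds = lambda p: p / (1 - p)
    return odds(rate_a) / odds(rate_b)

# Black vs. white patients: 24.8% vs. 22.6% all-cause 30-day readmission
print(round(odds_ratio(0.248, 0.226), 2))  # prints 1.13, matching the reported OR
```

Note that the odds ratio (1.13) is close to, but not the same as, the risk ratio (0.248/0.226 ≈ 1.10); the two diverge further as event rates rise.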
Bottom line: Compared with white patients, elderly black Medicare patients have a higher 30-day hospital readmission rate for MI, CHF, and pneumonia that is not fully explained by the higher readmission rates seen among hospitals that disproportionately care for black patients.
Citation: Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681.
Easily Identifiable Clinical and Demographic Factors Associated with Hospital Readmission
Clinical question: Which clinical, operational, or demographic factors are associated with 30-day readmission for general medicine patients?
Background: While a few clinical risk factors for hospital readmission have been well defined in subgroups of inpatients, there are still limited data regarding readmission risk that might be associated with a broad range of operational, demographic, and clinical factors in a heterogeneous population of general medicine patients.
Study design: Retrospective observational study.
Setting: Single academic medical center.
Synopsis: The study examined more than 10,300 consecutive admissions (6,805 patients) discharged over a two-year period from 2006 to 2008 from the general medicine service of an urban academic medical center. The 30-day readmission rate was 17.0%.
In multivariate analysis, factors associated with readmission included black race (OR 1.43, 95% CI, 1.24-1.65), inpatient use of narcotics (OR 1.33, 95% CI, 1.16-1.53) and corticosteroids (OR 1.24, 95% CI, 1.09-1.42), and the disease states of cancer (with metastasis OR 1.61, 95% CI, 1.33-1.95; without metastasis OR 1.95, 95% CI, 1.54-2.47), renal failure (OR 1.19, 95% CI, 1.05-1.36), congestive heart failure (OR 1.30, 95% CI, 1.09-1.56), and weight loss (OR 1.26, 95% CI, 1.09-1.47). Medicaid payer status showed a trend toward readmission (OR 1.15, 95% CI, 0.97-1.36). None of the operational factors, including discharge to a skilled nursing facility or weekend discharge, was significantly associated with readmission.
A major limitation of the study was its inability to capture readmissions to hospitals other than the study hospital, which, based on prior studies, could have accounted for nearly a quarter of readmissions.
Bottom line: Readmission of general medicine patients within 30 days is common and associated with several easily identifiable clinical and nonclinical factors.
Citation: Allaudeen N, Vidyarthi A, Maselli J, Auerbach A. Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54-60.
Unplanned Medical ICU Transfers Tied to Preventable Errors
Clinical question: What fraction of unplanned medical ICU (MICU) transfers result from errors in care and why do they occur?
Background: Prior studies have suggested that 14% to 28% of patients admitted to the MICU are unplanned transfers. It is not known what fraction of these transfers result from errors in care, and whether these transfers could be prevented.
Study design: Retrospective cohort study.
Setting: University-affiliated academic medical center.
Synopsis: All unplanned transfers to the MICU from June 1, 2005, to May 30, 2006, were included in the study. Three independent observers, all hospitalists for more than three years, reviewed patient records to determine the cause of unplanned transfers according to a taxonomy the researchers developed for classifying the transfers. They also determined whether the transfer could have been prevented.
Of the 4,468 general medicine admissions during the study period, 152 met inclusion criteria for an unplanned MICU transfer. Errors in care were judged to account for 19% (n=29) of unplanned transfers; 15 of these were due to incorrect triage at admission and 14 to iatrogenic errors, such as opiate overdose during pain treatment or delayed treatment. All 15 triage errors were considered preventable, as were eight of the iatrogenic errors (through earlier intervention). Overall, 23 of the 152 transfers (15%) involved errors judged preventable. Observer agreement was moderate to almost perfect (κ = 0.55-0.90).
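The κ statistic summarizes inter-observer agreement after correcting for agreement expected by chance. A minimal sketch on a hypothetical 2×2 agreement table (the article does not publish the underlying counts):

```python
def cohens_kappa(table):
    """Cohen's kappa for two raters from a square agreement table of counts."""
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / n
    expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(len(table))
    )
    return (observed - expected) / (1 - expected)

# Hypothetical counts for two reviewers rating 152 transfers as
# "error" vs. "no error" -- not the study's actual data
table = [[20, 5],
         [7, 120]]
print(round(cohens_kappa(table), 2))  # prints 0.72
```

Raw agreement in this example is 92%, but much of that is expected by chance given how common "no error" is, so κ lands lower, in the "substantial" band, consistent with the moderate-to-almost-perfect range reported.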
Bottom line: Nearly 1 in 7 unplanned transfers to the medical ICU are associated with preventable errors in care, with the most common error being inappropriate admission triage.
Citation: Bapoje SR, Gaudiani JL, Narayanan V, Albert RK. Unplanned transfers to a medical intensive care unit: causes and relationship to preventable errors in care. J Hosp Med. 2011;6(2):68-72. TH
In This Edition
Literature at a Glance
A guide to this month’s studies
- Eplerenone and heart failure mortality
- Fidaxomicin for C. difficile diarrhea
- Guidelines for intensive insulin therapy
- Benefits of hospitalist comanagement
- Peritoneal dialysis versus hemodialysis
- Pneumococcal urinary antigen to guide CAP treatment
- Race and readmission rate
- Factors associated with readmission
- Unplanned transfers to the ICU
Eplerenone Improves Mortality in Patients with Systolic Heart Failure and Mild Symptoms
Clinical question: Does the selective mineralocorticoid antagonist eplerenone improve outcomes in patients with chronic heart failure and mild symptoms?
Background: In prior studies of miner alocorticoid antagonists in systolic heart failure, spironolactone reduced mortality in patients with moderate to severe heart failure symptoms, and eplerenone reduced mortality in patients with acute myocardial infarction complicated by left ventricular dysfunction. The use of eplerenone in patients with systolic heart failure and mild symptoms has not previously been examined.
Study design: Randomized, double-blind, multicenter, placebo-controlled trial.
Setting: Two hundred seventy-eight centers in 29 countries.
Synopsis: The study authors randomized 2,737 patients with New York Heart Association Class II heart failure and an ejection fraction of no more than 35% to either eplerenone (up to 50 mg daily) or placebo, in addition to recommended therapy. Patients with baseline potassium levels >5 mmol/L or an estimated GFR <30 mL/min/1.73 m² were excluded. The primary outcome was a composite of death from cardiovascular causes or hospitalization for heart failure.
The trial was stopped early, after a median follow-up period of 21 months, when an interim analysis showed significant benefit with eplerenone. The primary outcome occurred in 18.3% of patients in the eplerenone group and 25.9% in the placebo group (hazard ratio [HR], 0.63; 95% CI, 0.54 to 0.74; P<0.001). All-cause mortality was 12.5% in the eplerenone group and 15.5% in the placebo group (HR 0.76; 95% CI, 0.62 to 0.93; P=0.008). A serum potassium level exceeding 5.5 mmol/L occurred in 11.8% of patients in the eplerenone group and 7.2% of those in the placebo group (P<0.001).
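For readers who like to translate trial percentages into a number needed to treat (NNT), the absolute risk reductions reported above work out as follows (a back-of-the-envelope calculation of ours, not a figure reported by the trial):

```python
from math import ceil

def nnt(control_rate, treatment_rate):
    """Number needed to treat: 1 / absolute risk reduction, rounded up."""
    return ceil(1 / (control_rate - treatment_rate))

# Primary composite outcome: 25.9% with placebo vs. 18.3% with eplerenone
print(nnt(0.259, 0.183))  # 14 patients treated to prevent one event

# All-cause mortality: 15.5% with placebo vs. 12.5% with eplerenone
print(nnt(0.155, 0.125))  # 34 patients treated to prevent one death
```

Rounding up is conventional so that the NNT is never overstated.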
Bottom line: Eplerenone reduces both the risk of death and the risk of hospitalization in patients with systolic heart failure and mild symptoms.
Citation: Zannad F, McMurray JJ, Krum H, et al. Eplerenone in patients with systolic heart failure and mild symptoms. N Engl J Med. 2011;364(1):11-21.
Fidaxomicin Noninferior to Vancomycin for C. Difficile Treatment
Clinical question: What is the safety and efficacy of fidaxomicin compared to vancomycin in the treatment of patients with C. difficile infection?
Background: Fidaxomicin, a new macrocyclic antibiotic, has shown high efficacy in vitro against C. diff, minimal systemic absorption, and a narrow-spectrum profile. In previously published Phase 2 trials of fidaxomicin for the treatment of C. diff, it has been associated with good clinical response and low recurrence rates.
Study design: Prospective, multicenter, double-blind, randomized trial.
Setting: Fifty-two sites in the United States and 15 in Canada.
Synopsis: The study included 629 adults with acute symptoms of C. diff and a positive stool toxin test. The patients were randomly assigned to 200-mg twice-daily fidaxomicin or 125-mg four-times-daily vancomycin for a course of 10 days. The primary endpoint was rate of clinical cure (resolution of diarrhea), and secondary endpoints were recurrence of C. diff and global cure (clinical cure and lack of relapse within four weeks of cessation of therapy).
The rate of clinical cure associated with fidaxomicin was noninferior to that associated with vancomycin (88.2% vs. 85.8%, respectively). Patients receiving fidaxomicin had a lower rate of relapse than those receiving vancomycin (15.4% vs. 25.3%, respectively, P=0.005) and a higher global cure rate (74.6% vs. 61.1%, P=0.006). In subgroup analysis, the lower rate of recurrence was seen only in patients infected with strains other than the North American pulsed-field type 1 strain (NAP1/BI/027); in patients with the NAP1/BI/027 strain, recurrence rates were similar for both drugs. There was no difference in adverse event rates.
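The recurrence figures above can also be expressed as a number needed to treat (our informal calculation, not one reported by the authors):

```python
from math import ceil

# Recurrence: 25.3% with vancomycin vs. 15.4% with fidaxomicin
arr = 0.253 - 0.154   # absolute risk reduction, about 9.9 percentage points
print(ceil(1 / arr))  # 11 patients treated to prevent one recurrence
```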
Bottom line: Clinical cure rates of C. diff with fidaxomicin are noninferior to those with vancomycin; however, fidaxomicin is associated with a significantly lower rate of recurrence among those infected with the non-NAP1/BI/027 strain.
Citation: Louie TJ, Miller MA, Mullane KM, et al. Fidaxomicin versus vancomycin for Clostridium difficile infection. N Engl J Med. 2011;364(5):422-431.
ACP Guideline Discourages Use of Intensive Insulin Therapy in Hospitalized Patients
Clinical question: Does the use of intensive insulin therapy (IIT) to achieve tight glycemic control in hospitalized patients (whether in the SICU, MICU, or on the general medicine floor) improve important health outcomes?
Background: Hyperglycemia is a common finding in hospitalized patients and is associated with prolonged length of stay (LOS), death, and worsening health outcomes. Despite this, prospective studies have yet to provide consistent evidence that using IIT to achieve strict glycemic control (80 mg/dL-110 mg/dL) improves outcomes in hospitalized patients.
Study design: Systematic review of MEDLINE and the Cochrane Database of Systematic Reviews from 1950 to January 2010.
Setting: Trials included subjects with myocardial infarction, stroke, and brain injury, as well as those in perioperative settings and ICUs.
Synopsis: The review informing this guideline meta-analyzed 21 trials and found that IIT did not improve short-term mortality, long-term mortality, infection rates, LOS, or the need for renal replacement therapy. Furthermore, IIT was associated with a sixfold increase in risk for severe hypoglycemia in all hospital settings.
Based on these findings, the American College of Physicians (ACP) issued three recommendations:
- To not use IIT to strictly control blood glucose in non-SICU/non-MICU patients with or without diabetes (strong recommendation, moderate-quality evidence);
- To not use IIT to normalize blood glucose in SICU or MICU patients with or without diabetes (strong recommendation, high-quality evidence); and
- To consider a target blood glucose level of 140 mg/dL to 200 mg/dL if insulin therapy is used in SICU or MICU patients (weak recommendation, moderate-quality evidence).
Bottom line: The ACP recommends against using IIT to strictly control blood glucose (80 mg/dL-110 mg/dL) in hospitalized patients, whether in the SICU, MICU, or on the general medicine floor.
Citation: Qaseem A, Humphrey LL, Chou R, Snow V, Shekelle P. Use of intensive insulin therapy for the management of glycemic control in hospitalized patients: a clinical practice guideline from the American College of Physicians. Ann Intern Med. 2011;154(4):260-267.
Limited Benefits Seen with Hospitalist-Neurosurgeon Comanagement
Clinical question: Does hospitalist-neurosurgeon comanagement improve patient outcomes?
Background: The shared management of surgical patients between surgeons and hospitalists is increasingly common despite limited data supporting its effectiveness in reducing costs or improving patient outcomes.
Study design: Single-center, retrospective study.
Setting: Tertiary-care academic medical center.
Synopsis: Data were collected on the 7,596 patients who were admitted to the neurosurgical service of the University of California San Francisco Medical Center from June 1, 2005, to December 31, 2008. The study looked at 4,203 patients (55.3%) admitted before July 1, 2007, when hospitalist comanagement was implemented, and 3,393 patients (44.7%) after comanagement began. Of those admitted during the post-implementation period, 988 (29.1%) were comanaged.
After adjusting for patient characteristics and background trends, and accounting for clustering at the physician level, no differences were found in patient mortality rate, readmissions, or LOS after implementation of comanagement. No consistent improvements were seen in patient satisfaction.
However, physician and staff perceptions of safety and quality of care were significantly better after comanagement. There was a moderate decrease in adjusted hospital costs after implementation (adjusted cost ratio 0.94, range 0.88-1.00) equivalent to a cost savings of about $1,439 per hospitalization.
Bottom line: The implementation of a hospitalist-neurosurgery comanagement service did not improve patient outcomes or satisfaction, but it did appear to improve providers’ perception of care quality and reduce hospital costs.
Citation: Auerbach AD, Wachter RM, Cheng HQ, et al. Comanagement of surgical patients between neurosurgeons and hospitalists. Arch Intern Med. 2010;170(22):2004-2010.
Comparable Mortality Between Hemodialysis and Peritoneal Dialysis
Clinical question: What effect does the initial dialysis modality have on mortality in patients with end-stage renal disease (ESRD)?
Background: Despite the substantially lower annual per-person costs of peritoneal dialysis (PD) as compared with hemodialysis (HD), only 7% of dialysis patients were treated with PD in 2008. It is unknown whether there are differences in mortality between those using PD and HD when examined in contemporary cohorts.
Study design: Retrospective cohort study.
Setting: National cohort.
Synopsis: Data for patients with incident ESRD over a nine-year period were obtained from the U.S. Renal Data Systems (USRDS), a national registry of all patients with ESRD. Initial dialysis modality was defined as the dialysis modality used 90 days after initiation of dialysis. Patients were divided into three three-year cohorts (1996-1998, 1999-2001, and 2002-2004) based on the date dialysis was initiated and followed for up to five years.
A substantial and consistent reduction in mortality was seen for PD patients across the three time periods. No such improvements were observed across the time periods for the HD patients. PD patients were, on average, younger, healthier, and more likely to be white. In an analysis of the most recent cohort adjusting for these factors, there was no significant difference in the risk of death between HD and PD patients. The median life expectancy of HD and PD patients was 38.4 and 36.6 months, respectively.
Limitations of the study include a lack of randomization and failure to consider switches from one dialysis modality to the other.
Bottom line: Patients beginning their renal replacement therapy with PD had similar mortality after five years compared to patients using in-center HD.
Citation: Mehrotra R, Chiu YW, Kalantar-Zadeh K, Bargman J, Vonesh E. Similar outcomes with hemodialysis and peritoneal dialysis in patients with end-stage renal disease. Arch Intern Med. 2011;171(2):110-118.
Pneumococcal Urinary Antigen Test Might Guide Community-Acquired Pneumonia Treatment
Clinical question: What is the diagnostic accuracy and clinical utility of pneumococcal urinary antigen testing in adult patients hospitalized with community-acquired pneumonia (CAP)?
Background: Although CAP is common, our ability to determine its etiology is limited, and empirical broad-spectrum antibiotic therapy is the norm. Pneumococcal urinary antigen testing could allow for the more frequent use of narrow-spectrum pathogen-focused antibiotic therapy.
Study design: Prospective cohort study.
Setting: University-affiliated hospital in Spain.
Synopsis: This study included consecutive adult patients hospitalized with CAP from February 2007 through January 2008: a total of 464 patients with 474 episodes of CAP. Pneumococcal urinary antigen testing was performed in 383 (80.8%) episodes. Streptococcus pneumoniae was felt to be the causative pathogen in 171 cases (36.1%) and was detected exclusively by the urinary antigen test in 75 of those cases (43.8%).
For the urine antigen test, specificity was 96% (95% CI, 86.5 to 99.5), and the positive predictive value was 96.5% (95% CI, 87.9 to 99.5). The results of the test led clinicians to reduce the spectrum of antibiotics in 41 patients, and pneumonia was cured in all 41 of these patients. Treatment was not modified despite positive antigen test results in 89 patients.
Limitations of this study include a lack of complete microbiological data for all patients. The study also highlighted the difficulty in changing clinicians’ prescribing patterns, even when test results indicate the need for treatment modification.
Bottom line: A positive pneumococcal urinary antigen test result in adult patients hospitalized with CAP can help clinicians narrow antimicrobial therapy with good clinical outcomes.
Citation: Sordé R, Falcó V, Lowak M, et al. Current and potential usefulness of pneumococcal urinary antigen detection in hospitalized patients with community-acquired pneumonia to guide antimicrobial therapy. Arch Intern Med. 2011;171(2):166-172.
Racial Disparities Detected in Hospital Readmission Rates
Clinical question: Do black patients have higher odds of readmission than white patients, and, if so, are these disparities related to where black patients receive care?
Background: Racial disparities in healthcare are well documented. Understanding and eliminating those disparities remains a national priority. Reducing hospital readmissions also is a policy focus, as it represents an opportunity to improve quality while reducing costs. Whether there are racial disparities in hospital readmissions at the national level is unknown.
Study design: Retrospective cohort study.
Setting: Medicare fee-for-service beneficiaries from 2006 to 2008.
Synopsis: Medicare discharge data for more than 3 million Medicare fee-for-service beneficiaries aged 65 years or older discharged from January 1, 2006, to November 30, 2008, with the primary discharge diagnosis of acute myocardial infarction (MI), congestive heart failure, or pneumonia were used to calculate risk-adjusted odds of readmission within 30 days of discharge. Hospitals in the highest decile of proportion of black patients were categorized as minority-serving.
Overall, black patients had 13% higher odds of all-cause 30-day readmission than white patients (24.8% vs. 22.6%, OR 1.13, 95% CI, 1.11-1.14), and patients discharged from minority-serving hospitals had 23% higher odds of readmission than patients from non-minority-serving hospitals (25.5% vs. 22.0%, OR 1.23, 95% CI, 1.20-1.27). Among those with acute MI, black patients had 13% higher odds of readmission (OR 1.13, 95% CI, 1.10-1.16), irrespective of the site of care, while patients from minority-serving hospitals had 22% higher odds of readmissions (OR 1.22, 95% CI, 1.17-1.27), even after adjusting for patient race. Similar disparities were seen for CHF and pneumonia. Results were unchanged after adjusting for hospital characteristics, including markers of caring for poor patients.
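Note that the 13% and 23% figures are odds ratios, not ratios of the readmission rates themselves; converting the reported rates to odds reproduces the published value (an illustrative check of ours, not an analysis from the study):

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

# 30-day readmission: 24.8% for black patients vs. 22.6% for white patients
or_black_vs_white = odds(0.248) / odds(0.226)
print(round(or_black_vs_white, 2))  # 1.13, matching the reported OR

# The simple risk ratio is smaller: 0.248 / 0.226, about 1.10
```

Because readmission is common (above 20%), the odds ratio modestly overstates the corresponding relative risk.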
Bottom line: Compared with white patients, elderly black Medicare patients have a higher 30-day hospital readmission rate for MI, CHF, and pneumonia that is not fully explained by the higher readmission rates seen among hospitals that disproportionately care for black patients.
Citation: Joynt KE, Orav EJ, Jha AK. Thirty-day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305(7):675-681.
Easily Identifiable Clinical and Demographic Factors Associated with Hospital Readmission
Clinical question: Which clinical, operational, or demographic factors are associated with 30-day readmission for general medicine patients?
Background: While a few clinical risk factors for hospital readmission have been well defined in subgroups of inpatients, there are still limited data regarding readmission risk that might be associated with a broad range of operational, demographic, and clinical factors in a heterogeneous population of general medicine patients.
Study design: Retrospective observational study.
Setting: Single academic medical center.
Synopsis: The study examined more than 10,300 consecutive admissions (6,805 patients) discharged over a two-year period from 2006 to 2008 from the general medicine service of an urban academic medical center. The 30-day readmission rate was 17.0%.
In multivariate analysis, factors associated with readmission included black race (OR 1.43, 95% CI, 1.24-1.65), inpatient use of narcotics (OR 1.33, 95% CI, 1.16-1.53) and corticosteroids (OR 1.24, 95% CI, 1.09-1.42), and the disease states of cancer (with metastasis OR 1.61, 95% CI, 1.33-1.95; without metastasis OR 1.95, 95% CI, 1.54-2.47), renal failure (OR 1.19, 95% CI, 1.05-1.36), congestive heart failure (OR 1.30, 95% CI, 1.09-1.56), and weight loss (OR 1.26, 95% CI, 1.09-1.47). Medicaid payer status showed a trend toward readmission (OR 1.15, 95% CI, 0.97-1.36). None of the operational factors, including discharge to a skilled nursing facility and weekend discharge, was significantly associated with readmission.
A major limitation of the study was its inability to capture readmissions to hospitals other than the study hospital, which, based on prior studies, could have accounted for nearly a quarter of readmissions.
Bottom line: Readmission of general medicine patients within 30 days is common and associated with several easily identifiable clinical and nonclinical factors.
Citation: Allaudeen N, Vidyarthi A, Maselli J, Auerbach A. Redefining readmission risk factors for general medicine patients. J Hosp Med. 2011;6(2):54-60.
Unplanned Medical ICU Transfers Tied to Preventable Errors
Clinical question: What fraction of unplanned medical ICU (MICU) transfers result from errors in care and why do they occur?
Background: Prior studies have suggested that 14% to 28% of patients admitted to the MICU are unplanned transfers. It is not known what fraction of these transfers result from errors in care, and whether these transfers could be prevented.
Study design: Retrospective cohort study.
Setting: University-affiliated academic medical center.
Synopsis: All unplanned transfers to the MICU from June 1, 2005, to May 30, 2006, were included in the study. Three independent observers, all hospitalists for more than three years, reviewed patient records to determine the cause of unplanned transfers according to a taxonomy the researchers developed for classifying the transfers. They also determined whether the transfer could have been prevented.
Of the 4,468 general medicine admissions during the study period, 152 met inclusion criteria for an unplanned MICU transfer. Errors in care were judged to account for 19% (n=29) of unplanned transfers, 15 of which were due to incorrect triage at admission and 14 to iatrogenic errors, such as opiate overdose during pain treatment or delayed treatment. All 15 triage errors were considered preventable. Of the iatrogenic errors, eight were considered preventable through an earlier intervention. Overall, 23 errors (15%) were thought to be preventable. Observer agreement was moderate to almost perfect (κ = 0.55-0.90).
Bottom line: Nearly 1 in 7 unplanned transfers to the medical ICU are associated with preventable errors in care, with the most common error being inappropriate admission triage.
Citation: Bapoje SR, Gaudiani JL, Narayanan V, Albert RK. Unplanned transfers to a medical intensive care unit: causes and relationship to preventable errors in care. J Hosp Med. 2011;6(2):68-72.
In the Literature
In This Edition
Literature at a Glance
A guide to this month’s studies
- Predictors of readmission for patients with CAP.
- High-dose statins vs. lipid-lowering therapy combinations
- Catheter retention and risks of reinfection in patients with coagulase-negative staph
- Stenting vs. medical management of renal-artery stenosis
- Dabigatran for VTE
- Surgical mask vs. N95 respirator for influenza prevention
- Hospitalization and the risk of long-term cognitive decline
- Maturation of rapid-response teams and outcomes
Commonly Available Clinical Variables Predict 30-Day Readmissions for Community-Acquired Pneumonia
Clinical question: What are the risk factors for 30-day readmission in patients hospitalized for community-acquired pneumonia (CAP)?
Background: CAP is a common admission diagnosis associated with significant morbidity, mortality, and resource utilization. While prior data suggested that patients who survive a hospitalization for CAP are particularly vulnerable to readmission, few studies have examined the risk factors for readmission in this population.
Study design: Prospective, observational study.
Setting: A 400-bed teaching hospital in northern Spain.
Synopsis: From 2003 to 2005, this study consecutively enrolled 1,117 patients who were discharged after hospitalization for CAP. Eighty-one patients (7.2%) were readmitted within 30 days of discharge; 29 (35.8%) of these patients were rehospitalized for pneumonia-related causes.
Variables associated with pneumonia-related rehospitalization were treatment failure (HR 2.9; 95% CI, 1.2-6.8) and one or more instability factors at hospital discharge—for example, vital-sign abnormalities or inability to take food or medications by mouth (HR 2.8; 95% CI, 1.3-6.2). Variables associated with readmission unrelated to pneumonia were age greater than 65 years (HR 4.5; 95% CI, 1.4-14.7), Charlson comorbidity index greater than 2 (HR 1.9; 95% CI, 1.0-3.4), and decompensated comorbidities during index hospitalization.
Patients with at least two of the above risk factors were at a significantly higher risk for 30-day hospital readmission (HR 3.37; 95% CI, 2.08-5.46).
Bottom line: The risk factors for readmission after hospitalization for CAP differed between the groups with readmissions related to pneumonia versus other causes. Patients at high risk for readmission can be identified using easily available clinical variables.
Citation: Capelastegui A, España Yandiola PP, Quintana JM, et al. Predictors of short-term rehospitalization following discharge of patients hospitalized with community-acquired pneumonia. Chest. 2009;136(4):1079-1085.
Combinations of Lipid-Lowering Agents No More Effective than High-Dose Statin Monotherapy
Clinical question: Is high-dose statin monotherapy better than combinations of lipid-lowering agents for dyslipidemia in adults at high risk for coronary artery disease?
Background: While current guidelines support the benefits of aggressive lipid targets, there is little to guide physicians as to the optimal strategy for attaining target lipid levels.
Study design: Systematic review.
Setting: North America, Europe, and Asia.
Synopsis: Very-low-strength evidence showed that statin-ezetimibe (two trials; N=439) and statin-fibrate (one trial; N=166) combinations did not reduce mortality more than high-dose statin monotherapy. No trial data were found comparing the effect of these two strategies on secondary endpoints, including myocardial infarction, stroke, or revascularization.
Two trials (N=295) suggested lower-target lipid levels were more often achieved with statin-ezetimibe combination therapy than with high-dose statin monotherapy (OR 7.21; 95% CI, 4.30-12.08).
Limitations of this systematic review include the small number of studies directly comparing the two strategies, the short duration of most of the studies included, the focus on surrogate outcomes, and the heterogeneity of the study populations’ risk for coronary artery disease. Few studies were available comparing combination therapies other than statin-ezetimibe.
Bottom line: Limited evidence suggests that the combination of a statin with another lipid-lowering agent does not improve clinical outcomes when compared with high-dose statin monotherapy. Low-quality evidence suggests that lower-target lipid levels were more often reached with statin-ezetimibe combination therapy than with high-dose statin monotherapy.
Citation: Sharma M, Ansari MT, Abou-Setta AM, et al. Systematic review: comparative effectiveness and harms of combination therapy and monotherapy for dyslipidemia. Ann Intern Med. 2009;151(9):622-630.
Catheter Retention in Catheter-Related Coagulase-Negative Staphylococcal Bacteremia Is a Significant Risk Factor for Recurrent Infection
Clinical question: Should central venous catheters (CVC) be removed in patients with coagulase-negative staphylococcal catheter-related bloodstream infections (CRBSI)?
Background: Current guidelines for the management of coagulase-negative staphylococcal CRBSI do not recommend routine removal of the CVC, but are based on studies that did not use a strict definition of coagulase-negative staphylococcal CRBSI. Additionally, the studies did not look explicitly at the risk of recurrent infection.
Study design: Retrospective chart review.
Setting: Single academic medical center.
Synopsis: The study retrospectively evaluated 188 patients with coagulase-negative staphylococcal CRBSI. Immediate resolution of the infection was not influenced by the management of the CVC (retention vs. removal or exchange). However, in multiple logistic regression analysis, patients with catheter retention were 6.6 times (95% CI, 1.8-23.9) more likely to have recurrence than patients whose catheter was removed or exchanged.
Bottom line: While CVC management does not appear to have an impact on the acute resolution of infection, catheter retention is a significant risk factor for recurrent bacteremia.
Citation: Raad I, Kassar R, Ghannam D, Chaftari AM, Hachem R, Jiang Y. Management of the catheter in documented catheter-related coagulase-negative staphylococcal bacteremia: remove or retain? Clin Infect Dis. 2009;49(8):1187-1194.
Revascularization Offers No Benefit over Medical Therapy for Renal-Artery Stenosis
Clinical question: Does revascularization plus medical therapy compared with medical therapy alone improve outcomes in patients with renal-artery stenosis?
Background: Renal-artery stenosis is associated with significant hypertension and renal dysfunction. Revascularization for atherosclerotic renal-artery stenosis can improve artery patency, but it remains unclear if it provides clinical benefit in terms of preserving renal function or reducing overall mortality.
Study design: Randomized, controlled trial.
Setting: Fifty-seven outpatient sites in the United Kingdom, Australia, and New Zealand.
Synopsis: The study randomized 806 patients with renal-artery stenosis to receive either medical therapy alone (N=403) or medical management plus endovascular revascularization (N=403).
The majority of the patients who underwent revascularization (95%) received a stent.
The data showed no significant difference between the two groups in progression of renal dysfunction, systolic blood pressure, rates of adverse renal and cardiovascular events, or overall survival. Of the 359 patients who actually underwent revascularization, 23 (6%) experienced serious complications from the procedure, including two deaths and three amputations of toes or limbs.
The primary limitation of this trial is the population studied: it enrolled only patients for whom revascularization was of uncertain clinical benefit in the judgment of their primary-care physicians. Patients for whom revascularization was felt to offer a clear benefit, such as those presenting with rapidly progressive renal dysfunction or with pulmonary edema attributed to renal-artery stenosis, were excluded.
Bottom line: Revascularization provides no benefit to most patients with renal-artery stenosis, and is associated with some risk.
Citation: ASTRAL Investigators, Wheatley K, Ives N, et al. Revascularization versus medical therapy for renal-artery stenosis. N Engl J Med. 2009;361(20):1953-1962.
Dabigatran as Effective as Warfarin in Treatment of Acute VTE
Clinical question: Is dabigatran a safe and effective alternative to warfarin for treatment of acute VTE?
Background: Parenteral anticoagulation followed by warfarin is the standard of care for acute VTE. Warfarin requires frequent monitoring and has numerous drug and food interactions. Dabigatran, which the FDA has yet to approve for use in the U.S., is an oral direct thrombin inhibitor that does not require laboratory monitoring. The role of dabigatran in acute VTE has not been evaluated.
Study design: Randomized, double-blind, noninferiority trial.
Setting: Two hundred twenty-two clinical centers in 29 countries.
Synopsis: This study randomized 2,564 patients with documented VTE (either DVT or pulmonary embolism [PE]) to receive dabigatran 150 mg twice daily or warfarin after at least five days of a parenteral anticoagulant. Warfarin was dose-adjusted to an INR goal of 2.0-3.0. The primary outcome was the incidence of recurrent VTE and related deaths at six months.
A total of 2.4% of patients assigned to dabigatran and 2.1% of patients assigned to warfarin had recurrent VTE (HR 1.10; 95% CI, 0.8-1.5), which met criteria for noninferiority. Major bleeding occurred in 1.6% of patients assigned to dabigatran and 1.9% assigned to warfarin (HR 0.82; 95% CI, 0.45-1.48). There was no difference between groups in overall adverse effects. Discontinuation due to adverse events was 9% with dabigatran compared with 6.8% with warfarin (P=0.05). Dyspepsia was more common with dabigatran (P<0.001).
Bottom line: Following parenteral anticoagulation, dabigatran is a safe and effective alternative to warfarin for the treatment of acute VTE and does not require therapeutic monitoring.
Citation: Schulman S, Kearon C, Kakkar AK, et al. Dabigatran versus warfarin in the treatment of acute venous thromboembolism. N Engl J Med. 2009;361(24):2342-2352.
Surgical Masks as Effective as N95 Respirators for Preventing Influenza
Clinical question: How effective are surgical masks compared with N95 respirators in protecting healthcare workers against influenza?
Background: Evidence surrounding the effectiveness of the surgical mask compared with the N95 respirator for protecting healthcare workers against influenza is sparse.
Study design: Randomized, controlled trial.
Setting: Eight hospitals in Ontario.
Synopsis: The study looked at 446 nurses working in EDs, medical units, and pediatric units randomized to use either a fit-tested N95 respirator or a surgical mask when caring for patients with febrile respiratory illness during the 2008-2009 flu season. The primary outcome measured was laboratory-confirmed influenza. Only a minority of the study participants (30% in the surgical mask group; 28% in the respirator group) received the influenza vaccine during the study year.
Influenza infection occurred with similar incidence in both the surgical-mask and N95 respirator groups (23.6% vs. 22.9%). A two-week audit period demonstrated solid adherence to the assigned respiratory protection device in both groups (11 out of 11 nurses were compliant in the surgical-mask group; six out of seven nurses were compliant in the respirator group).
The major limitation of this study is that it cannot be extrapolated to other settings where there is a high risk for aerosolization, such as intubation or bronchoscopy, where N95 respirators may be more effective than surgical masks.
Bottom line: Surgical masks are as effective as fit-tested N95 respirators in protecting healthcare workers against influenza in most settings.
Citation: Loeb M, Dafoe N, Mahony J, et al. Surgical mask vs. N95 respirator for preventing influenza among health care workers: a randomized trial. JAMA. 2009;302(17):1865-1871.
Neither Major Illness Nor Noncardiac Surgery Associated with Long-Term Cognitive Decline in Older Patients
Clinical question: Is there a measurable and lasting cognitive decline in older adults following noncardiac surgery or major illness?
Background: Despite limited evidence, there is some concern that elderly patients are susceptible to significant, long-term deterioration in mental function following surgery or a major illness. Prior studies often have been limited by lack of information about the trajectory of surgical patients’ cognitive status before surgery and lack of relevant control groups.
Study design: Retrospective, cohort study.
Setting: Single outpatient research center.
Synopsis: The Alzheimer’s Disease Research Center (ADRC) at Washington University in St. Louis continually enrolls research subjects without regard to their baseline cognitive function and provides annual assessments of cognitive functioning.
From the ADRC database, 575 eligible research participants were identified. Of these, 361 had very mild or mild dementia at enrollment, and 214 had no dementia. Participants were then categorized into three groups: those who had undergone noncardiac surgery (N=180); those who had been admitted to the hospital with a major illness (N=119); and those who had experienced neither surgery nor major illness (N=276).
Cognitive trajectory did not differ among the three groups, although participants with baseline dementia declined more rapidly than participants without dementia. Although 23% of patients without dementia developed detectable evidence of dementia during the study period, this outcome was no more common following surgery or major illness.
As participants were assessed annually, this study does not address the issue of post-operative delirium or early cognitive impairment following surgery.
Bottom line: There is no evidence for a long-term effect on cognitive function independently attributable to noncardiac surgery or major illness.
Citation: Avidan MS, Searleman AC, Storandt M, et al. Long-term cognitive decline in older subjects was not attributable to noncardiac surgery or major illness. Anesthesiology. 2009;111(5):964-970.
Rapid-Response System Maturation Decreases Delays in Emergency Team Activation
Clinical question: Does the maturation of a rapid-response system (RRS) improve performance by decreasing delays in medical emergency team (MET) activation?
Background: RRSs have been widely embraced as a possible means to reduce inpatient cardiopulmonary arrests and unplanned ICU admissions. Assessment of RRSs early in their implementation might underestimate their long-term efficacy. Whether the use and performance of RRSs improve as they mature is currently unknown.
Study design: Observational, cohort study.
Setting: Single tertiary-care hospital.
Synopsis: A recent cohort of 200 patients receiving MET review was prospectively compared with a control cohort of 400 patients receiving an MET review five years earlier, at the start of RRS implementation. Information obtained on the two cohorts included demographics, timing of MET activation in relation to the first documented MET review criterion (activation delay), and patient outcomes.
Fewer patients in the recent cohort had delayed MET activation (22.0% vs. 40.3%). The recent cohort also was independently associated with a decreased risk of delayed activation (OR 0.45; 95% C.I., 0.30-0.67) and ICU admission (OR 0.5; 95% C.I., 0.32-0.78). Delayed MET activation independently was associated with greater risk of unplanned ICU admission (OR 1.79; 95% C.I., 1.33-2.93) and hospital mortality (OR 2.18; 95% C.I., 1.42-3.33).
The study is limited by its observational nature, and thus the association between greater delay and unfavorable outcomes should not infer causality.
Bottom line: The maturation of a RRS decreases delays in MET activation. RRSs might need to mature before their full impact is felt.
Citation: Calzavacca P, Licari E, Tee A, et al. The impact of Rapid Response System on delayed emergency team activation patient characteristics and outcomes—a follow-up study. Resuscitation. 2010;81(1):31-35. TH
In This Edition
Literature at a Glance
A guide to this month’s studies
- Predictors of readmission for patients with CAP
- High-dose statins vs. lipid-lowering therapy combinations
- Catheter retention and risks of reinfection in patients with coagulase-negative staph
- Stenting vs. medical management of renal-artery stenosis
- Dabigatran for VTE
- Surgical mask vs. N95 respirator for influenza prevention
- Hospitalization and the risk of long-term cognitive decline
- Maturation of rapid-response teams and outcomes
Commonly Available Clinical Variables Predict 30-Day Readmissions for Community-Acquired Pneumonia
Clinical question: What are the risk factors for 30-day readmission in patients hospitalized for community-acquired pneumonia (CAP)?
Background: CAP is a common admission diagnosis associated with significant morbidity, mortality, and resource utilization. While prior data suggested that patients who survive a hospitalization for CAP are particularly vulnerable to readmission, few studies have examined the risk factors for readmission in this population.
Study design: Prospective, observational study.
Setting: A 400-bed teaching hospital in northern Spain.
Synopsis: From 2003 to 2005, this study consecutively enrolled 1,117 patients who were discharged after hospitalization for CAP. Eighty-one patients (7.2%) were readmitted within 30 days of discharge; 29 (35.8%) of these patients were rehospitalized for pneumonia-related causes.
Variables associated with pneumonia-related rehospitalization were treatment failure (HR 2.9; 95% CI, 1.2-6.8) and one or more instability factors at hospital discharge—for example, vital-sign abnormalities or inability to take food or medications by mouth (HR 2.8; 95% CI, 1.3-6.2). Variables associated with readmission unrelated to pneumonia were age greater than 65 years (HR 4.5; 95% CI, 1.4-14.7), Charlson comorbidity index greater than 2 (HR 1.9; 95% CI, 1.0-3.4), and decompensated comorbidities during index hospitalization.
Patients with at least two of the above risk factors were at a significantly higher risk for 30-day hospital readmission (HR 3.37; 95% CI, 2.08-5.46).
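For readers who track these variables electronically, the risk-factor count can be sketched as a simple rule. This is an illustration only, not a validated tool: the field names and the two-factor threshold are assumptions drawn from the summary above, and the study's exact definitions (e.g., of "instability" and "treatment failure") are richer than shown here.

```python
# Illustrative sketch of a CAP readmission-risk flag based on the risk
# factors reported by Capelastegui et al. Hypothetical field names;
# not a validated clinical tool.
def count_cap_readmission_risk_factors(age, charlson_index,
                                       treatment_failure,
                                       instability_at_discharge,
                                       decompensated_comorbidity):
    """Count the reported risk factors for 30-day readmission."""
    factors = [
        age > 65,                    # HR 4.5 for non-pneumonia readmission
        charlson_index > 2,          # HR 1.9
        treatment_failure,           # HR 2.9 for pneumonia-related readmission
        instability_at_discharge,    # HR 2.8
        decompensated_comorbidity,
    ]
    return sum(factors)

def high_risk(**kwargs):
    # Two or more factors marked the high-risk group (HR 3.37) in the study.
    return count_cap_readmission_risk_factors(**kwargs) >= 2
```

For example, a 72-year-old with a Charlson index of 3 and no other factors already carries two factors and would be flagged.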
Bottom line: The risk factors for readmission after hospitalization for CAP differed between the groups with readmissions related to pneumonia versus other causes. Patients at high risk for readmission can be identified using easily available clinical variables.
Citation: Capelastegui A, España Yandiola PP, Quintana JM, et al. Predictors of short-term rehospitalization following discharge of patients hospitalized with community-acquired pneumonia. Chest. 2009;136(4): 1079-1085.
Combinations of Lipid-Lowering Agents No More Effective than High-Dose Statin Monotherapy
Clinical question: Is high-dose statin monotherapy better than combinations of lipid-lowering agents for dyslipidemia in adults at high risk for coronary artery disease?
Background: While current guidelines support the benefits of aggressive lipid targets, there is little to guide physicians as to the optimal strategy for attaining target lipid levels.
Study design: Systematic review.
Setting: North America, Europe, and Asia.
Synopsis: Very-low-strength evidence showed that statin-ezetimibe (two trials; N=439) and statin-fibrate (one trial; N=166) combinations did not reduce mortality more than high-dose statin monotherapy. No trial data were found comparing the effect of these two strategies on secondary endpoints, including myocardial infarction, stroke, or revascularization.
Two trials (N=295) suggested lower-target lipid levels were more often achieved with statin-ezetimibe combination therapy than with high-dose statin monotherapy (OR 7.21; 95% CI, 4.30-12.08).
Limitations of this systematic review include the small number of studies directly comparing the two strategies, the short duration of most of the studies included, the focus on surrogate outcomes, and the heterogeneity of the study populations’ risk for coronary artery disease. Few studies were available comparing combination therapies other than statin-ezetimibe.
Bottom line: Limited evidence suggests that the combination of a statin with another lipid-lowering agent does not improve clinical outcomes when compared with high-dose statin monotherapy. Low-quality evidence suggests that lower-target lipid levels were more often reached with statin-ezetimibe combination therapy than with high-dose statin monotherapy.
Citation: Sharma M, Ansari MT, Abou-Setta AM, et al. Systematic review: comparative effectiveness and harms of combination therapy and monotherapy for dyslipidemia. Ann Intern Med. 2009;151(9):622-630.
Catheter Retention in Catheter-Related Coagulase-Negative Staphylococcal Bacteremia Is a Significant Risk Factor for Recurrent Infection
Clinical question: Should central venous catheters (CVC) be removed in patients with coagulase-negative staphylococcal catheter-related bloodstream infections (CRBSI)?
Background: Current guidelines for the management of coagulase-negative staphylococcal CRBSI do not recommend routine removal of the CVC, but are based on studies that did not use a strict definition of coagulase-negative staphylococcal CRBSI. Additionally, the studies did not look explicitly at the risk of recurrent infection.
Study design: Retrospective chart review.
Setting: Single academic medical center.
Synopsis: The study retrospectively evaluated 188 patients with coagulase-negative staphylococcal CRBSI. Immediate resolution of the infection was not influenced by management of the CVC (retention vs. removal or exchange). In multiple logistic regression analysis, however, patients with catheter retention were 6.6 times (95% CI, 1.8-23.9) more likely to have a recurrence than patients whose catheter was removed or exchanged.
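As a back-of-envelope check on the reported association, the approximate standard error, z statistic, and P value can be recovered from the odds ratio and its confidence interval. This assumes a standard Wald interval on the log-odds scale, which is an assumption; the authors' exact method may differ.

```python
import math

def wald_stats_from_or(or_point, ci_low, ci_high):
    """Recover SE, z, and a two-sided P value from an OR and its 95% CI,
    assuming a Wald interval on the log-odds scale."""
    log_or = math.log(or_point)
    # The 95% CI spans 2 * 1.96 standard errors on the log scale.
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)
    z = log_or / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided P value
    return se, z, p

se, z, p = wald_stats_from_or(6.6, 1.8, 23.9)
```

The wide interval (1.8-23.9) reflects the small number of recurrences, but the implied P value is still comfortably below 0.05.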
Bottom line: While CVC management does not appear to have an impact on the acute resolution of infection, catheter retention is a significant risk factor for recurrent bacteremia.
Citation: Raad I, Kassar R, Ghannam D, Chaftari AM, Hachem R, Jiang Y. Management of the catheter in documented catheter-related coagulase-negative staphylococcal bacteremia: remove or retain? Clin Infect Dis. 2009;49(8):1187-1194.
Revascularization Offers No Benefit over Medical Therapy for Renal-Artery Stenosis
Clinical question: Does revascularization plus medical therapy compared with medical therapy alone improve outcomes in patients with renal-artery stenosis?
Background: Renal-artery stenosis is associated with significant hypertension and renal dysfunction. Revascularization for atherosclerotic renal-artery stenosis can improve artery patency, but it remains unclear if it provides clinical benefit in terms of preserving renal function or reducing overall mortality.
Study design: Randomized, controlled trial.
Setting: Fifty-seven outpatient sites in the United Kingdom, Australia, and New Zealand.
Synopsis: The study randomized 806 patients with renal-artery stenosis to receive either medical therapy alone (N=403) or medical management plus endovascular revascularization (N=403).
The majority of the patients who underwent revascularization (95%) received a stent.
The data show no significant difference between the two groups in the rate of progression of renal dysfunction, systolic blood pressure, rates of adverse renal and cardiovascular events, and overall survival. Of the 359 patients who underwent revascularization, 23 (6%) experienced serious complications from the procedure, including two deaths and three cases of amputated toes or limbs.
The primary limitation of this trial is the population studied: only patients whose physicians were uncertain whether revascularization would offer clinical benefit were enrolled. Patients thought certain to benefit, such as those presenting with rapidly progressive renal dysfunction or pulmonary edema attributed to renal-artery stenosis, were excluded.
Bottom line: Revascularization provides no benefit to most patients with renal-artery stenosis, and is associated with some risk.
Citation: ASTRAL investigators, Wheatley K, Ives N, et al. Revascularization versus medical therapy for renal-artery stenosis. N Engl J Med. 2009;361(20):1953-1962.
Dabigatran as Effective as Warfarin in Treatment of Acute VTE
Clinical question: Is dabigatran a safe and effective alternative to warfarin for treatment of acute VTE?
Background: Parenteral anticoagulation followed by warfarin is the standard of care for acute VTE. Warfarin requires frequent monitoring and has numerous drug and food interactions. Dabigatran, which the FDA has yet to approve for use in the U.S., is an oral direct thrombin inhibitor that does not require laboratory monitoring. The role of dabigatran in acute VTE has not been evaluated.
Study design: Randomized, double-blind, noninferiority trial.
Setting: Two hundred twenty-two clinical centers in 29 countries.
Synopsis: This study randomized 2,564 patients with documented VTE (either DVT or pulmonary embolism [PE]) to receive dabigatran 150 mg twice daily or warfarin after at least five days of a parenteral anticoagulant. Warfarin was dose-adjusted to an INR goal of 2.0-3.0. The primary outcome was incidence of recurrent VTE and related deaths at six months.
A total of 2.4% of patients assigned to dabigatran and 2.1% of patients assigned to warfarin had recurrent VTE (HR 1.10; 95% CI, 0.8-1.5), which met criteria for noninferiority. Major bleeding occurred in 1.6% of patients assigned to dabigatran and 1.9% assigned to warfarin (HR 0.82; 95% CI, 0.45-1.48). There was no difference between groups in overall adverse effects. Discontinuation due to adverse events was 9% with dabigatran compared with 6.8% with warfarin (P=0.05). Dyspepsia was more common with dabigatran (P<0.001).
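The noninferiority finding can be illustrated with a rough risk-difference calculation on the primary outcome. The group sizes are assumed equal here (about 1,282 per arm) because the summary reports only the total of 2,564, and the trial's prespecified noninferiority margin is not given in this summary.

```python
import math

def risk_difference_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for a difference of two independent proportions."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Recurrent VTE: 2.4% with dabigatran vs. 2.1% with warfarin,
# assuming ~1,282 patients per arm (illustrative assumption).
diff, ci_lo, ci_hi = risk_difference_ci(0.024, 1282, 0.021, 1282)
```

The interval spans roughly -0.8 to +1.4 percentage points, consistent with the trial's conclusion that dabigatran's excess risk, if any, is small.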
Bottom line: Following parenteral anticoagulation, dabigatran is a safe and effective alternative to warfarin for the treatment of acute VTE and does not require therapeutic monitoring.
Citation: Schulman S, Kearon C, Kakkar AK, et al. Dabigatran versus warfarin in the treatment of acute venous thromboembolism. N Engl J Med. 2009;361(24):2342-2352.
Surgical Masks as Effective as N95 Respirators for Preventing Influenza
Clinical question: How effective are surgical masks compared with N95 respirators in protecting healthcare workers against influenza?
Background: Evidence surrounding the effectiveness of the surgical mask compared with the N95 respirator for protecting healthcare workers against influenza is sparse.
Study design: Randomized, controlled trial.
Setting: Eight hospitals in Ontario.
Synopsis: The study randomized 446 nurses working in EDs, medical units, and pediatric units to use either a fit-tested N95 respirator or a surgical mask when caring for patients with febrile respiratory illness during the 2008-2009 flu season. The primary outcome was laboratory-confirmed influenza. Only a minority of the participants (30% in the surgical-mask group; 28% in the respirator group) received the influenza vaccine during the study year.
Influenza infection occurred with similar incidence in the surgical-mask and N95 respirator groups (23.6% vs. 22.9%). A two-week audit demonstrated good adherence to the assigned respiratory protection device in both groups (11 of 11 nurses compliant in the surgical-mask group; six of seven in the respirator group).
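The similarity of the attack rates can be checked with a rough two-proportion z-test. Roughly equal arms are assumed (about 223 nurses each), since the summary reports only the total of 446; the trial itself was designed as a noninferiority comparison, so this is a simplification.

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Pooled two-proportion z-test with a two-sided P value."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = math.erfc(abs(z) / math.sqrt(2))
    return z, p

# Attack rates: 23.6% (surgical mask) vs. 22.9% (N95), ~223 per arm assumed.
z, p = two_prop_z(0.236, 223, 0.229, 223)
```

The z statistic is far from significance, in line with the reported similarity between groups.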
The major limitation of this study is that its findings cannot be extrapolated to settings with a high risk of aerosolization, such as intubation or bronchoscopy, where N95 respirators may be more effective than surgical masks.
Bottom line: Surgical masks are as effective as fit-tested N95 respirators in protecting healthcare workers against influenza in most settings.
Citation: Loeb M, Dafoe N, Mahony J, et al. Surgical mask vs. N95 respirator for preventing influenza among health care workers: a randomized trial. JAMA. 2009;302(17):1865-1871.
Neither Major Illness Nor Noncardiac Surgery Associated with Long-Term Cognitive Decline in Older Patients
Clinical question: Is there a measurable and lasting cognitive decline in older adults following noncardiac surgery or major illness?
Background: Despite limited evidence, there is some concern that elderly patients are susceptible to significant, long-term deterioration in mental function following surgery or a major illness. Prior studies often have been limited by lack of information about the trajectory of surgical patients’ cognitive status before surgery and lack of relevant control groups.
Study design: Retrospective, cohort study.
Setting: Single outpatient research center.
Synopsis: The Alzheimer’s Disease Research Center (ADRC) at Washington University in St. Louis continually enrolls research subjects without regard to their baseline cognitive function and provides annual assessment of cognitive functioning.
From the ADRC database, 575 eligible research participants were identified. Of these, 361 had very mild or mild dementia at enrollment, and 214 had no dementia. Participants were then categorized into three groups: those who had undergone noncardiac surgery (N=180); those who had been admitted to the hospital with a major illness (N=119); and those who had experienced neither surgery nor major illness (N=276).
Cognitive trajectory did not differ between the three groups, although participants with baseline dementia declined more rapidly than participants without dementia. Although 23% of patients without dementia developed detectable evidence of dementia during the study period, this outcome was not more common following surgery or major illness.
As participants were assessed annually, this study does not address the issue of post-operative delirium or early cognitive impairment following surgery.
Bottom line: There is no evidence for a long-term effect on cognitive function independently attributable to noncardiac surgery or major illness.
Citation: Avidan MS, Searleman AC, Storandt M, et al. Long-term cognitive decline in older subjects was not attributable to noncardiac surgery or major illness. Anesthesiology. 2009;111(5):964-970.
Rapid-Response System Maturation Decreases Delays in Emergency Team Activation
Clinical question: Does the maturation of a rapid-response system (RRS) improve performance by decreasing delays in medical emergency team (MET) activation?
Background: RRSs have been widely embraced as a possible means to reduce inpatient cardiopulmonary arrests and unplanned ICU admissions. Assessment of RRSs early in their implementation might underestimate their long-term efficacy. Whether the use and performance of RRSs improve as they mature is currently unknown.
Study design: Observational, cohort study.
Setting: Single tertiary-care hospital.
Synopsis: A recent cohort of 200 patients receiving MET review was prospectively compared with a control cohort of 400 patients receiving an MET review five years earlier, at the start of RRS implementation. Information obtained on the two cohorts included demographics, timing of MET activation in relation to the first documented MET review criterion (activation delay), and patient outcomes.
Fewer patients in the recent cohort had delayed MET activation (22.0% vs. 40.3%). The recent cohort also was independently associated with a decreased risk of delayed activation (OR 0.45; 95% CI, 0.30-0.67) and ICU admission (OR 0.50; 95% CI, 0.32-0.78). Delayed MET activation was independently associated with greater risk of unplanned ICU admission (OR 1.79; 95% CI, 1.33-2.93) and hospital mortality (OR 2.18; 95% CI, 1.42-3.33).
The study is limited by its observational nature; thus, the association between greater delay and unfavorable outcomes should not be interpreted as causal.
Bottom line: The maturation of an RRS decreases delays in MET activation. RRSs might need to mature before their full impact is felt.
Citation: Calzavacca P, Licari E, Tee A, et al. The impact of Rapid Response System on delayed emergency team activation patient characteristics and outcomes—a follow-up study. Resuscitation. 2010;81(1):31-35.
In This Edition
Literature at a Glance
A guide to this month’s studies
- Predictors of readmission for patients with CAP.
- High-dose statins vs. lipid-lowering therapy combinations
- Catheter retention and risks of reinfection in patients with coagulase-negative staph
- Stenting vs. medical management of renal-artery stenosis
- Dabigatran for VTE
- Surgical mask vs. N95 respirator for influenza prevention
- Hospitalization and the risk of long-term cognitive decline
- Maturation of rapid-response teams and outcomes
Commonly Available Clinical Variables Predict 30-Day Readmissions for Community-Acquired Pneumonia
Clinical question: What are the risk factors for 30-day readmission in patients hospitalized for community-acquired pneumonia (CAP)?
Background: CAP is a common admission diagnosis associated with significant morbidity, mortality, and resource utilization. While prior data suggested that patients who survive a hospitalization for CAP are particularly vulnerable to readmission, few studies have examined the risk factors for readmission in this population.
Study design: Prospective, observational study.
Setting: A 400-bed teaching hospital in northern Spain.
Synopsis: From 2003 to 2005, this study consecutively enrolled 1,117 patients who were discharged after hospitalization for CAP. Eighty-one patients (7.2%) were readmitted within 30 days of discharge; 29 (35.8%) of these patients were rehospitalized for pneumonia-related causes.
Variables associated with pneumonia-related rehospitalization were treatment failure (HR 2.9; 95% CI, 1.2-6.8) and one or more instability factors at hospital discharge—for example, vital-sign abnormalities or inability to take food or medications by mouth (HR 2.8; 95% CI, 1.3-6.2). Variables associated with readmission unrelated to pneumonia were age greater than 65 years (HR 4.5; 95% CI, 1.4-14.7), Charlson comorbidity index greater than 2 (HR 1.9; 95% CI, 1.0-3.4), and decompensated comorbidities during index hospitalization.
Patients with at least two of the above risk factors were at a significantly higher risk for 30-day hospital readmission (HR 3.37; 95% CI, 2.08-5.46).
Bottom line: The risk factors for readmission after hospitalization for CAP differed between the groups with readmissions related to pneumonia versus other causes. Patients at high risk for readmission can be identified using easily available clinical variables.
Citation: Capelastegui A, España Yandiola PP, Quintana JM, et al. Predictors of short-term rehospitalization following discharge of patients hospitalized with community-acquired pneumonia. Chest. 2009;136(4): 1079-1085.
Combinations of Lipid-Lowering Agents No More Effective than High-Dose Statin Monotherapy
Clinical question: Is high-dose statin monotherapy better than combinations of lipid-lowering agents for dyslipidemia in adults at high risk for coronary artery disease?
Background: While current guidelines support the benefits of aggressive lipid targets, there is little to guide physicians as to the optimal strategy for attaining target lipid levels.
Study design: Systematic review.
Setting: North America, Europe, and Asia.
Synopsis: Very-low-strength evidence showed that statin-ezetimibe (two trials; N=439) and statin-fibrate (one trial; N=166) combinations did not reduce mortality more than high-dose statin monotherapy. No trial data were found comparing the effect of these two strategies on secondary endpoints, including myocardial infarction, stroke, or revascularization.
Two trials (N=295) suggested lower-target lipid levels were more often achieved with statin-ezetimibe combination therapy than with high-dose statin monotherapy (OR 7.21; 95% CI, 4.30-12.08).
Limitations of this systematic review include the small number of studies directly comparing the two strategies, the short duration of most of the studies included, the focus on surrogate outcomes, and the heterogeneity of the study populations’ risk for coronary artery disease. Few studies were available comparing combination therapies other than statin-ezetimibe.
Bottom line: Limited evidence suggests that the combination of a statin with another lipid-lowering agent does not improve clinical outcomes when compared with high-dose statin monotherapy. Low-quality evidence suggests that lower-target lipid levels were more often reached with statin-ezetimibe combination therapy than with high-dose statin monotherapy.
Citation: Sharma M, Ansari MT, Abou-Setta AM, et al. Systematic review: comparative effectiveness and harms of combination therapy and monotherapy for dyslipidemia. Ann Intern Med. 2009;151(9):622-630.
Catheter Retention in Catheter-Related Coagulase-Negative Staphylococcal Bacteremia Is a Significant Risk Factor for Recurrent Infection
Clinical question: Should central venous catheters (CVC) be removed in patients with coagulase-negative staphylococcal catheter-related bloodstream infections (CRBSI)?
Background: Current guidelines for the management of coagulase-negative staphylococcal CRBSI do not recommend routine removal of the CVC, but are based on studies that did not use a strict definition of coagulase-negative staphylococcal CRBSI. Additionally, the studies did not look explicitly at the risk of recurrent infection.
Study design: Retrospective chart review.
Setting: Single academic medical center.
Synopsis: The study retrospectively evaluated 188 patients with coagulase-negative staphylococcal CRBSI. Immediate resolution of the infection was not influenced by the management of the CVC (retention vs. removal or exchange). However, using the multiple logistic regression technique, patients with catheter retention were found to be 6.6 times (95% CI, 1.8-23.9 times) more likely to have recurrence compared with those patients whose catheter was removed or exchanged.
Bottom line: While CVC management does not appear to have an impact on the acute resolution of infection, catheter retention is a significant risk factor for recurrent bacteremia.
Citation: Raad I, Kassar R, Ghannam D, Chaftari AM, Hachem R, Jiang Y. Management of the catheter in documented catheter-related coagulase-negative staphylococcal bacteremia: remove or retain? Clin Infect Dis. 2009;49(8):1187-1194.
Revascularization Offers No Benefit over Medical Therapy for Renal-Artery Stenosis
Clinical question: Does revascularization plus medical therapy compared with medical therapy alone improve outcomes in patients with renal-artery stenosis?
Background: Renal-artery stenosis is associated with significant hypertension and renal dysfunction. Revascularization for atherosclerotic renal-artery stenosis can improve artery patency, but it remains unclear if it provides clinical benefit in terms of preserving renal function or reducing overall mortality.
Study design: Randomized, controlled trial.
Setting: Fifty-seven outpatient sites in the United Kingdom, Australia, and New Zealand.
Synopsis: The study randomized 806 patients with renal-artery stenosis to receive either medical therapy alone (N=403) or medical management plus endovascular revascularization (N=403).
The majority of the patients who underwent revascularization (95%) received a stent.
The data show no significant difference between the two groups in the rate of progression of renal dysfunction, systolic blood pressure, rates of adverse renal and cardiovascular events, and overall survival. Of the 359 patients who underwent revascularization, 23 (6%) experienced serious complications from the procedure, including two deaths and three cases of amputated toes or limbs.
The primary limitation of this trial is the population studied. The trial only included subjects for whom revascularization offered uncertain clinical benefits, according to their doctor. Those subjects for whom revascularization offered certain clinical benefits, as noted by their primary-care physician (PCP), were excluded from the study. Examples include patients presenting with rapidly progressive renal dysfunction or pulmonary edema thought to be a result of renal-artery stenosis.
Bottom line: Revascularization provides no benefit to most patients with renal-artery stenosis, and is associated with some risk.
Citation: ASTRAL investigators, Wheatley K, Ives N, et al. Revascularization versus medical therapy for renal-artery stenosis. N Eng J Med. 2009;361(20):1953-1962.
Dabigatran as Effective as Warfarin in Treatment of Acute VTE
Clinical question: Is dabigatran a safe and effective alternative to warfarin for treatment of acute VTE?
Background: Parenteral anticoagulation followed by warfarin is the standard of care for acute VTE. Warfarin requires frequent monitoring and has numerous drug and food interactions. Dabigatran, which the FDA has yet to approve for use in the U.S., is an oral direct thrombin inhibitor that does not require laboratory monitoring. The role of dabigatran in acute VTE has not been evaluated.
Study design: Randomized, double-blind, noninferiority trial.
Setting: Two hundred twenty-two clinical centers in 29 countries.
Synopsis: This study randomized 2,564 patients with documented VTE (either DVT or pulmonary embolism [PE]) to receive dabigatran 150mg twice daily or warfarin after at least five days of a parenteral anticoagulant. Warfarin was dose-adjusted to an INR goal of 2.0-3.0. The primary outcome was incidence of recurrent VTE and related deaths at six months.
A total of 2.4% of patients assigned to dabigatran and 2.1% of patients assigned to warfarin had recurrent VTE (HR 1.10; 95% CI, 0.8-1.5), which met criteria for noninferiority. Major bleeding occurred in 1.6% of patients assigned to dabigatran and 1.9% assigned to warfarin (HR 0.82; 95% CI, 0.45-1.48). There was no difference between groups in overall adverse effects. Discontinuation due to adverse events was 9% with dabigatran compared with 6.8% with warfarin (P=0.05). Dyspepsia was more common with dabigatran (P<0.001).
Bottom line: Following parenteral anticoagulation, dabigatran is a safe and effective alternative to warfarin for the treatment of acute VTE and does not require therapeutic monitoring.
Citation: Schulman S, Kearon C, Kakkar AK, et al. Dabigatran versus warfarin in the treatment of acute venous thromboembolism. N Engl J Med. 2009;361(24):2342-2352.
Surgical Masks as Effective as N95 Respirators for Preventing Influenza
Clinical question: How effective are surgical masks compared with N95 respirators in protecting healthcare workers against influenza?
Background: Evidence surrounding the effectiveness of the surgical mask compared with the N95 respirator for protecting healthcare workers against influenza is sparse.
Study design: Randomized, controlled trial.
Setting: Eight hospitals in Ontario.
Synopsis: The study looked at 446 nurses working in EDs, medical units, and pediatric units randomized to use either a fit-tested N95 respirator or a surgical mask when caring for patients with febrile respiratory illness during the 2008-2009 flu season. The primary outcome measured was laboratory-confirmed influenza. Only a minority of the study participants (30% in the surgical mask group; 28% in the respirator group) received the influenza vaccine during the study year.
Influenza infection occurred with similar incidence in both the surgical-mask and N95 respirator groups (23.6% vs. 22.9%). A two-week audit period demonstrated solid adherence to the assigned respiratory protection device in both groups (11 out of 11 nurses were compliant in the surgical-mask group; six out of seven nurses were compliant in the respirator group).
The major limitation of this study is that it cannot be extrapolated to other settings where there is a high risk for aerosolization, such as intubation or bronchoscopy, where N95 respirators may be more effective than surgical masks.
Bottom line: Surgical masks are as effective as fit-tested N95 respirators in protecting healthcare workers against influenza in most settings.
Citation: Loeb M, Dafoe N, Mahony J, et al. Surgical mask vs. N95 respirator for preventing influenza among health care workers: a randomized trial. JAMA. 2009;302 (17):1865-1871.
Neither Major Illness Nor Noncardiac Surgery Associated with Long-Term Cognitive Decline in Older Patients
Clinical question: Is there a measurable and lasting cognitive decline in older adults following noncardiac surgery or major illness?
Background: Despite limited evidence, there is some concern that elderly patients are susceptible to significant, long-term deterioration in mental function following surgery or a major illness. Prior studies often have been limited by lack of information about the trajectory of surgical patients’ cognitive status before surgery and lack of relevant control groups.
Study design: Retrospective, cohort study.
Setting: Single outpatient research center.
Synopsis: The Alzheimer’s Disease Research Center (ADRC) at the University of Washington in St. Louis continually enrolls research subjects without regard to their baseline cognitive function and provides annual assessment of cognitive functioning.
From the ADRC database, 575 eligible research participants were identified. Of these, 361 had very mild or mild dementia at enrollment, and 214 had no dementia. Participants were then categorized into three groups: those who had undergone noncardiac surgery (N=180); those who had been admitted to the hospital with a major illness (N=119); and those who had experienced neither surgery nor major illness (N=276).
Cognitive trajectory did not differ among the three groups, although participants with baseline dementia declined more rapidly than participants without dementia. Although 23% of patients without dementia developed detectable evidence of dementia during the study period, this outcome was not more common following surgery or major illness.
As participants were assessed annually, this study does not address the issue of post-operative delirium or early cognitive impairment following surgery.
Bottom line: There is no evidence for a long-term effect on cognitive function independently attributable to noncardiac surgery or major illness.
Citation: Avidan MS, Searleman AC, Storandt M, et al. Long-term cognitive decline in older subjects was not attributable to noncardiac surgery or major illness. Anesthesiology. 2009;111(5):964-970.
Rapid-Response System Maturation Decreases Delays in Emergency Team Activation
Clinical question: Does the maturation of a rapid-response system (RRS) improve performance by decreasing delays in medical emergency team (MET) activation?
Background: RRSs have been widely embraced as a possible means to reduce inpatient cardiopulmonary arrests and unplanned ICU admissions. Assessment of RRSs early in their implementation might underestimate their long-term efficacy. Whether the use and performance of RRSs improve as they mature is currently unknown.
Study design: Observational cohort study.
Setting: Single tertiary-care hospital.
Synopsis: A recent cohort of 200 patients receiving MET review was prospectively compared with a control cohort of 400 patients receiving an MET review five years earlier, at the start of RRS implementation. Information obtained on the two cohorts included demographics, timing of MET activation in relation to the first documented MET review criterion (activation delay), and patient outcomes.
Fewer patients in the recent cohort had delayed MET activation (22.0% vs. 40.3%). Membership in the recent cohort also was independently associated with a decreased risk of delayed activation (OR 0.45; 95% CI, 0.30-0.67) and of unplanned ICU admission (OR 0.5; 95% CI, 0.32-0.78). Delayed MET activation was independently associated with greater risk of unplanned ICU admission (OR 1.79; 95% CI, 1.33-2.93) and hospital mortality (OR 2.18; 95% CI, 1.42-3.33).
The study is limited by its observational nature; thus, the association between greater delay and unfavorable outcomes should not be taken to imply causality.
Bottom line: The maturation of an RRS decreases delays in MET activation. RRSs might need to mature before their full impact is felt.
Citation: Calzavacca P, Licari E, Tee A, et al. The impact of Rapid Response System on delayed emergency team activation patient characteristics and outcomes—a follow-up study. Resuscitation. 2010;81(1):31-35.