Impact of Peri-Operative Beta Blockers on Cardiovascular Morbidity, Mortality

Clinical question: What is the impact of peri-operative beta blockers on cardiovascular morbidity and mortality in patients undergoing surgery under general anesthesia?

Background: Studies evaluating the effects of peri-operative beta blockers on cardiovascular outcomes have yielded conflicting results.

Study design: Systematic review.

Setting: Varied.

Synopsis: This review included 89 randomized controlled trials (RCTs) of peri-operative beta blocker administration for patients undergoing surgery under general anesthesia. For noncardiac surgery (36 trials), beta blockers were associated with a possible increase in all-cause mortality (RR 1.24, 95% CI 0.99 to 1.54) and cerebrovascular events (RR 1.59, 95% CI 0.93 to 2.71), although neither confidence interval excluded 1. Beta blockers significantly increased the occurrence of hypotension (RR 1.50, 95% CI 1.38 to 1.64) and bradycardia (RR 2.24, 95% CI 1.49 to 3.35). They significantly reduced the occurrence of acute myocardial infarction (AMI) (RR 0.73, 95% CI 0.61 to 0.87), myocardial ischemia (RR 0.43, 95% CI 0.27 to 0.70), and supraventricular arrhythmias (RR 0.72, 95% CI 0.56 to 0.92). No effect was found on ventricular arrhythmias, congestive heart failure, or length of hospital stay.

For cardiac surgery (53 trials), peri-operative beta blockers were associated with a significant reduction in ventricular arrhythmias (RR 0.37, 95% CI 0.24 to 0.58) and supraventricular arrhythmias (RR 0.44, 95% CI 0.36 to 0.53), and a reduction in length of hospital stay of 0.54 days (95% CI -0.90 to -0.19). No effect was found on all-cause mortality, AMI, myocardial ischemia, cerebrovascular events, hypotension, bradycardia, or congestive heart failure.

These results do not provide sufficient evidence to change recommendations from current ACC/AHA guidelines for peri-operative beta blocker administration.

Bottom line: For noncardiac surgeries, beta blockers might increase all-cause mortality and stroke while reducing supraventricular arrhythmias and acute myocardial infarctions. Because much of the evidence is from low- to moderate-quality trials, there is not sufficient evidence to modify current recommendations regarding the use of peri-operative beta blockers.

Citation: Blessberger H, Kammler J, Domanovits H, et al. Perioperative beta-blockers for preventing surgery-related mortality and morbidity. Cochrane Database Syst Rev. 2014;9:CD004476.

Issue
The Hospitalist - 2015(02)

Resident Handoff Program Associated with Improved Inpatient Outcomes

Clinical question: Does the implementation of a handoff program lead to improved patient safety?

Background: Communication failure at the time of handoff of patient care from one resident to another is a significant cause of medical errors. Programs to improve the quality of handoffs have been created to reduce such errors, but few have been rigorously evaluated.

Study design: Prospective cohort study.

Setting: Inpatient units at nine pediatric residency programs in the United States and Canada.

Synopsis: The study team evaluated the impact of the I-PASS Handoff Bundle (illness severity, patient summary, action items, situation awareness and contingency planning, and synthesis by receiver) from January 2011 through May 2013. Compared with the pre-intervention period, there was a 23% reduction in medical errors in the post-intervention period (24.5 vs. 18.8 per 100 admissions; P<0.001), a 30% reduction in preventable adverse events (4.7 vs. 3.3 events per 100 admissions; P<0.001), and a significant increase in the inclusion of all key elements of handoff communication. There were no significant changes in duration of handoffs or resident workflow.

Given the emphasis placed on teaching reliable communication to trainees, many residency programs are developing curricula on proper handoff practices. Although the pre-post nature of this study prevents a causal relationship from being established, the outcomes provide evidence in support of this particular handoff improvement program.

Bottom line: The I-PASS Handoff Bundle might reduce preventable adverse events and medical errors without significant impact on handoff duration or resident workflow.

Citation: Starmer AJ, Spector ND, Srivastava R, et al. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371(19):1803-1812.

Issue
The Hospitalist - 2015(02)

Automated Early Warning, Response System Could Improve Sepsis Outcomes

Clinical question: Does implementation of an electronic sepsis detection and response system improve patient outcomes?

Background: It is known that interventions such as goal-directed resuscitation and antibiotics can reduce sepsis mortality, but their effectiveness depends on early administration. This fact has increased interest in developing an effective, automated system to improve the timeliness of sepsis detection.

Study design: Pre-implementation/post-implementation with multivariable analysis.

Setting: Urban, academic, multi-hospital healthcare system.

Synopsis: Using the electronic health record (EHR) at the University of Pennsylvania Health System, an automated early warning and response system (EWRS) was developed and implemented to detect patients at risk of clinical deterioration and progression to severe sepsis. The EWRS monitored vital signs and key laboratory results in real time.

Multivariable analysis compared a pre-implementation cohort of adult non-ICU acute care patients admitted from June 6, 2012, to September 4, 2012, to a post-implementation cohort of patients admitted from June 6, 2013, to September 4, 2013.

Hospital and ICU lengths of stay were similar in both cohorts, and no difference was seen in the overall proportion of patients transferred to the ICU following the alert; however, the increase in ICU transfers within six hours of the alert reached statistical significance after adjustment. All mortality measures were lower in the post-implementation period, but none reached statistical significance. Discharge to home and sepsis documentation were significantly higher in the post-implementation period, though discharge to home lost statistical significance after adjustment.

Although these data are encouraging, the findings are limited, because none of the mortality measures reached statistical significance. Further studies are required before large-scale implementation of such a system can be considered.

Bottom line: An automated prediction tool identified at-risk patients and prompted bedside evaluation resulting in more timely sepsis care, improved documentation, and a trend toward reduced mortality.

Citation: Umscheid CA, Betesh J, VanZandbergen C, et al. Development, implementation, and impact of an automated early warning and response system for sepsis [published online ahead of print September 26, 2014]. J Hosp Med. doi: 10.1002/jhm.2259.

Issue
The Hospitalist - 2015(02)

Intermittent, Continuous Proton Pump Inhibitor Therapies Are Comparable

Clinical question: Is intermittent proton pump inhibitor (PPI) therapy comparable to the current standard of continuous PPI infusion for high-risk bleeding ulcers?

Background: Current guidelines recommend an intravenous bolus dose of a PPI followed by continuous PPI infusion for three days after endoscopic therapy in patients with high-risk bleeding ulcers. Substitution of intermittent PPI therapy, if comparable, could decrease PPI dose, cost, and resource use.

Study design: Systematic review and meta-analysis of randomized, controlled trials.

Setting: Review of medical databases through December 2013.

Synopsis: A total of 13 studies met eligibility criteria; the primary outcome was the incidence of recurrent bleeding within seven days of starting a PPI regimen.

The upper boundary of the 95% CI for the absolute risk difference between intermittent and continuous infusion PPI therapy was -0.28% for the primary outcome, indicating that there was no increase in recurrent bleeding with intermittent versus continuous PPI therapy.

Although the overall analysis shows that intermittent use of PPIs is noninferior to bolus plus continuous infusion of PPIs, this study does not delineate which intermittent PPI regimen is most appropriate.

The included trials varied in dosing schedules, total doses, specific PPIs, and routes of administration (both oral and intravenous). In addition, different endoscopic therapies may have achieved variable results for the primary outcome of rebleeding and could therefore confound the results.

Bottom line: Intermittent PPI therapy is comparable to the current guideline-recommended regimen of intravenous bolus plus continuous infusion of PPIs in patients with endoscopically treated, high-risk bleeding ulcers.

Citation: Sachar H, Vaidya K, Laine L. Intermittent vs. continuous proton pump inhibitor therapy for high-risk bleeding ulcers. JAMA Intern Med. 2014;174(11):1755-1762.

Issue
The Hospitalist - 2015(02)

VTE Treatment Strategies Don't Differ in Efficacy, Safety

Clinical question: Are there differences in efficacy and safety between the treatment strategies for acute venous thromboembolism (VTE)?

Background: There are a number of treatment strategies available for acute VTE. Prior to this study, no large meta-analysis review of strategies had been conducted to compare efficacy and safety.

Study design: Systematic literature review and meta-analysis.

Setting: Patients with confirmed symptomatic acute VTE or confirmed symptomatic recurrent VTE in the inpatient or ambulatory setting.

Synopsis: The review identified 45 relevant studies with a total of 44,989 patients. The analysis showed no statistically significant differences in efficacy or safety among most treatment strategies for acute VTE when compared with the low molecular weight heparin-vitamin K antagonist combination; specifically, no differences were found in either effectiveness or bleeding risk. However, the analysis did suggest that the unfractionated heparin-vitamin K antagonist combination was the least effective, resulting in higher rates of recurrent VTE. Additionally, the use of rivaroxaban or apixaban was associated with the lowest risk of bleeding.

Hospitalists treating patients with acute VTE need to use caution when attempting to translate these results into practice. This study did not address comorbidities present in patients with VTE that might limit certain treatment strategies. Also, no studies directly compare the new direct oral anticoagulants, so their use requires thoughtful consideration.

Bottom line: There is no significant difference in efficacy and safety between the strategies used to treat acute VTE.

Citation: Castellucci LA, Cameron C, Le Gal G, et al. Clinical and safety outcomes associated with treatment of acute venous thromboembolism: a systematic review and meta-analysis. JAMA. 2014;312(11):1122-1135.

Issue
The Hospitalist - 2015(02)
Publications
Sections


Antimicrobial Prescribing Common in Inpatient Setting


Clinical question: What is the daily prevalence of antimicrobial use in acute-care hospitals?

Background: Inappropriate antimicrobial use is associated with adverse events and contributes to the emergence of resistant pathogens. Strategies need to be implemented to reduce inappropriate use. An understanding of antibiotic prevalence and epidemiology in hospitals will aid in the development of these strategies.

Study design: Cross-sectional prevalence study.

Setting: Acute-care hospitals in 10 states.

Synopsis: Surveys were conducted in 183 hospitals (11,282 patients) to assess the prevalence of antimicrobial prescription on a given day. The survey showed 51.9% of patients were receiving antimicrobials. Four antimicrobials (parenteral vancomycin, piperacillin-tazobactam, ceftriaxone, and levofloxacin) accounted for 45% of all antimicrobial treatments.

Additionally, 54% of antimicrobials were used to treat three infection syndromes: lower respiratory tract, urinary tract, and skin and soft tissue. This prescribing pattern was consistent between community-acquired and healthcare-acquired infections, as well as inside and outside the critical care unit. The study authors concluded that these four antimicrobials and these three infection syndromes could be the focus of strategies to reduce antimicrobial overuse.

Hospitalists need to use caution, as these data are from 2011 and prescribing patterns might have changed since. Also, the study included only 183 hospitals, which limits generalizability. In addition, the study did not account for patients’ diagnoses, so the appropriateness of the antimicrobial prescriptions is difficult to assess.

Bottom line: Use of broad-spectrum antibiotics such as vancomycin is common in hospitalized patients.

Citation: Magill SS, Edwards JR, Beldavs ZG, et al. Prevalence of antimicrobial use in US acute care hospitals, May-September 2011. JAMA. 2014;312(14):1438-1446.

Issue
The Hospitalist - 2015(02)


Hemoglobin Transfusion Threshold Not Associated with Differences in Morbidity, Mortality Among Patients with Septic Shock


Clinical question: Is there a difference in 90-day mortality and other outcomes when a lower versus higher hemoglobin threshold is used for blood transfusions in ICU patients with septic shock?

Background: Patients with septic shock frequently receive blood transfusions. This often occurs in the setting of active bleeding but has also been observed in non-bleeding patients at variable hemoglobin levels. Concrete data regarding the efficacy and safety of such transfusions based on hemoglobin thresholds are lacking.

Study design: International, multi-center, stratified, parallel group randomized trial.

Setting: General ICUs in Denmark, Norway, Sweden, and Finland.

Synopsis: Researchers analyzed data from 998 ICU patients in the Transfusion Requirements in Septic Shock (TRISS) trial. The primary outcome was the 90-day mortality rate. Hemoglobin levels of less than 7 g/dL and less than 9 g/dL were used as the lower and higher thresholds, respectively. The corresponding mortality rates were 43% and 45% (RR 0.94; 95% CI 0.78 to 1.09; P=0.44); results were similar after adjustment for risk factors. Additionally, there were no differences in secondary outcomes (i.e., use of life support, development of ischemic events, and severe adverse reactions).

Hospitalists involved in managing patients with septic shock should be aware that 90-day mortality and several secondary outcomes were similar regardless of which hemoglobin threshold was used.

Bottom line: Ninety-day mortality and other outcomes were not affected by transfusion thresholds in ICU patients with septic shock.

Citation: Holst LB, Haase N, Wetterslev J, et al. Lower versus higher hemoglobin threshold for transfusion in septic shock. N Engl J Med. 2014;371(15):1381-1391.

Issue
The Hospitalist - 2015(02)


Early, Goal-Directed Therapy Doesn’t Improve Mortality in Patients with Early Septic Shock


Clinical question: Does early goal-directed therapy (EGDT) improve mortality in patients presenting to the ED with early septic shock?

Background: EGDT (achieving a central venous pressure of 8-12 mmHg, central venous oxygen saturation (ScvO2) >70%, mean arterial pressure ≥65 mmHg, and urine output ≥0.5 mL/kg/h) has been endorsed by the Surviving Sepsis Campaign as a key strategy to decrease mortality among patients with septic shock, but its effectiveness is uncertain and has been questioned by a recent randomized trial.

Study design: Prospective, randomized, parallel group trial.

Setting: Fifty-one tertiary and non-tertiary care metropolitan and rural hospitals, mainly in Australia and New Zealand.

Synopsis: Researchers randomized 1,600 patients who presented to the ED with early septic shock (evidence of refractory hypotension or hypoperfusion) to receive EGDT or usual care for six hours. All patients received antimicrobials and fluid resuscitation (approximately 2.5 liters) before randomization. There was no significant difference between the groups for the primary outcome (all-cause mortality at 90 days), but the EGDT group was more likely to receive vasopressor support and red blood cell transfusions and to have invasive monitoring.

Analysis for the whole group and various patient subgroups (location, age, APACHE II score, and others) did not show any benefit from using EGDT for any outcomes (including length of stay in ICU and hospital, invasive mechanical ventilation, and use of renal replacement therapy).

This study confirms that early diagnosis and aggressive treatment of sepsis are crucial. EGDT might be less important when fluid resuscitation and antimicrobials are started early after sepsis is suspected. With continued improvement in these areas, monitoring the parameters EGDT requires (such as ScvO2, which requires a special catheter) might not be as important.

Bottom line: Early goal-directed therapy is not associated with improved mortality in sepsis in patients treated early with antimicrobials and aggressive fluid resuscitation.

Citation: ARISE Investigators, ANZICS Clinical Trials Group, Peake SL, Delaney A, Bailey M, et al. Goal-directed resuscitation for patients with early septic shock. N Engl J Med. 2014;371(16):1496-1506.

Issue
The Hospitalist - 2015(02)


Arterial Catheter Use in ICU Doesn’t Improve Hospital Mortality


Clinical question: Does the use of arterial catheters (AC) improve hospital mortality in ICU patients requiring mechanical ventilation?

Background: ACs are used in 40% of ICU patients, mostly to facilitate diagnostic phlebotomy (including arterial blood gases) and to improve hemodynamic monitoring. Despite known risks (limb ischemia, pseudoaneurysm, infection) and the costs of insertion and maintenance, data regarding their impact on outcomes are limited.

Study design: Propensity-matched cohort analysis of data in the Project IMPACT database.

Setting: 139 ICUs in the U.S., with larger and urban hospitals providing the majority of the data.

Synopsis: Of 60,975 medical patients who required mechanical ventilation, 24,126 (39.6%) had an AC. Propensity score matching yielded 13,603 pairs of patients with and without an AC, and the two groups did not differ significantly on the many variables that could influence mortality. No association was noted between AC use and hospital mortality in medical ICU patients requiring mechanical ventilation, a finding confirmed in eight of nine secondary cohort analyses. In one cohort (patients requiring vasopressors), AC use was associated with an 8% increase in the odds of death. More blood transfusions were administered in the AC group, although this finding did not reach statistical significance.

Despite the rigorous and complex statistical analysis used in this study, residual confounders remained. It is still possible, but unlikely, that patients with an AC could have had a higher expected mortality, which the use of the AC ameliorated. This study raises an important question that should ideally be addressed by randomized trials.

Bottom line: Arterial catheters used in mechanically ventilated patients in the ICU are not associated with lower mortality and should therefore be used with caution, weighing the risks and benefits, until more studies are performed.

Citation: Gershengorn HB, Wunsch H, Scales DC, Zarychanski R, Rubenfeld G, Garland A. Association between arterial catheter use and hospital mortality in intensive care units. JAMA Intern Med. 2014;174(11):1746-1754.

Issue
The Hospitalist - 2015(02)


Bedside Attention Tests May Be Useful in Detecting Delirium


Clinical question: Are simple bedside attention tests a reliable way to routinely screen for delirium?

Background: Early diagnosis of delirium decreases adverse outcomes, but it often goes unrecognized, in part because clinicians do not routinely screen for it. Patients at high risk of delirium should be assessed regularly, although the best brief screening method is unknown. For example, the Confusion Assessment Method (CAM) requires training and is time-consuming to administer.

Study design: Cross-sectional portion of a larger point prevalence study.

Setting: Adult inpatients in a large university hospital in Ireland.

Synopsis: The study population (265 adult inpatients) was screened for inattention using months of the year backwards (MOTYB) and Spatial Span Forwards (SSF), a visual pattern recognition test. In addition, subjective/objective reports of confusion were gathered by interviewing patients and nurses and by reviewing physician documentation. Any patient who failed at least one of the screening tests or had reports of confusion was administered the CAM and then evaluated by a team of psychiatrists experienced in delirium detection.

Combining MOTYB with assessment of objective/subjective reports of delirium was the most accurate way to screen for delirium (sensitivity 93.8%, specificity 84.7%). In older patients (>69 years), MOTYB by itself was the most accurate. Addition of the CAM as a second-line screening test increased specificity but led to an unacceptable drop in sensitivity.

Hospitalists can easily incorporate the MOTYB test into daily patient assessments to help identify delirious patients but should be mindful of this study’s limitations: it involved patients at a single institution, assessed only two bedside attention tests, and completed formal delirium testing only in patients who screened positive.

Bottom line: Simple attention tests, particularly MOTYB, could be useful in increasing recognition of delirium among adult inpatients.

Citation: O’Regan NA, Ryan DJ, Boland E, et al. Attention! A good bedside test for delirium? J Neurol Neurosurg Psychiatry. 2014;85(10):1122-1131.

Issue
The Hospitalist - 2015(02)

Display Headline
Bedside Attention Tests May Be Useful in Detecting Delirium