In the Literature: Research You Need to Know
Clinical question: What is the prognostic influence of atrial fibrillation in patients with acute myocardial infarction?
Background: There have been conflicting reports regarding the prognostic impact of atrial fibrillation (AF) in patients with acute myocardial infarction (MI). This study represents the first meta-analysis performed to quantify the mortality risk associated with AF in MI patients.
Study design: Meta-analysis of observational studies.
Setting: Forty-three studies involving 278,854 patients diagnosed with MI from 1972 to 2000.
Synopsis: The odds ratio (OR) of mortality associated with AF in MI patients was 1.46 (95% CI, 1.35-1.58; I²=76%; 23 studies). Although there was significant heterogeneity among the included studies, subgroup analysis showed that the association between AF and mortality was present whether the AF was new (defined as occurring for the first time within one week of MI; OR 1.37, 95% CI 1.26-1.49; I²=28%; nine studies) or old (defined as pre-existing before the MI admission; OR 1.28, 95% CI 1.16-1.40; I²=24%; four studies). Sensitivity analyses performed by pooling studies according to follow-up duration and adjustment for confounding clinical factors had little effect on these estimates.
Bottom line: AF was associated with increased mortality in patients with MI regardless of the timing of AF development.
Citation: Jabre P, Roger VL, Murad MH, et al. Mortality associated with atrial fibrillation in patients with myocardial infarction. Circulation. 2011;123:1587-1593.
For more physician reviews of HM-related literature, visit our website.
In the Literature: HM-Related Research You Need to Know
In This Edition
Literature At A Glance
A guide to this month’s studies
- PCI Not Inferior to CABG in Left Main Coronary Artery Stenosis at One Year, But Requires Further Study
- CABG Did Not Decrease Mortality in Patients with CAD and Left Ventricular Dysfunction
- Linezolid Not Superior to Glycopeptide Antibiotics in Treatment of Nosocomial Pneumonia
- CRP and Procalcitonin Independently Differentiated Pneumonia from Asthma or COPD Exacerbation
- Survival Benefit Demonstrated with FOLFIRINOX in Select Patients with Metastatic Pancreatic Cancer
- MRSA Bundle Implementation at VA Hospitals Reduced Healthcare-Associated MRSA Infections
- New Left Bundle Branch Block Does Not Predict MI
- Acute Beta-Blocker Therapy for MI Increased Risk of Shock
PCI Not Inferior to CABG in Left Main Coronary Artery Stenosis at One Year, But Requires Further Study
Clinical question: Is percutaneous coronary intervention (PCI) an acceptable alternative to coronary artery bypass grafting (CABG) in unprotected left main coronary artery disease (CAD)?
Background: The current standard of care for unprotected left main CAD is CABG. A sub-study from a large randomized trial suggests that PCI might be an alternative to CABG for patients with left main CAD. Outcomes after the two treatments have not been directly compared in an appropriately powered trial.
Study design: Prospective, open-label, randomized trial powered for noninferiority.
Setting: Thirteen sites in South Korea.
Synopsis: Six hundred patients with newly diagnosed left main disease with >50% stenosis were randomized to PCI with a sirolimus-eluting stent versus CABG. The primary endpoint of major adverse cardiac or cerebrovascular events occurred in 8.7% of the PCI group and 6.7% of the CABG group at one year (absolute risk difference, 2.0 percentage points; 95% CI, -1.6 to 5.6; P=0.01 for noninferiority), which met the prespecified noninferiority criterion.
However, ischemia-driven target-vessel revascularization occurred in significantly more patients in the PCI group than in the CABG group. The noninferiority margin was wide, and an unexpectedly low event rate left the study underpowered. In addition, follow-up was only two years.
Bottom line: PCI with a sirolimus-eluting stent was noninferior to CABG for unprotected left main CAD in this study, but the wide noninferiority margin and limited follow-up duration limit clinical application.
Reference: Park SJ, Kim YH, Park DW, et al. Randomized trial of stents versus bypass surgery for left main coronary artery disease. N Engl J Med. 2011;364(18):1718-1727.
CABG Did Not Decrease Mortality in Patients with CAD and Left Ventricular Dysfunction
Clinical question: What role does coronary-artery bypass grafting (CABG) have in the treatment of patients with both coronary artery disease (CAD) and heart failure?
Background: Although CAD is the most common cause of heart failure, early trials that evaluated the use of CABG in relieving angina excluded patients who had left ventricular (LV) dysfunction with ejection fraction <35%. It is unknown whether CABG adds mortality benefit to intensive medical treatment in patients with CAD and LV dysfunction.
Study design: Multicenter, nonblinded, randomized trial.
Setting: One hundred twenty-seven sites in 26 countries.
Synopsis: From July 2002 to May 2007, 1,212 patients with known CAD amenable to CABG and LV ejection fraction <35% were randomized to medical therapy alone versus CABG plus medical therapy with an average follow-up of five years. The primary outcome of death from any cause occurred in 41% of the medical-therapy-alone group and 36% of the CABG-plus-medical-therapy group (hazard ratio with CABG 0.86; 95% CI 0.72 to 1.04; P=0.12).
Despite subgroup analysis suggesting decreased death rates from cardiovascular causes in the latter group, there was no significant difference in the primary endpoint of death from any cause.
Bottom line: The addition of CABG to medical therapy did not significantly decrease all-cause mortality in patients with CAD and left ventricular dysfunction.
Reference: Velazquez EJ, Lee KL, Deja MA, et al. Coronary-artery bypass surgery in patients with left ventricular dysfunction. N Engl J Med. 2011;364(17):1607-1616.
Linezolid Not Superior to Glycopeptide Antibiotics in Treatment of Nosocomial Pneumonia
Clinical question: Is linezolid superior to glycopeptide antibiotics in the treatment of nosocomial pneumonia?
Background: Current ATS/IDSA guidelines suggest that linezolid might be preferred over glycopeptide antibiotics (i.e., vancomycin and teicoplanin) for methicillin-resistant Staphylococcus aureus (MRSA) pneumonia, although this recommendation is based on a retrospective subgroup analysis of one randomized trial. No systematic reviews have examined the comparative efficacy and safety of linezolid and glycopeptide antibiotics for nosocomial pneumonia.
Study design: Meta-analysis using a highly sensitive search method.
Setting: Eight multicenter, randomized controlled trials (RCTs).
Synopsis: The study authors retrieved 762 articles with a highly sensitive search strategy, from which eight RCTs were identified that met study criteria for a total of 1,641 patients. Primary outcome of clinical success at test-of-cure was not different between the two classes of antibiotics (pooled RR 1.04, 95% CI 0.97-1.11, P=0.28). Other endpoints, including mortality and microbiologic eradication, were similar between the two groups.
Clinical success in the subgroup of patients with culture-confirmed MRSA pneumonia was not different from that in patients without culture-proven MRSA, although the study was not powered for subgroup analysis. Risks of thrombocytopenia and renal impairment were not statistically different in the limited subset of trials reporting these data.
The results should not be generalized to community-acquired MRSA pneumonia or to MRSA pneumonia with characteristics of a Panton-Valentine leukocidin (PVL) toxin-producing strain.
Bottom line: For the treatment of nosocomial pneumonia, there was no significant difference in clinical success or mortality between linezolid and glycopeptide antibiotics.
Citation: Walkey AJ, O’Donnell MR, Weiner RS. Linezolid vs. glycopeptide antibiotics for the treatment of suspected methicillin-resistant Staphylococcus aureus nosocomial pneumonia. Chest. 2011;139:1148-1155.
CRP and Procalcitonin Independently Differentiated Pneumonia from Asthma or COPD Exacerbation
Clinical question: Are biomarkers such as CRP or procalcitonin useful in differentiating pneumonia from asthma or COPD exacerbation in hospitalized patients?
Background: Antibiotic overuse is associated with the emergence of drug resistance. One potential strategy to decrease antibiotic overuse is biomarker-guided therapy. Several randomized controlled trials (RCTs) of procalcitonin-guided therapy have demonstrated reduced antibiotic use for symptoms of acute respiratory tract infection (RTI). The use of CRP as a biomarker in acute RTI is not as well described.
Study design: Prospective, observational, diagnostic accuracy study.
Setting: Winter months, 2006 to 2008, in two hospitals in England.
Synopsis: The study examined 319 patients: 62 with pneumonia, 96 with asthma exacerbation, and 161 with COPD exacerbation. Patients with pneumonia had significantly higher procalcitonin and CRP levels than those with COPD (P<0.0001) or asthma (P<0.0001). The area under the receiver operating characteristic curve for distinguishing between pneumonia (requiring antibiotics) and asthma exacerbation (not requiring antibiotics) was 0.93 (0.88-0.98) for procalcitonin and 0.96 (0.93-1.00) for CRP. A CRP value >48 mg/L had a sensitivity of 91% (95% CI, 80%-97%) and a specificity of 93% (95% CI, 86%-98%).
Using this CRP threshold, antibiotic use would have been reduced by 88% in asthma exacerbation, 76% in COPD exacerbation, and 9% in pneumonia cases.
This strategy was developed in a single-center study and requires further validation in a multicenter RCT.
Bottom line: Procalcitonin and CRP were elevated in patients with pneumonia compared to patients with asthma or COPD exacerbation and might be useful in guiding antibiotic usage.
Citation: Bafadhel M, Clark TW, Reid C, et al. Procalcitonin and C-reactive protein in hospitalized adult patients with community-acquired pneumonia or exacerbation of asthma or COPD. Chest. 2011;139:1410-1418.
Survival Benefit Demonstrated with FOLFIRINOX in Select Patients with Metastatic Pancreatic Cancer
Clinical question: How does FOLFIRINOX compare to gemcitabine as first-line treatment of metastatic pancreatic cancer?
Background: Single-agent gemcitabine is the standard first-line treatment for metastatic pancreatic cancer. Preclinical studies followed by Phase 1 and Phase 2 studies have demonstrated response to the oxaliplatin, irinotecan, leucovorin, and fluorouracil regimen (FOLFIRINOX).
Study design: Multicenter, randomized, controlled Phase 2-3 trial.
Setting: Fifteen centers in France during Phase 2, which then expanded to 48 centers for Phase 3.
Synopsis: Three hundred forty-two patients with good performance status (ECOG 0 or 1) and age <76 years were randomized to receive FOLFIRINOX or gemcitabine. Median survival was significantly longer in the FOLFIRINOX group, at 11.1 months, compared with 6.8 months in the gemcitabine group (HR 0.57; 95% CI, 0.45-0.73; P<0.001).
Median progression-free survival, objective response rate, and quality of life score at six months were significantly increased in the FOLFIRINOX group. Significantly more grade 3 or grade 4 toxicity was reported in the FOLFIRINOX group.
Patients with elevated bilirubin were excluded because of the increased risk of irinotecan-induced toxicity; as a result, only 38% of study patients had carcinoma of the pancreatic head, and only a small proportion of enrolled patients (14.3%) had biliary stents.
Bottom line: FOLFIRINOX was associated with a significant survival advantage compared with single-agent gemcitabine in carefully selected patients with advanced pancreatic cancer, although at the cost of increased toxicity.
Citation: Conroy T, Desseigne F, Ychou M, et al. FOLFIRINOX versus gemcitabine for metastatic pancreatic cancer. N Engl J Med. 2011;364(19):1817-1825.
MRSA Bundle Implementation at VA Hospitals Reduced Healthcare-Associated MRSA Infections
Clinical question: Can nationwide implementation of a “MRSA bundle,” including universal surveillance, contact isolation, hand hygiene, and institutional culture change, influence healthcare-associated MRSA infection rates?
Background: MRSA is a common cause of nosocomial infection. A pilot project at a single Veterans Affairs (VA) hospital utilized a “MRSA bundle” developed from published guidelines, which resulted in decreased healthcare-associated MRSA infections. In October 2007, the MRSA bundle was implemented throughout VA hospitals nationwide.
Study design: Quality-improvement (QI) observational initiative.
Setting: One hundred fifty-eight acute-care VA hospitals in the U.S.
Synopsis: From October 2007 to June 2010, there were 1,934,598 admissions, transfers, or discharges, and 8,318,675 patient-days. Of this study group, 96% of patients were screened at admission and 93% were screened at transfer or discharge. The prevalence of MRSA colonization or infection at the time of admission was 13.6%. Rates of healthcare-associated MRSA infection declined 45% in the non-ICU setting (0.47 to 0.26 per 1,000 patient-days, P<0.001) and 62% in the ICU setting (1.64 to 0.62 per 1,000 patient-days, P<0.001).
It is unclear how much each individual component of the MRSA bundle impacted the declining MRSA infection rate.
Bottom line: Implementation of a “MRSA bundle,” including universal surveillance, contact isolation, hand hygiene, and institutional culture change, decreased the healthcare-associated MRSA infection rate in a large hospital system.
Citation: Jain R, Kralovic S, Evans M, et al. Veterans Affairs initiative to prevent methicillin-resistant Staphylococcus aureus infections. N Engl J Med. 2011;364(15):1419-1430.
New Left Bundle Branch Block Does Not Predict MI
Clinical question: How does the chronicity of left bundle branch block (LBBB) impact diagnosis and outcome in patients undergoing evaluation for acute myocardial infarction (MI)?
Background: ACC/AHA guidelines recommend that patients with new or presumed new LBBB undergo early reperfusion therapy. However, previous studies have shown that only a minority of patients with new LBBB are diagnosed with MI.
Study design: Prospective cohort study.
Setting: University hospital in the U.S.
Synopsis: From 1994 to 2009, 401 consecutive patients undergoing evaluation for acute coronary syndrome with LBBB on initial ECG were included in the analysis. Of these patients, 64% had new (37%) or presumably new (27%) LBBB. Twenty-nine percent were diagnosed with MI, but there was no difference in frequency or size of MI between the new, presumably new, or chronic LBBB groups.
Concordant ST-T changes were an independent predictor of MI (OR 17, 95% CI 3.4-81, P<0.001) and mortality (OR 4.3, 95% CI 1.3-15, P<0.001), although this finding was present in only about 11% of the patient group.
Bottom line: New or presumably new LBBB did not predict MI, although concordant ST-T changes were an independent predictor of MI and mortality.
Citation: Kontos MC, Aziz HA, Chau VQ, et al. Outcomes in patients with chronicity of left bundle-branch block with possible acute myocardial infarction. Am Heart J. 2011;161(4):698-704.
Acute Beta-Blocker Therapy for MI Increased Risk of Shock
Clinical question: How does acute beta-blocker therapy in myocardial infarction (MI) impact outcome?
Background: Long-term treatment with beta-blockers after myocardial infarction (MI) reduces mortality. However, data regarding outcomes after acute use of beta-blockers in the first 24 hours of MI are conflicting. Updated ACC/AHA guidelines for STEMI and NSTEMI recommend caution when using beta-blockers in the first 24 hours, particularly in patients at risk for shock.
Study design: Observational registry study.
Setting: Two hundred ninety-one U.S. hospitals.
Synopsis: More than 34,600 patients diagnosed with STEMI or NSTEMI from January 2007 to June 2008 were identified from a national QI MI registry. Patients were stratified by guideline-stated risk factors for shock: age >70 years, heart rate >110 beats per minute, and systolic BP <120 mmHg were associated with increased risk of the composite outcome of shock or death.
At least one high-risk factor was present in 63% of the NSTEMI patients and 45% of STEMI patients; however, >90% of these patients received acute beta-blocker therapy. Nearly half (49%) of the NSTEMI patients received beta-blockers in the ED and 62% of the STEMI patients received beta-blockers before PCI.
In a multivariable model, NSTEMI patients receiving beta-blocker therapy in the ED were more likely to develop cardiogenic shock (OR 1.54, 95% CI 1.26-1.88, P<0.001), as were STEMI patients receiving beta-blocker therapy prior to PCI (OR 1.40, 95% CI 1.10-1.79, P=0.006).
Bottom line: Caution should be exercised when using beta-blocker therapy during acute MI, particularly in the ED or prior to primary PCI.
Citation: Kontos MC, Diercks DB, Ho MP, Wang TY, Chen AY, Roe MT. Treatment and outcomes in patients with myocardial infarction treated with acute beta-blocker therapy: results from the American College of Cardiology’s NCDR. Am Heart J. 2011;161(5):864-870.
In the Literature: Research You Need to Know
Clinical question: Is transcatheter aortic-valve replacement comparable to surgical valve replacement in high-operative-risk patients?
Background: In the randomized Placement of Aortic Transcatheter Valves (PARTNER) trial, patients who were not surgical candidates underwent transcatheter aortic-valve replacement, resulting in reduced symptoms and a 20-percentage-point absolute improvement in one-year survival. Transcatheter valve replacement has not been compared with surgical replacement in high-operative-risk patients who remain candidates for surgical replacement.
Study design: Randomized controlled trial powered for noninferiority.
Setting: Twenty-five centers in the U.S., Canada, and Germany.
Synopsis: Six hundred ninety-nine high-operative-risk patients with severe aortic stenosis were randomized to undergo transcatheter aortic-valve replacement or surgical replacement. In the intention-to-treat analysis, all-cause mortality rates were 3.4% in the transcatheter group and 6.5% in the surgical group at 30 days (P=0.07) and 24.2% vs. 26.8% at one year (P=0.44). Rates of major stroke were 3.8% in the transcatheter group compared with 2.1% in the surgical group at 30 days (P=0.20) and 5.1% vs. 2.4% at one year (P=0.07).
The transcatheter group had a significantly higher rate of major vascular complications but lower rates of major bleeding and new-onset atrial fibrillation. At one year, improvement in cardiac symptoms and six-minute-walk distance did not differ significantly between the two groups.
Bottom line: In high-operative-risk patients with severe aortic stenosis, transcatheter and surgical aortic-valve replacement had similar mortality at 30 days and one year, but there were a few significant differences in periprocedural risks.
Citation: Smith CR, Leon MB, Mack MJ, et al. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med. 2011;364(23):2187-2198.
For more physician reviews of HM-related literature, visit our website.
In the Literature
In This Edition
Literature at a Glance
A guide to this month’s studies
- CPOE and quality outcomes
- Outcomes of standardized management of endocarditis
- Effect of tPA three to 4.5 hours after stroke onset
- Failure to notify patients of significant test results
- PFO repair and stroke rate
- Predictors of delay in defibrillation for in-hospital arrest
- H. pylori eradication and risk of future gastric cancer
- Bleeding risk with fondaparinux vs. enoxaparin in ACS
- Perceptions of physician ability to predict medical futility
CPOE Is Associated with Improvement in Quality Measures
Clinical question: Is computerized physician order entry (CPOE) associated with improved outcomes across a large, nationally representative sample of hospitals?
Background: Several single-institution studies suggest CPOE leads to better outcomes in quality measures for heart failure, acute myocardial infarction, and pneumonia as defined by the Hospital Quality Alliance (HQA) initiative, led by the Centers for Medicare and Medicaid Services (CMS). Little systematic information is known about the effects of CPOE on quality of care.
Study design: Cross-sectional study.
Setting: The Healthcare Information and Management Systems Society (HIMSS) Analytics database of 3,364 hospitals throughout the U.S.
Synopsis: Of the hospitals that reported CPOE utilization to HIMSS, 264 (7.8%) had fully implemented CPOE throughout their institutions. These CPOE hospitals outperformed their peers on five of 11 quality measures related to ordering medications and on one of nine non-medication-related measures. No difference was noted in the other measures, except that CPOE hospitals were less effective at providing antibiotics within four hours of pneumonia diagnosis. Hospitals that utilized CPOE were generally academic, larger, and nonprofit; after adjusting for these differences, the benefits persisted.
The authors argue that because CPOE hospitals did not outperform across all 20 quality measures, the observed advantages are unlikely to be explained entirely by other factors (e.g., concomitant QI efforts). Given the observational nature of this study, however, no causal relationship can be established between CPOE and the observed benefits. CPOE might simply mark hospitals already committed to quality measures, and further study is needed.
Bottom line: Hospitals that utilize CPOE throughout their institutions show better compliance with several CMS-established quality measures.
Citation: Yu FB, Menachemi N, Berner ES, Allison JJ, Weissman NW, Houston TK. Full implementation of computerized physician order entry and medication-related quality outcomes: a study of 3,364 hospitals. Am J Med Qual. 2009;24(4):278-286.
Standardized Management of Endocarditis Leads to Significant Mortality Benefit
Clinical question: Does a standardized approach to the treatment of infective endocarditis reduce mortality and morbidity?
Background: Despite epidemiological shifts in the causative organisms and improvements in available antibiotics, mortality and morbidity from endocarditis remain high. How much inconsistent or inaccurate treatment contributes to these poor outcomes is unclear.
Study design: Case series with historical controls from 1994 to 2001, compared with protocolized patients from 2002 to 2006.
Setting: Single teaching tertiary-care hospital in France.
Synopsis: The authors compared patients managed with a diagnostic protocol alone from 1994 to 2001 (period 1) with patients managed under a standardized diagnostic and treatment protocol from 2002 to 2006 (period 2). Despite being significantly sicker (older, with more comorbidities, more coagulase-negative staphylococcal infections, and fewer previously healthy valves), period-2 patients had a dramatically lower mortality rate of 8.2%, compared with 18.5% in period 1 (P<0.001). Fewer episodes of renal failure, organ failure, and embolism-related death were noted in period 2.
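The mortality figures above imply sizable crude effect measures; a back-of-the-envelope calculation (unadjusted, and not part of the study's own analysis) illustrates the magnitude:
\[
\text{ARR} = 18.5\% - 8.2\% = 10.3\ \text{percentage points},\qquad
\text{RRR} = \frac{10.3}{18.5} \approx 56\%,\qquad
\text{NNT} \approx \frac{1}{0.103} \approx 10 .
\]
That is, roughly one death was avoided for every 10 patients managed under the protocol, if the crude rates are taken at face value despite the sicker period-2 population.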
Whether the improvement reflects more frequent care, more aggressive follow-up (patients were “summoned” if they did not show for appointments), standardized medical and surgical management, or the effects of long-term collaboration, the results appear durable, remarkable, and reproducible.
This study is limited by its lack of randomization and its long time frame, during which medical treatments and the spectrum of infecting organisms changed.
Bottom line: Implementation of a standardized approach to endocarditis confers a significant mortality and morbidity benefit.
Citation: Botelho-Nevers E, Thuny F, Casalta JP, et al. Dramatic reduction in infective endocarditis-related mortality with a management-based approach. Arch Intern Med. 2009;169(14):1290-1298.
Treatment with tPA in the Three- to 4.5-Hour Time Window after Stroke Is Beneficial
Clinical question: What is the effect of tissue plasminogen activator (tPA) on outcomes in patients treated in the three- to 4.5-hour window after stroke?
Background: The third European Cooperative Acute Stroke Study (ECASS-3) demonstrated a benefit of tPA for acute stroke in the three- to 4.5-hour time window. Earlier trials, however, did not show superiority of tPA over placebo in this window, and a confirmatory randomized, controlled trial is lacking.
Study design: Meta-analysis of randomized, controlled trials.
Setting: Four randomized trials involving 1,622 patients treated with intravenous tPA or placebo three to 4.5 hours after onset of acute ischemic stroke.
Synopsis: Of the randomized, controlled trials of intravenous tPA for treatment of acute ischemic stroke from three to 4.5 hours after stroke, four trials (ECASS-1, ECASS-2, ECASS-3, and ATLANTIS) were included in the analysis. Treatment with tPA in the three- to 4.5-hour time window is associated with increased favorable outcomes based on the global outcome measure (OR 1.31; 95% CI: 1.10-1.56, P=0.002) and the modified Rankin Scale (OR 1.31; 95% CI: 1.07-1.59, P=0.01), compared with placebo. The 90-day mortality rate was not significantly different between the treatment and placebo groups (OR 1.04; 95% CI 0.75-1.43, P=0.83).
Due to the relatively high dose of tPA (1.1 mg/kg) administered in the ECASS-1 trial, a separate meta-analysis looking at the other three trials (tPA dose of 0.9 mg/kg) was conducted, and the favorable outcome with tPA remained.
Bottom line: Treatment of acute ischemic stroke with tPA in the three- to 4.5-hour time window results in an increased rate of favorable functional outcomes without a significant difference in mortality.
Citation: Lansberg MG, Bluhmki E, Thijs VN. Efficacy and safety of tissue plasminogen activator 3 to 4.5 hours after acute ischemic stroke: a metaanalysis. Stroke. 2009;40(7):2438-2441.
Outpatients Often Are Not Notified of Clinically Significant Test Results
Clinical question: How frequently do primary-care physicians (PCPs) fail to inform patients of clinically significant outpatient test results?
Background: Diagnostic errors are the most common cause of malpractice claims in the U.S. It is unclear how often providers fail to either inform patients of abnormal test results or document that patients have been notified.
Study design: Retrospective chart review.
Setting: Twenty-three primary-care practices: 19 private, four academic.
Synopsis: More than 5,400 charts were reviewed, and 1,889 abnormal test results were identified in this study. Failure to inform or document notification was identified in 135 cases (7.1%). The failure rates in the practices ranged from 0.0% to 26.2%. Practices with the best processes for managing test results had the lowest failure rates; these processes included: all results being routed to the responsible physician; the physician signing off on all results; the practice informing patients of all results, both normal and abnormal; documenting when the patient is informed; and instructing patients to call if not notified of test results within a certain time interval.
Limitations of this study include the potential for over- or underreporting of failures to inform, because a chart review was used, and the inclusion of only practices that agreed to participate.
Bottom line: Failure to notify outpatients of test results is common but can be minimized by adopting systematic test-result management processes that incorporate best practices.
Citation: Casalino LP, Dunham D, Chin MH, et al. Frequency of failure to inform patients of clinically significant outpatient test results. Arch Intern Med. 2009;169(12):1123-1129.
Repair of Incidental PFO Discovered During Cardiothoracic Surgery Increases Postoperative Stroke Risk
Clinical question: What is the impact of closing incidentally discovered patent foramen ovale (PFO) defects during cardiothoracic surgery?
Background: PFO’s role in cryptogenic stroke remains controversial. Incidental PFO is commonly detected by transesophageal echocardiography (TEE) during cardiothoracic surgery. Routine closure has been recommended when it requires little or no alteration of the surgical plan.
Study design: Retrospective chart review.
Setting: The Cleveland Clinic.
Synopsis: Between 1995 and 2006, 13,092 patients without a previous diagnosis of PFO underwent cardiothoracic surgery with intraoperative TEE; a PFO was discovered intraoperatively in 2,277 (17%). Of these, 639 (28%) had the PFO repaired.
Patients with an intraoperative diagnosis of PFO had similar rates of in-hospital stroke and hospital death compared with those without PFO. Patients who had their PFO repaired had a greater in-hospital stroke risk (2.8% vs. 1.2%; P=0.04) compared with those with a non-repaired PFO, representing nearly 2.5 times greater odds of having an in-hospital stroke. No other difference was noted in perioperative outcomes for patients who underwent intraoperative repair compared with those who did not, including risk of in-hospital death, hospital length of stay, ICU length of stay, and time on cardiopulmonary bypass. Long-term analysis demonstrated that PFO repair was associated with no survival difference.
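As a rough check on the reported effect size, the unadjusted odds ratio implied by the raw stroke rates can be computed directly (a crude calculation; the study's adjusted estimate may differ slightly):
\[
\text{OR} = \frac{0.028/(1-0.028)}{0.012/(1-0.012)} = \frac{0.0288}{0.0121} \approx 2.4 ,
\]
which is consistent with the “nearly 2.5 times greater odds” quoted above.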
The study is limited by its retrospective nature.
Bottom line: Routine surgical closure of incidental PFO detected during intraoperative imaging should be discouraged.
Citation: Krasuski RA, Hart SA, Allen D, et al. Prevalence and repair of intraoperatively diagnosed patent foramen ovale and association with perioperative outcomes and long-term survival. JAMA. 2009;302(3):290-297.
Hospital-Level Differences Are Strong Predictors of Delayed Defibrillation in Cardiac Arrest
Clinical question: What are the predictors of delay in the time to defibrillation after in-hospital cardiac arrest?
Background: Thirty percent of in-hospital cardiac arrests from ventricular arrhythmias are not defibrillated within the two minutes recommended by the American Heart Association. This delay is associated with a 50% lower rate of in-hospital survival. Exploring the hospital-level variation in delays to defibrillation is a critical step toward sharing best practices.
Study design: Retrospective review of registry data.
Setting: The National Registry of Cardiopulmonary Resuscitation (NRCPR) survey of 200 acute-care, nonpediatric hospitals.
Synopsis: The registry identified 7,479 patients who experienced cardiac arrest from ventricular tachycardia or pulseless ventricular fibrillation. The primary outcome was the hospital rate of delayed defibrillation (time to defibrillation > two minutes), which ranged from 2% to 51%.
Time to defibrillation was found to be a major predictor of survival after a cardiac arrest. Only bed volume and arrest location were associated with differences in rates of delayed defibrillation (lower rates in larger hospitals and in ICUs). The variability was not due to differences in patient characteristics, but was due to hospital-level effects. Academic status, geographical location, arrest volume, and daily admission volume did not affect the time to defibrillation.
The study was able to identify only a few facility characteristics that account for the variability between hospitals in the rate of delayed defibrillation. The study emphasizes the need for new approaches to identifying hospital innovations in process-of-care measures that are associated with improved performance in defibrillation times.
Bottom Line: Future research is needed to better understand the reason for the wide variation between hospitals in the rate of delayed defibrillation after in-hospital cardiac arrest.
Citation: Chan PS, Nichol G, Krumholz HM, Spertus JA, Nallamothu BK; American Heart Association National Registry of Cardiopulmonary Resuscitation (NRCPR) Investigators. Hospital variation in time to defibrillation after in-hospital cardiac arrest. Arch Intern Med. 2009;169(14):1265-1273.
Treating H. pylori Reduces the Risk of Developing Gastric Cancer in High-Risk Patients
Clinical question: In populations with a high baseline incidence of gastric cancer, does H. pylori eradication reduce the risk of developing gastric cancer?
Background: Gastric cancer remains a major health problem in Asia. The link between H. pylori and gastric cancer is established, but it remains unclear whether H. pylori eradication is effective primary prevention against gastric cancer.
Study design: Meta-analysis of six studies.
Setting: All but one trial was performed in Asia.
Synopsis: Seven studies met inclusion criteria, one of which was excluded due to heterogeneity. The six remaining studies were pooled: 37 of 3,388 (1.1%) treated patients developed a new gastric cancer, compared with 56 of 3,307 (1.7%) patients who received placebo or were in the control group (RR 0.65; 95% CI, 0.43-0.98). Most patients underwent gastric biopsy prior to enrollment, and most of those demonstrated gastric atrophy or intestinal metaplasia.
These patients, despite more advanced precancerous pathology findings, still benefited from eradication. The seventh study, which was excluded, enrolled patients with early gastric cancer; these patients also benefited from H. pylori eradication, and when the study was included in the meta-analysis, the RR was even lower, 0.57 (95% CI, 0.49-0.81).
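The six-study pooled estimate can be sanity-checked against the raw counts reported above (a crude calculation that ignores study-level weighting, so it only approximates the formal meta-analytic result):
\[
\text{RR}_{\text{crude}} = \frac{37/3388}{56/3307} = \frac{0.0109}{0.0169} \approx 0.65 ,
\]
which agrees with the reported pooled RR of 0.65.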
Only two trials were double-blinded, but all of the studies employed the same definition of gastric cancer and held to excellent data-reporting standards. Given the widespread prevalence of H. pylori, this study supports screening and treatment in high-risk patients.
Bottom Line: Treatment of H. pylori reduces the risk of gastric cancer in high-risk patients.
Citation: Fuccio L, Zagari RM, Eusebi LH, et al. Meta-analysis: can Helicobacter pylori eradication treatment reduce the risk for gastric cancer? Ann Intern Med. 2009;151(2):121-128.
Patients with Acute Coronary Syndrome on Antiplatelet Agents Have a Lower Bleeding Risk When Treated with Fondaparinux
Clinical question: Is there a difference in bleeding risk between fondaparinux and enoxaparin when used with GPIIb/IIIa inhibitors or thienopyridines in non-ST-elevation acute coronary syndrome (NSTE-ACS)?
Background: The OASIS 5 study reported a 50% reduction in severe bleeding with fondaparinux compared with enoxaparin in ACS, with similar efficacy. This subgroup analysis evaluated whether the reduced bleeding risk with fondaparinux persists in patients also treated with antiplatelet agents.
Study design: Subgroup analysis of a large, multicenter, randomized, double-blind trial.
Setting: Acute-care hospitals in North America, Eastern and Western Europe, Latin America, Australia, and Asia.
Synopsis: Patients with NSTE-ACS received either fondaparinux or enoxaparin and were treated with GPIIb/IIIa inhibitors or thienopyridines at the discretion of their physician. At 30 days, fondaparinux showed similar efficacy and a decreased bleeding risk in both the GPIIb/IIIa and thienopyridine groups. Among the 3,630 patients receiving GPIIb/IIIa inhibitors, the risk of major bleeding was 5.2% with fondaparinux versus 8.3% with enoxaparin (HR 0.61; P<0.001). Among the 1,352 patients treated with thienopyridines, the risk of major bleeding was 3.4% with fondaparinux versus 5.4% with enoxaparin (HR 0.62; P<0.001).
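As a quick plausibility check, the crude 30-day risk ratios implied by these percentages are close to the reported hazard ratios (the two measures coincide only approximately, and only when event rates are relatively low):
\[
\frac{5.2\%}{8.3\%} \approx 0.63 \ \ (\text{reported HR } 0.61),\qquad
\frac{3.4\%}{5.4\%} \approx 0.63 \ \ (\text{reported HR } 0.62).
\]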
Bottom line: This subgroup analysis suggests fewer severe bleeding complications with fondaparinux than with enoxaparin in the setting of cotreatment with GPIIb/IIIa inhibitors, thienopyridines, or both.
Citation: Jolly SS, Faxon DP, Fox KA, et al. Efficacy and safety of fondaparinux versus enoxaparin in patients with acute coronary syndromes treated with glycoprotein IIb/IIIa inhibitors or thienopyridines: results from the OASIS 5 (Fifth Organization to Assess Strategies in Ischemic Syndromes) trial. J Am Coll Cardiol. 2009;54(5):468-476.
Surrogate Decision-Makers Frequently Doubt Clinicians’ Ability to Predict Medical Futility
Clinical question: What attitudes do surrogate decision-makers hold toward clinicians’ predictions of medical futility in critically ill patients?
Background: The clinical judgment of medical futility leading to the withdrawal of life-sustaining treatment—despite the objections of surrogate decision-makers—is controversial. Very little is known about how surrogate decision-makers view the futility rationale when physicians suggest limiting the use of life-sustaining treatment.
Study design: Multicenter, mixed, qualitative and quantitative study.
Setting: Three ICUs in three different California hospitals from 2006 to 2007.
Synopsis: Semi-structured interviews of surrogate decision-makers for 50 incapacitated, critically ill patients were performed to ascertain their beliefs about medical futility in response to hypothetical situations. Of the surrogates surveyed, 64% expressed doubt about physicians’ futility predictions.
The interviewees gave four main reasons for their doubts. Two reasons not previously described were doubts about the accuracy of physicians’ predictions and the need for surrogates to see futility themselves. Previously described sources of conflict included a misunderstanding about prognosis and religious-based objections. Surrogates with religious objections were more likely to request continuation of life-sustaining treatments than those with secular or experiential objections (OR 4; 95% CI 1.2-14.0; P=0.03). Nearly a third (32%) of surrogates elected to continue life support with a <1% survival estimate; 18% elected to continue life support when physicians thought there was no chance of survival.
This study has several limitations: a small sample size, the use of hypothetical situations, and the inability to assess attitudes as they change over time.
Bottom line: Understanding the nature of surrogate decision-makers’ doubts about medical futility can help predict whether they will accept physicians’ futility predictions.
Citation: Zier LS, Burack JH, Micco G, Chipman AK, Frank JA, White DB. Surrogate decision makers’ responses to physicians’ predictions of medical futility. Chest. 2009;136:110-117. TH
In This Edition
Literature at a Glance
A guide to this month’s studies
- CPOE and quality outcomes
- Outcomes of standardized management of endocarditis
- Effect of tPA three to 4.5 hours after stroke onset
- Failure to notify patients of significant test results
- PFO repair and stroke rate
- Predictors of delay in defibrillation for in-hospital arrest
- H. pylori eradication and risk of future gastric cancer
- Bleeding risk with fondaparinux vs. enoxaparin in ACS
- Perceptions of physician ability to predict medical futility
CPOE Is Associated with Improvement in Quality Measures
Clinical question: Is computerized physician order entry (CPOE) associated with improved outcomes across a large, nationally representative sample of hospitals?
Background: Several single-institution studies suggest CPOE leads to better outcomes in quality measures for heart failure, acute myocardial infarction, and pneumonia as defined by the Hospital Quality Alliance (HQA) initiative, led by the Centers for Medicare and Medicaid Services (CMS). Little systematic information is known about the effects of CPOE on quality of care.
Study design: Cross-sectional study.
Setting: The Health Information Management System Society (HIMSS) analytics database of 3,364 hospitals throughout the U.S.
Synopsis: Of the hospitals that reported CPOE utilization to HIMSS, 264 (7.8%) fully implement CPOE throughout their institutions. These CPOE hospitals outperformed their peers on five of 11 quality measures related to ordering medications, and in one of nine non-medication-related measures. No difference was noted in the other measures, except CPOE hospitals were less effective at providing antibiotics within four hours of pneumonia diagnosis. Hospitals that utilized CPOE were generally academic, larger, and nonprofit. After adjusting for these differences, benefits were still preserved.
The authors indicate that the lack of systematic outperformance by CPOE hospitals in all 20 of the quality categories inherently suggests that other factors (e.g., concomitant QI efforts) are not affecting these results. Given the observational nature of this study, no causal relationship can be established between CPOE and the observed benefits. CPOE might represent the commitment of certain hospitals to quality measures, but further study is needed.
Bottom line: Enhanced compliance in several CMS-established quality measures is seen in hospitals that utilize CPOE throughout their institutions.
Citation: Yu FB, Menachemi N, Berner ES, Allison JJ, Weissman NW, Houston TK. Full implementation of computerized physician order entry and medication-related quality outcomes: a study of 3,364 hospitals. Am J Med Qual. 2009;24(4):278-286.
Standardized Management of Endocarditis Leads to Significant Mortality Benefit
Clinical question: Does a standardized approach to the treatment of infective endocarditis reduce mortality and morbidity?
Background: Despite epidemiological changes to the inciting bacteria and improvements in available antibiotics, mortality and morbidity associated with endocarditis remain high. The contribution of inconsistent or inaccurate treatment of endocarditis is unclear.
Study design: Case series with historical controls from 1994 to 2001, compared with protocolized patients from 2002 to 2006.
Setting: Single teaching tertiary-care hospital in France.
Synopsis: The authors established a diagnostic protocol for infectious endocarditis from 1994 to 2001 (period 1) and established a treatment protocol from 2002 to 2006 (period 2). Despite a statistically significant sicker population (older, higher comorbidities, higher coagulase-negative staphylococcal infections, and fewer healthy valves), the period-2 patients had a dramatically lower mortality rate of 8.2% (P<0.001), compared with 18.5% in period-1 patients. Fewer episodes of renal failure, organ failure, and deaths associated with embolism were noted in period 2.
Whether these results are due to more frequent care, more aggressive care (patients were “summoned” if they did not show for appointments), standardized medication and surgical options, or the effects of long-term collaboration, these results appear durable, remarkable, and reproducible.
This study is limited by its lack of randomization and extensive time frame, with concomitant changes in medical treatment and observed infectious organisms.
Bottom line: Implementation of a standardized approach to endocarditis has significant benefit on mortality and morbidity.
Citation: Botelho-Nevers E, Thuny F, Casalta JP, et al. Dramatic reduction in infective endocarditis-related mortality with a management-based approach. Arch Intern Med. 2009;169(14):1290-1298.
Treatment with tPA in the Three- to 4.5-Hour Time Window after Stroke Is Beneficial
Clinical question: What is the effect of tissue plasminogen activator (tPA) on outcomes in patients treated in the three- to 4.5-hour window after stroke?
Background: The third European Cooperative Acute Stroke Study 3 (ECASS-3) demonstrated benefit of treatment of acute stroke with tPA in the three- to 4.5-hour time window. Prior studies, however, did not show superiority of tPA over placebo, and there is a lack of a confirmatory randomized, controlled trial of tPA in this time frame.
Study design: Meta-analysis of randomized, controlled trials.
Setting: Four studies involving 1,622 patients who were treated with intravenous tPA for acute ischemic stroke from three to 4.5 hours after stroke compared with placebo.
Synopsis: Of the randomized, controlled trials of intravenous tPA for treatment of acute ischemic stroke from three to 4.5 hours after stroke, four trials (ECASS-1, ECASS-2, ECASS-3, and ATLANTIS) were included in the analysis. Treatment with tPA in the three- to 4.5-hour time window is associated with increased favorable outcomes based on the global outcome measure (OR 1.31; 95% CI: 1.10-1.56, P=0.002) and the modified Rankin Scale (OR 1.31; 95% CI: 1.07-1.59, P=0.01), compared with placebo. The 90-day mortality rate was not significantly different between the treatment and placebo groups (OR 1.04; 95% CI 0.75-1.43, P=0.83).
Due to the relatively high dose of tPA (1.1 mg/kg) administered in the ECASS-1 trial, a separate meta-analysis looking at the other three trials (tPA dose of 0.9 mg/kg) was conducted, and the favorable outcome with tPA remained.
Bottom line: Treatment of acute ischemic stroke with tPA in the three- to 4.5-hour time window results in an increased rate of favorable functional outcomes without a significant difference in mortality.
Citation: Lansberg MG, Bluhmki E, Thijs VN. Efficacy and safety of tissue plasminogen activator 3 to 4.5 hours after acute ischemic stroke: a metaanalysis. Stroke. 2009;40(7):2438-2441.
Outpatients Often Are Not Notified of Clinically Significant Test Results
Clinical question: How frequently do primary-care physicians (PCPs) fail to inform patients of clinically significant outpatient test results?
Background: Diagnostic errors are the most common cause of malpractice claims in the U.S. It is unclear how often providers fail to either inform patients of abnormal test results or document that patients have been notified.
Study design: Retrospective chart review.
Setting: Twenty-three primary-care practices: 19 private, four academic.
Synopsis: More than 5,400 charts were reviewed, and 1,889 abnormal test results were identified in this study. Failure to inform or document notification was identified in 135 cases (7.1%). The failure rates in the practices ranged from 0.0% to 26.2%. Practices with the best processes for managing test results had the lowest failure rates; these processes included: all results being routed to the responsible physician; the physician signing off on all results; the practice informing patients of all results, both normal and abnormal; documenting when the patient is informed; and instructing patients to call if not notified of test results within a certain time interval.
Limitations of this study include the potential of over- or underreporting of failures to inform as a chart review was used, and only practices that agreed to participate were included.
Bottom line: Failure to notify outpatients of test results is common but can be minimized by creating a systematic management of test results that include best practices.
Citation: Casalino LP, Dunham D, Chin MH, et al. Frequency of failure to inform patients of clinically significant outpatient test results. Arch Intern Med. 2009;169(12):1123-1129.
Repair of Incidental PFO Discovered During Cardiothoracic Surgery Repair Increases Postoperative Stroke Risk
Clinical question: What is the impact of closing incidentally discovered patent foramen ovale (PFO) defects during cardiothoracic surgery?
Background: PFO’s role in cryptogenic stroke remains controversial. Incidental PFO is commonly detected by transesophageal echocardiography (TEE) during cardiothoracic surgery. Routine PFO closure has been recommended when almost no alteration of the surgical plan is required.
Study design: Retrospective chart review.
Setting: The Cleveland Clinic.
Synopsis: Between 1995 and 2006, 13,092 patients undergoing cardiothoracic surgery had TEE data with no previous diagnosis of PFO, but the review found that 2,277 (17%) had PFO discovered intraoperatively. Of these, 639 (28%) had the PFO repaired.
Patients with an intraoperative diagnosis of PFO had similar rates of in-hospital stroke and hospital death compared with those without PFO. Patients who had their PFO repaired had a greater in-hospital stroke risk (2.8% vs. 1.2%; P=0.04) compared with those with a non-repaired PFO, representing nearly 2.5 times greater odds of having an in-hospital stroke. No other difference was noted in perioperative outcomes for patients who underwent intraoperative repair compared with those who did not, including risk of in-hospital death, hospital length of stay, ICU length of stay, and time on cardiopulmonary bypass. Long-term analysis demonstrated that PFO repair was associated with no survival difference.
The study is limited by its retrospective nature.
Bottom line: Routine surgical closure of incidental PFO detected during intraoperative imaging should be discouraged.
Citation: Krasuski RA, Hart SA, Allen D, et al. Prevalence and repair of interoperatively diagnosed patent foramen ovale and association with perioperative outcomes and long-term survival. JAMA. 2009;302(3):290-297.
Hospital-Level Differences Are Strong Predictors of Time to Defibrillation Delay In Cardiac Arrest
Clinical question: What are the predictors of delay in the time to defibrillation after in-hospital cardiac arrest?
Background: Thirty percent of in-hospital cardiac arrests from ventricular arrhythmias are not treated within the American Heart Association’s recommendation of two minutes. This delay is associated with a 50% lower rate of in-hospital survival. Exploring the hospital-level variation in delays to defibrillation is a critical step toward sharing the best practices.
Study design: Retrospective review of registry data.
Setting: The National Registry of Cardiopulmonary Resuscitation (NRCPR) survey of 200 acute-care, nonpediatric hospitals.
Synopsis: The registry identified 7,479 patients who experienced cardiac arrest from ventricular tachycardia or pulseless ventricular fibrillation. The primary outcome was the hospital rate of delayed defibrillation (time to defibrillation > two minutes), which ranged from 2% to 51%.
Time to defibrillation was found to be a major predictor of survival after a cardiac arrest. Only bed volume and arrest location were associated with differences in rates of delayed defibrillation (lower rates in larger hospitals and in ICUs). The variability was not due to differences in patient characteristics, but was due to hospital-level effects. Academic status, geographical location, arrest volume, and daily admission volume did not affect the time to defibrillation.
The study was able to identify only a few facility characteristics that account for the variability between hospitals in the rate of delayed defibrillation. The study emphasizes the need for new approaches to identifying hospital innovations in process-of-care measures that are associated with improved performance in defibrillation times.
Bottom Line: Future research is needed to better understand the reason for the wide variation between hospitals in the rate of delayed defibrillation after in-hospital cardiac arrest.
Citation: Chan PS, Nichol G, Krumholz HM, Spertus JA, Nallamothu BK; American Heart Association National Registry of Cardiopulmonary Resuscitation (NRCPR) Investigators. Hospital variation in time to defibrillation after in-hospital cardiac arrest. Arch Intern Med. 2009;169(14):1265-1273.
Treating for H. Pylori Reduces the Risk for Developing Gastric Cancer in High-Risk Patients
Clinical question: In patients with high-baseline incidence of gastric cancer, does H. pylori eradication reduce the risk for developing gastric cancer?
Background: Gastric cancer remains a major health problem in Asia. The link of H. pylori and gastric cancer has been established, but it remains unclear whether treatment for H. pylori is effective primary prevention for the development of gastric cancer.
Study design: Meta-analysis of six studies.
Setting: All but one trial was performed in Asia.
Synopsis: Seven studies met inclusion criteria, one of which was excluded due to heterogeneity. The six remaining studies were pooled, with 37 of 3,388 (1.1%) treated patients developing a new gastric cancer, compared with 56 of 3,307 (1.7%) patients who received placebo or were in the control group (RR 0.65; 0.43-0.98). Most patients received gastric biopsy prior to enrollment, and most of those demonstrated gastric atrophy or intestinal metaplasia.
These patients, despite more advanced precancerous pathology findings, still benefited from eradication. The seventh study, which was excluded, enrolled patients with early gastric cancer; these patients still benefited from H. pylori eradication and, when included in the meta-analysis, the RR was even lower, 0.57 (0.49-0.81).
Only two trials were double-blinded, but all of the studies employed the same definition of gastric cancer and held to excellent data reporting standards. This study encourages screening and treatment in high-risk patients given the widespread incidence of H. pylori.
Bottom Line: Treatment of H. pylori reduces the risk of gastric cancer in high-risk patients.
Citation: Fuccio L, Zagari RM, Eusebi LH, et al. Meta-analysis: can Helicobacter pylori eradication treatment reduce the risk for gastric cancer? Ann Intern Med. 2009;151(2):121-128.
Patients on Anti-Platelet Agents with Acute Coronary Syndrome Have a Lower Bleeding Risk When Treated with Fondaparinux
Clinical question: Is there a difference in bleeding risk with fondaparinux and enoxaparin when used with GPIIb/IIIa inhibitors or thienopyridines in NSTEMI-ACS?
Background: The OASIS 5 study reported a 50% reduction in severe bleeding when comparing fondaparinux to enoxaparin in ACS while maintaining a similar efficacy. This subgroup analysis was performed to evaluate whether reduced bleeding risk with fondaparinux remains in patients treated with additional anti-platelet agents.
Study design: Subgroup analysis of a large, multicenter, randomized, double-blind trial.
Setting: Acute-care hospitals in North America, Eastern and Western Europe, Latin America, Australia, and Asia.
Synopsis: Patients with NSTE-ACS received either fondaparinux or enoxaparin and were treated with GPIIb/IIIa inhibitors or thienopyridines at the discretion of their physician. At 30 days, the fondaparinux group had similar efficacy and decreased bleeding risk in both the GPIIb/IIIa and the thienopyridine groups. Of the 3,630 patients in the GPIIb/IIIa group, the risk for major bleeding with fondaparinux was 5.2%, whereas the risk with enoxaparin was 8.3% (HR 0.61; P<0.001) compared with enoxaparin. Of the 1,352 patients treated with thienopyridines, the risk for major bleeding with fondaparinux was 3.4%, whereas the risk with enoxaparin was 5.4% (HR 0.62; P<0.001).
Bottom Line: This subgroup analysis suggests there are less-severe bleeding complications in patients treated with fondaparinux when compared with enoxaparin in the setting of cotreatment with GPIIb/IIIa inhibitors, thienopyridines, or both.
Citation: Jolly SS, Faxon DP, Fox KA, et al. Efficacy and safety of fondaparinux versus enoxaparin in patients with acute coronary syndromes treated with glycoprotein IIb/IIIa inhibitors of thienopyridines: results from the OASIS 5 (Fifth Organization to Assess Strategies in Ischemic Syndromes) trial. J Am Coll Cardiol. 2009;54(5):468-476.
Surrogate Decision-Makers Frequently Doubt Clinicians’ Ability to Predict Medical Futility
Clinical question: What attitudes do surrogate decision-makers hold toward clinicians’ predictions of medical futility in critically-ill patients?
Background: The clinical judgment of medical futility leading to the withdrawal of life-sustaining treatment—despite the objections of surrogate decision-makers—is controversial. Very little is known about how surrogate decision-makers view the futility rationale when physicians suggest limiting the use of life-sustaining treatment.
Study design: Multicenter, mixed, qualitative and quantitative study.
Setting: Three ICUs in three different California hospitals from 2006 to 2007.
Synopsis: Semi-structured interviews of surrogate decision-makers for 50 incapacitated, critically-ill patients were performed to ascertain their beliefs about medical futility in response to hypothetical situations. Of the surrogates surveyed, 64% expressed doubt about physicians’ futility predictions.
The interviewees gave four main reasons for their doubts. Two reasons not previously described were doubts about the accuracy of physicians’ predictions and the need for surrogates to see futility themselves. Previously described sources of conflict included a misunderstanding about prognosis and religious-based objections. Surrogates with religious objections were more likely to request continuation of life-sustaining treatments than those with secular or experiential objections (OR 4; 95% CI 1.2-14.0; P=0.03). Nearly a third (32%) of surrogates elected to continue life support with a <1% survival estimate; 18% elected to continue life support when physicians thought there was no chance of survival.
This study has several limitations: a small sample size, the use of hypothetical situations, and the inability to assess attitudes as they change over time.
Bottom line: The nature of surrogate decision-makers’ doubts about medical futility can help predict whether they accept predictions of medical futility from physicians.
Citation: Zier LS, Burack JH, Micco G, Chipman AK, Frank JA, White DB. Surrogate decision makers’ responses to physicians’ predictions of medical futility. Chest. 2009;136:110-117. TH
In This Edition
Literature at a Glance
A guide to this month’s studies
- CPOE and quality outcomes
- Outcomes of standardized management of endocarditis
- Effect of tPA three to 4.5 hours after stroke onset
- Failure to notify patients of significant test results
- PFO repair and stroke rate
- Predictors of delay in defibrillation for in-hospital arrest
- H. pylori eradication and risk of future gastric cancer
- Bleeding risk with fondaparinux vs. enoxaparin in ACS
- Perceptions of physician ability to predict medical futility
CPOE Is Associated with Improvement in Quality Measures
Clinical question: Is computerized physician order entry (CPOE) associated with improved outcomes across a large, nationally representative sample of hospitals?
Background: Several single-institution studies suggest CPOE leads to better outcomes in quality measures for heart failure, acute myocardial infarction, and pneumonia as defined by the Hospital Quality Alliance (HQA) initiative, led by the Centers for Medicare and Medicaid Services (CMS). Little systematic information is known about the effects of CPOE on quality of care.
Study design: Cross-sectional study.
Setting: The Health Information Management System Society (HIMSS) analytics database of 3,364 hospitals throughout the U.S.
Synopsis: Of the hospitals that reported CPOE utilization to HIMSS, 264 (7.8%) fully implement CPOE throughout their institutions. These CPOE hospitals outperformed their peers on five of 11 quality measures related to ordering medications, and in one of nine non-medication-related measures. No difference was noted in the other measures, except CPOE hospitals were less effective at providing antibiotics within four hours of pneumonia diagnosis. Hospitals that utilized CPOE were generally academic, larger, and nonprofit. After adjusting for these differences, benefits were still preserved.
The authors indicate that the lack of systematic outperformance by CPOE hospitals in all 20 of the quality categories inherently suggests that other factors (e.g., concomitant QI efforts) are not affecting these results. Given the observational nature of this study, no causal relationship can be established between CPOE and the observed benefits. CPOE might represent the commitment of certain hospitals to quality measures, but further study is needed.
Bottom line: Enhanced compliance in several CMS-established quality measures is seen in hospitals that utilize CPOE throughout their institutions.
Citation: Yu FB, Menachemi N, Berner ES, Allison JJ, Weissman NW, Houston TK. Full implementation of computerized physician order entry and medication-related quality outcomes: a study of 3,364 hospitals. Am J Med Qual. 2009;24(4):278-286.
Standardized Management of Endocarditis Leads to Significant Mortality Benefit
Clinical question: Does a standardized approach to the treatment of infective endocarditis reduce mortality and morbidity?
Background: Despite epidemiological changes to the inciting bacteria and improvements in available antibiotics, mortality and morbidity associated with endocarditis remain high. The contribution of inconsistent or inaccurate treatment of endocarditis is unclear.
Study design: Case series with historical controls from 1994 to 2001, compared with protocolized patients from 2002 to 2006.
Setting: Single teaching tertiary-care hospital in France.
Synopsis: The authors established a diagnostic protocol for infectious endocarditis from 1994 to 2001 (period 1) and established a treatment protocol from 2002 to 2006 (period 2). Despite a statistically significant sicker population (older, higher comorbidities, higher coagulase-negative staphylococcal infections, and fewer healthy valves), the period-2 patients had a dramatically lower mortality rate of 8.2% (P<0.001), compared with 18.5% in period-1 patients. Fewer episodes of renal failure, organ failure, and deaths associated with embolism were noted in period 2.
Whether these results are due to more frequent care, more aggressive care (patients were “summoned” if they did not show for appointments), standardized medication and surgical options, or the effects of long-term collaboration, these results appear durable, remarkable, and reproducible.
This study is limited by its lack of randomization and extensive time frame, with concomitant changes in medical treatment and observed infectious organisms.
Bottom line: Implementation of a standardized approach to endocarditis has significant benefit on mortality and morbidity.
Citation: Botelho-Nevers E, Thuny F, Casalta JP, et al. Dramatic reduction in infective endocarditis-related mortality with a management-based approach. Arch Intern Med. 2009;169(14):1290-1298.
Treatment with tPA in the Three- to 4.5-Hour Time Window after Stroke Is Beneficial
Clinical question: What is the effect of tissue plasminogen activator (tPA) on outcomes in patients treated in the three- to 4.5-hour window after stroke?
Background: The third European Cooperative Acute Stroke Study 3 (ECASS-3) demonstrated benefit of treatment of acute stroke with tPA in the three- to 4.5-hour time window. Prior studies, however, did not show superiority of tPA over placebo, and there is a lack of a confirmatory randomized, controlled trial of tPA in this time frame.
Study design: Meta-analysis of randomized, controlled trials.
Setting: Four studies involving 1,622 patients who were treated with intravenous tPA for acute ischemic stroke from three to 4.5 hours after stroke compared with placebo.
Synopsis: Of the randomized, controlled trials of intravenous tPA for treatment of acute ischemic stroke from three to 4.5 hours after stroke, four trials (ECASS-1, ECASS-2, ECASS-3, and ATLANTIS) were included in the analysis. Treatment with tPA in the three- to 4.5-hour time window is associated with increased favorable outcomes based on the global outcome measure (OR 1.31; 95% CI: 1.10-1.56, P=0.002) and the modified Rankin Scale (OR 1.31; 95% CI: 1.07-1.59, P=0.01), compared with placebo. The 90-day mortality rate was not significantly different between the treatment and placebo groups (OR 1.04; 95% CI 0.75-1.43, P=0.83).
Due to the relatively high dose of tPA (1.1 mg/kg) administered in the ECASS-1 trial, a separate meta-analysis looking at the other three trials (tPA dose of 0.9 mg/kg) was conducted, and the favorable outcome with tPA remained.
Bottom line: Treatment of acute ischemic stroke with tPA in the three- to 4.5-hour time window results in an increased rate of favorable functional outcomes without a significant difference in mortality.
Citation: Lansberg MG, Bluhmki E, Thijs VN. Efficacy and safety of tissue plasminogen activator 3 to 4.5 hours after acute ischemic stroke: a metaanalysis. Stroke. 2009;40(7):2438-2441.
Outpatients Often Are Not Notified of Clinically Significant Test Results
Clinical question: How frequently do primary-care physicians (PCPs) fail to inform patients of clinically significant outpatient test results?
Background: Diagnostic errors are the most common cause of malpractice claims in the U.S. It is unclear how often providers fail to either inform patients of abnormal test results or document that patients have been notified.
Study design: Retrospective chart review.
Setting: Twenty-three primary-care practices: 19 private, four academic.
Synopsis: More than 5,400 charts were reviewed, and 1,889 abnormal test results were identified in this study. Failure to inform or document notification was identified in 135 cases (7.1%). The failure rates in the practices ranged from 0.0% to 26.2%. Practices with the best processes for managing test results had the lowest failure rates; these processes included: all results being routed to the responsible physician; the physician signing off on all results; the practice informing patients of all results, both normal and abnormal; documenting when the patient is informed; and instructing patients to call if not notified of test results within a certain time interval.
Limitations of this study include the potential of over- or underreporting of failures to inform as a chart review was used, and only practices that agreed to participate were included.
Bottom line: Failure to notify outpatients of test results is common but can be minimized by creating a systematic management of test results that include best practices.
Citation: Casalino LP, Dunham D, Chin MH, et al. Frequency of failure to inform patients of clinically significant outpatient test results. Arch Intern Med. 2009;169(12):1123-1129.
Repair of Incidental PFO Discovered During Cardiothoracic Surgery Repair Increases Postoperative Stroke Risk
Clinical question: What is the impact of closing incidentally discovered patent foramen ovale (PFO) defects during cardiothoracic surgery?
Background: PFO’s role in cryptogenic stroke remains controversial. Incidental PFO is commonly detected by transesophageal echocardiography (TEE) during cardiothoracic surgery. Routine PFO closure has been recommended when almost no alteration of the surgical plan is required.
Study design: Retrospective chart review.
Setting: The Cleveland Clinic.
Synopsis: Between 1995 and 2006, 13,092 patients undergoing cardiothoracic surgery had TEE data with no previous diagnosis of PFO, but the review found that 2,277 (17%) had PFO discovered intraoperatively. Of these, 639 (28%) had the PFO repaired.
Patients with an intraoperative diagnosis of PFO had similar rates of in-hospital stroke and hospital death compared with those without PFO. Patients who had their PFO repaired had a greater in-hospital stroke risk (2.8% vs. 1.2%; P=0.04) compared with those with a non-repaired PFO, representing nearly 2.5 times greater odds of having an in-hospital stroke. No other difference was noted in perioperative outcomes for patients who underwent intraoperative repair compared with those who did not, including risk of in-hospital death, hospital length of stay, ICU length of stay, and time on cardiopulmonary bypass. Long-term analysis demonstrated that PFO repair was associated with no survival difference.
The study is limited by its retrospective nature.
Bottom line: Routine surgical closure of incidental PFO detected during intraoperative imaging should be discouraged.
Citation: Krasuski RA, Hart SA, Allen D, et al. Prevalence and repair of interoperatively diagnosed patent foramen ovale and association with perioperative outcomes and long-term survival. JAMA. 2009;302(3):290-297.
Hospital-Level Differences Are Strong Predictors of Time to Defibrillation Delay In Cardiac Arrest
Clinical question: What are the predictors of delay in the time to defibrillation after in-hospital cardiac arrest?
Background: Thirty percent of in-hospital cardiac arrests from ventricular arrhythmias are not treated within the American Heart Association’s recommendation of two minutes. This delay is associated with a 50% lower rate of in-hospital survival. Exploring the hospital-level variation in delays to defibrillation is a critical step toward sharing the best practices.
Study design: Retrospective review of registry data.
Setting: The National Registry of Cardiopulmonary Resuscitation (NRCPR) survey of 200 acute-care, nonpediatric hospitals.
Synopsis: The registry identified 7,479 patients who experienced cardiac arrest from ventricular tachycardia or pulseless ventricular fibrillation. The primary outcome was the hospital rate of delayed defibrillation (time to defibrillation > two minutes), which ranged from 2% to 51%.
Time to defibrillation was found to be a major predictor of survival after a cardiac arrest. Only bed volume and arrest location were associated with differences in rates of delayed defibrillation (lower rates in larger hospitals and in ICUs). The variability was not due to differences in patient characteristics, but was due to hospital-level effects. Academic status, geographical location, arrest volume, and daily admission volume did not affect the time to defibrillation.
The study was able to identify only a few facility characteristics that account for the variability between hospitals in the rate of delayed defibrillation. The study emphasizes the need for new approaches to identifying hospital innovations in process-of-care measures that are associated with improved performance in defibrillation times.
Bottom Line: Future research is needed to better understand the reason for the wide variation between hospitals in the rate of delayed defibrillation after in-hospital cardiac arrest.
Citation: Chan PS, Nichol G, Krumholz HM, Spertus JA, Nallamothu BK; American Heart Association National Registry of Cardiopulmonary Resuscitation (NRCPR) Investigators. Hospital variation in time to defibrillation after in-hospital cardiac arrest. Arch Intern Med. 2009;169(14):1265-1273.
Treating for H. Pylori Reduces the Risk for Developing Gastric Cancer in High-Risk Patients
Clinical question: In patients with high-baseline incidence of gastric cancer, does H. pylori eradication reduce the risk for developing gastric cancer?
Background: Gastric cancer remains a major health problem in Asia. The link of H. pylori and gastric cancer has been established, but it remains unclear whether treatment for H. pylori is effective primary prevention for the development of gastric cancer.
Study design: Meta-analysis of six studies.
Setting: All but one trial was performed in Asia.
Synopsis: Seven studies met inclusion criteria, one of which was excluded due to heterogeneity. The six remaining studies were pooled, with 37 of 3,388 (1.1%) treated patients developing a new gastric cancer, compared with 56 of 3,307 (1.7%) patients who received placebo or were in the control group (RR 0.65; 0.43-0.98). Most patients received gastric biopsy prior to enrollment, and most of those demonstrated gastric atrophy or intestinal metaplasia.
These patients, despite more advanced precancerous pathology findings, still benefited from eradication. The seventh study, which was excluded, enrolled patients with early gastric cancer; these patients still benefited from H. pylori eradication and, when included in the meta-analysis, the RR was even lower, 0.57 (0.49-0.81).
Only two trials were double-blinded, but all of the studies employed the same definition of gastric cancer and held to excellent data reporting standards. This study encourages screening and treatment in high-risk patients given the widespread incidence of H. pylori.
Bottom Line: Treatment of H. pylori reduces the risk of gastric cancer in high-risk patients.
Citation: Fuccio L, Zagari RM, Eusebi LH, et al. Meta-analysis: can Helicobacter pylori eradication treatment reduce the risk for gastric cancer? Ann Intern Med. 2009;151(2):121-128.
Patients on Anti-Platelet Agents with Acute Coronary Syndrome Have a Lower Bleeding Risk When Treated with Fondaparinux
Clinical question: Is there a difference in bleeding risk with fondaparinux and enoxaparin when used with GPIIb/IIIa inhibitors or thienopyridines in NSTEMI-ACS?
Background: The OASIS 5 study reported a 50% reduction in severe bleeding when comparing fondaparinux to enoxaparin in ACS while maintaining a similar efficacy. This subgroup analysis was performed to evaluate whether reduced bleeding risk with fondaparinux remains in patients treated with additional anti-platelet agents.
Study design: Subgroup analysis of a large, multicenter, randomized, double-blind trial.
Setting: Acute-care hospitals in North America, Eastern and Western Europe, Latin America, Australia, and Asia.
Synopsis: Patients with NSTE-ACS received either fondaparinux or enoxaparin and were treated with GPIIb/IIIa inhibitors or thienopyridines at the discretion of their physician. At 30 days, the fondaparinux group had similar efficacy and decreased bleeding risk in both the GPIIb/IIIa and the thienopyridine groups. Of the 3,630 patients in the GPIIb/IIIa group, the risk for major bleeding with fondaparinux was 5.2%, whereas the risk with enoxaparin was 8.3% (HR 0.61; P<0.001) compared with enoxaparin. Of the 1,352 patients treated with thienopyridines, the risk for major bleeding with fondaparinux was 3.4%, whereas the risk with enoxaparin was 5.4% (HR 0.62; P<0.001).
Bottom Line: This subgroup analysis suggests there are less-severe bleeding complications in patients treated with fondaparinux when compared with enoxaparin in the setting of cotreatment with GPIIb/IIIa inhibitors, thienopyridines, or both.
Citation: Jolly SS, Faxon DP, Fox KA, et al. Efficacy and safety of fondaparinux versus enoxaparin in patients with acute coronary syndromes treated with glycoprotein IIb/IIIa inhibitors of thienopyridines: results from the OASIS 5 (Fifth Organization to Assess Strategies in Ischemic Syndromes) trial. J Am Coll Cardiol. 2009;54(5):468-476.
Surrogate Decision-Makers Frequently Doubt Clinicians’ Ability to Predict Medical Futility
Clinical question: What attitudes do surrogate decision-makers hold toward clinicians’ predictions of medical futility in critically-ill patients?
Background: The clinical judgment of medical futility leading to the withdrawal of life-sustaining treatment—despite the objections of surrogate decision-makers—is controversial. Very little is known about how surrogate decision-makers view the futility rationale when physicians suggest limiting the use of life-sustaining treatment.
Study design: Multicenter, mixed, qualitative and quantitative study.
Setting: Three ICUs in three different California hospitals from 2006 to 2007.
Synopsis: Semi-structured interviews of surrogate decision-makers for 50 incapacitated, critically-ill patients were performed to ascertain their beliefs about medical futility in response to hypothetical situations. Of the surrogates surveyed, 64% expressed doubt about physicians’ futility predictions.
The interviewees gave four main reasons for their doubts. Two reasons not previously described were doubts about the accuracy of physicians’ predictions and the need for surrogates to see futility themselves. Previously described sources of conflict included a misunderstanding about prognosis and religious-based objections. Surrogates with religious objections were more likely to request continuation of life-sustaining treatments than those with secular or experiential objections (OR 4; 95% CI 1.2-14.0; P=0.03). Nearly a third (32%) of surrogates elected to continue life support with a <1% survival estimate; 18% elected to continue life support when physicians thought there was no chance of survival.
This study has several limitations: a small sample size, the use of hypothetical situations, and the inability to assess attitudes as they change over time.
Bottom line: The nature of surrogate decision-makers’ doubts about medical futility may help predict whether they will accept physicians’ predictions of futility.
Citation: Zier LS, Burack JH, Micco G, Chipman AK, Frank JA, White DB. Surrogate decision makers’ responses to physicians’ predictions of medical futility. Chest. 2009;136:110-117. TH
In the Literature
Literature at a Glance
A guide to this month’s abstracts
- Steroids reduce mortality only in patients with confirmed bacterial meningitis.
- Probiotics can be useful in the treatment of acute diarrhea in children.
- CT pulmonary angiography is not inferior to V/Q scanning for exclusion of PE.
- Hospitalist care results in shorter LOS compared with care by traditional general internists and family practice physicians.
- The early risk of stroke after TIA is approximately 15% to 20% at 90 days after the sentinel event.
- Different anti-thrombotic strategies produce no difference in one-year outcomes of acute coronary syndromes managed with an early invasive strategy.
- The risk of fatal PE is highest in the first year after medication is stopped.
- Beers criteria medications are associated with fewer ED visits by elderly patients compared with warfarin, digoxin, and insulin.
Do Steroids Affect the Outcome in Patients with Meningitis?
Background: Pyogenic (bacterial) meningitis has high morbidity and mortality. Studies suggest some benefit of steroids in children but provide limited evidence for adult use.
Study design: Randomized, placebo-controlled trial analyzed by intention to treat.
Setting: Single hospital in Vietnam.
Synopsis: All 435 patients older than 14 years with suspected meningitis underwent lumbar puncture and were randomized to four days of IV dexamethasone or placebo. Based on culture results, 69% of patients had definite bacterial meningitis, 28.3% had probable meningitis, and 2.8% had an alternative diagnosis.
The primary outcome, death at one month, did not differ between groups (risk ratio [RR] 0.79; confidence interval [CI], 0.45-1.39).
Predefined subgroup analysis of patients with definite meningitis showed a significant reduction in mortality at one month (RR 0.43; CI, 0.2-0.94) and in death or disability at six months (odds ratio [OR] 0.56; CI, 0.32-0.98).
In patients with probable meningitis, those who received steroids demonstrated a trend toward harm (OR 2.65, CI 0.73-9.63).
Probable versus definite meningitis was determined retrospectively based on cultures. The most common isolate was Streptococcus suis.
Bottom line: This study provides some evidence for using steroids in adults with confirmed bacterial meningitis. Clinical application is limited by bacterial epidemiology and the difficulty of prospectively separating patients who would benefit from those who might be harmed.
Citation: Nguyen TH, Tran TH, Thwaites G, et al. Dexamethasone in Vietnamese adolescents and adults with bacterial meningitis. N Engl J Med. 2007;357:2431-2439.
Which Probiotic Preparations Best Reduce the Duration of Acute Diarrhea in Children?
Background: Probiotics have been suggested as an adjunctive therapy to reduce the severity and duration of acute diarrhea in children. However, there are no clear data to suggest if specific probiotic agents are superior to others.
Study design: Prospective single-blind, randomized, controlled trial.
Setting: Outpatient primary care in Naples, Italy.
Synopsis: This study compared five commercially available probiotic preparations (a mixture of Lactobacillus delbrueckii var bulgaricus, Streptococcus thermophilus, L. acidophilus, and Bifidobacterium bifidum; L. rhamnosus strain GG; Saccharomyces boulardii; Bacillus clausii; or Enterococcus faecium SF68) with a control group for the treatment of outpatient acute diarrhea in 571 children aged 3 to 36 months.
The primary outcomes were the duration of diarrhea and the number and consistency of stools. Compared with the control group (total duration 115.5 hours), the groups receiving Lactobacillus GG and the probiotic mixture had a shorter total duration of diarrhea (78.5 and 70 hours, respectively), fewer total stools, and improved stool consistency. The other preparations showed no improvement over control. These data reflect products commercially available in Italy, which may differ greatly from products available locally.
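As a rough sense of scale, a minimal sketch (using only the durations reported above) expresses the benefit of the two effective preparations in days rather than hours:

```python
# Illustrative only: durations (hours) are those reported in the synopsis above.
control_hours = 115.5
effective = {"Lactobacillus GG": 78.5, "probiotic mixture": 70.0}

for name, hours in effective.items():
    reduction = control_hours - hours
    print(f"{name}: diarrhea shortened by {reduction:.1f} hours (~{reduction / 24:.1f} days)")
# Lactobacillus GG: diarrhea shortened by 37.0 hours (~1.5 days)
# probiotic mixture: diarrhea shortened by 45.5 hours (~1.9 days)
```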
Bottom line: Probiotic preparations for the treatment of acute diarrhea in children should be chosen based on effectiveness data.
Citation: Canani RB, Cirillo P, Terrin G, et al. Probiotics for treatment of acute diarrhoea in children: randomised clinical trial of five different preparations. BMJ. 2007;335:340-345.
Is CTPA a Reliable Alternative to V/Q Scan for Diagnosing PE?
Background: Computed tomography pulmonary angiogram (CTPA) has replaced ventilation/perfusion (V/Q) scanning at many hospitals as the test of choice for ruling out pulmonary embolism (PE). But limited clinical data compare CTPA with V/Q scanning in those suspected of having venous thromboembolism (VTE).
Study design: Randomized, investigator-blinded, controlled trial.
Setting: The emergency departments (ED), inpatient wards, and outpatient clinics of five academic centers.
Synopsis: In the study, 1,411 patients were enrolled from five medical centers. Of 694 patients randomized to CTPA, 133 (19.2%) were diagnosed with VTE in the initial evaluation period, while 101 of 712 patients (14.2%) receiving a V/Q scan were diagnosed with VTE.
Patients not initially diagnosed with VTE were monitored. At three-month follow-up, 0.4% of the CTPA group and 1.0% of the V/Q group had a diagnosed VTE.
The overall rate of VTE found in the initial diagnostic period was significantly greater in patients randomized to CTPA (19.2% vs. 14.2%; difference, 5.0%; 95% CI, 1.1% to 8.9%; p=0.01). This suggests that CTPA either has a higher false-positive rate or detects clinically insignificant thrombi.
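The roughly 30% figure in the bottom line below follows directly from the counts reported above; a minimal sketch of the arithmetic (illustrative only, using the published summary numbers):

```python
# Detection counts from the synopsis above (not a re-analysis of trial data).
ctpa_vte, ctpa_n = 133, 694   # VTE diagnoses / patients randomized to CTPA
vq_vte, vq_n = 101, 712       # VTE diagnoses / patients randomized to V/Q

ctpa_rate = ctpa_vte / ctpa_n                 # about 19.2%
vq_rate = vq_vte / vq_n                       # about 14.2%
absolute_diff = ctpa_rate - vq_rate           # about 5.0 percentage points
relative_increase = ctpa_rate / vq_rate - 1   # about 0.35, i.e. roughly a third more diagnoses

print(f"CTPA {ctpa_rate:.1%} vs. V/Q {vq_rate:.1%}; "
      f"absolute difference {absolute_diff:.1%}, relative increase {relative_increase:.0%}")
```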
Bottom line: CTPA was not inferior to V/Q scanning for excluding clinically meaningful PE, but CTPA diagnosed about 30% more patients with VTE than did V/Q scanning.
Citation: Anderson DR, Kahn SR, Rodger MA, et al. Computed tomographic pulmonary angiography vs. ventilation-perfusion lung scanning in patients with suspected pulmonary embolism: a randomized controlled trial. JAMA. 2007;298(23):2743-2753.
Does the Hospitalist Model Improve Length of Stay, Quality, and Cost of Care?
Background: The hospitalist model, with increased physician availability and expertise but greater discontinuity of care, is becoming more prevalent in U.S. medicine. Little is known about how this model affects patient care, and the available evidence comes from a number of small studies.
Study design: Retrospective cohort study.
Setting: 45 small to midsize, predominantly nonteaching hospitals throughout the U.S.
Synopsis: Using the Premier Healthcare Informatics database, this study examined information on 76,926 patients admitted for seven common diagnoses to one of three services: hospitalist, general internist, or family physician. Analysis showed that patients on a hospitalist service had a 0.4-day shorter length of stay (p<0.001) compared with those on a general internist or family physician service.
Costs were lower for patients cared for by hospitalists than for those cared for by family physicians ($125 less; p=0.33) or general internists ($268 less; p=0.02), although only the difference versus internists was statistically significant. There was no difference in death rate or 14-day readmission rate among the three services.
Given the retrospective design of this study, no causal relationship can be inferred. The study is further limited by its lack of physician-level data: physicians were assigned to the three groups solely on the basis of administrative data. The authors acknowledged that biases inherent to the retrospective design may account for the difference found between hospitalists and internists.
Bottom line: The hospitalist model is associated with modest improvements in length of stay as compared with traditional inpatient approaches.
Citation: Lindenauer PK, Rothberg MB, Pekow PS, et al. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357:2589-2600.
What Is the Stroke Risk Soon after TIA, and What Factors Drive the Variability of Previous Findings?
Background: Many studies have attempted to estimate the risk of stroke in the early period after a transient ischemic attack (TIA). These studies vary widely in their calculation of the estimated risk. Further, the clinical and methodological factors underlying this variability are unclear.
Study design: Systematic review and meta-analysis.
Setting: Community and hospital.
Synopsis: A search of the Cochrane review database, MEDLINE, EMBASE, CINAHL, and BIOSIS identified 694 candidate studies on initial screening, of which 11 studies published from 1973 to 2006 were included in the meta-analysis. The studies ranged in size from 62 to 2,285 patients.
The pooled estimates of stroke risk following TIA were 3.5% at two days, 8% at 30 days, and 9.2% at 90 days. However, there was significant heterogeneity for all periods considered (p<0.001).
Outcome ascertainment was identified as a major source of methodological heterogeneity. When risk of stroke at follow-up was determined by passive ascertainment (e.g., administrative documentation), the early risk of stroke was 3.1% at two days after TIA, 6.4% at 30 days, and 8.7% at 90 days. With active ascertainment (e.g., direct, personal contact with study participants), stroke risk was 9.9%, 13.4%, and 17.4% at two, 30, and 90 days, respectively.
Bottom line: Based on analysis of completed studies that included directly observed follow-up of study participants, the early risk of stroke after TIA is approximately 15% to 20% at 90 days following the sentinel event.
Citation: Wu CM, McLaughlin K, Lorenzetti DL, Hill MD, Manns BJ, Ghali WA. Early risk of stroke after transient ischemic attack. Arch Intern Med. 2007;167:2417-2422.
What Is the 1-year Ischemia and Mortality Rate for Three Anti-thrombotic Therapies for Early Invasive Management of ACS?
Background: Early interventional or surgical revascularization has improved morbidity and mortality in patients with acute coronary syndrome (ACS). The optimal anti-thrombotic regimen to reduce late ischemic and death rates has not been determined.
Study design: Prospective, open-label, randomized, controlled trial.
Setting: 450 academic and community-based institutions in 17 countries.
Synopsis: A total of 13,819 patients were enrolled between August 2003 and December 2005 and assigned to heparin plus glycoprotein (GP) IIb/IIIa inhibitors (n=4,603), bivalirudin (Angiomax) plus GP IIb/IIIa inhibitors (n=4,604), or bivalirudin monotherapy (n=4,612).
Among patients receiving GP IIb/IIIa inhibitors, a 2x2 factorial design assigned half of the heparin and bivalirudin groups to routine upstream GP IIb/IIIa inhibitor administration (4,605 patients); the other half received selective GP IIb/IIIa inhibitor administration only if PCI was indicated (4,602 patients).
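A quick consistency check of the arm sizes (a sketch based only on the enrollment numbers reported above) makes the factorial design easier to follow: the upstream-versus-selective randomization applies only to the two GP IIb/IIIa-containing arms.

```python
# Arm sizes as reported in the synopsis above (illustrative consistency check).
heparin_plus_gpi = 4603
bivalirudin_plus_gpi = 4604
bivalirudin_mono = 4612
routine_upstream_gpi = 4605
selective_gpi = 4602

# Total enrollment across the three anticoagulation arms.
assert heparin_plus_gpi + bivalirudin_plus_gpi + bivalirudin_mono == 13819

# The GP IIb/IIIa timing randomization covers only the two GPI-containing arms (9,207 patients).
assert routine_upstream_gpi + selective_gpi == heparin_plus_gpi + bivalirudin_plus_gpi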
At one year, there was no statistically significant difference in ischemia or mortality rate among the three therapy groups. No difference in ischemia rate was detected between the two GP IIb/IIIa inhibitor utilization strategies.
Since the hypotheses and the power for the one-year analysis in this trial were not prospectively determined, the results are considered to be exploratory and hypothesis generating.
Bottom line: At one year, there was no statistically significant difference in ischemia or mortality rates among the three antithrombotic regimens or between the two glycoprotein IIb/IIIa utilization strategies.
Citation: Stone GW, Ware JH, Bertrand ME, et al. Antithrombotic strategies in patients with acute coronary syndromes undergoing early invasive management: one-year results from the ACUITY trial. JAMA. 2007;298:2497-2505.
What Is the PE Risk after Discontinuing Anticoagulation in Patients with Symptomatic VTE?
Background: The natural history of patients with symptomatic VTE who have completed anticoagulation is not well understood.
Study design: Inception cohort using pooled data from a prospective cohort study and one arm of an open-label randomized trial.
Setting: Academic medical centers in Canada, Sweden, and Italy.
Synopsis: Using pooled data from two previous studies, 2,052 patients with a first diagnosis of symptomatic VTE (lower-extremity deep-vein thrombosis [DVT], PE, or both) were evaluated for fatal PE after a standard course of therapy (mean of six months) with a vitamin K antagonist.
Patients were followed for up to 120 months. Patients with prolonged immobility, active cancer, or thrombophilia were excluded, as were those with recurrent acute DVT. The investigators found a risk of fatal PE of 0.19 to 0.49 events per 100 person-years.
Secondary analysis revealed an incidence of definite or probable fatal PE of 0.35% to 0.81% within the first year after discontinuing therapy.
After the first year, the annual risk ranged from 0.15 to 0.40 events per 100 person-years. Patients of advanced age, those with idiopathic VTE, and those presenting with PE had higher rates of fatal PE.
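The study reports risk both as rates per 100 person-years and as first-year percentages. A generic conversion between the two (assuming a constant event rate; a standard approximation, not the authors' actuarial method) shows why rates well below 0.5 per 100 person-years correspond to cumulative risks well below 1% per year, consistent with the bottom line below:

```python
import math

def cumulative_risk(rate_per_100_person_years: float, years: float) -> float:
    """Cumulative event risk under a constant (exponential) hazard.
    Generic approximation for illustration; not the study's own method."""
    rate = rate_per_100_person_years / 100.0
    return 1.0 - math.exp(-rate * years)

# Endpoints of the overall fatal-PE rate range reported in the synopsis above.
for rate in (0.19, 0.49):
    risk = cumulative_risk(rate, years=1.0)
    print(f"{rate} per 100 person-years -> ~{risk:.2%} cumulative risk at 1 year")
# 0.19 per 100 person-years -> ~0.19% cumulative risk at 1 year
# 0.49 per 100 person-years -> ~0.49% cumulative risk at 1 year
```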
Bottom line: There is a real though small (less than 1%) risk of fatal PE in the first year following discontinuation of anticoagulation for the first VTE episode. The optimal course of treatment for patients with idiopathic VTE is yet to be determined.
Citation: Douketis JD, Gu CS, Schulman S, et al. The risk for fatal pulmonary embolism after discontinuing anticoagulant therapy for venous thromboembolism. Ann Intern Med. 2007;147(11):766-774.
Do the Beers Criteria Predict ED Visits Associated with Adverse Drug Events?
Background: Adverse drug events are common in the elderly. The Beers criteria are a consensus-based list of 41 medications that are considered inappropriate for use in older adults and often lead to poor outcomes.
Study design: Retrospective medical record review and data analysis.
Setting: Three nationally representative, U.S. public health surveillance systems: the National Electronic Injury Surveillance System-Cooperative Adverse Drug Event Surveillance System (NEISS-CADES), 2004-2005; the National Ambulatory Medical Care Survey (NAMCS), 2004; and National Hospital Ambulatory Medical Care Survey (NHAMCS), 2004.
Synopsis: Using data collected from ED visits at 58 hospitals in the NEISS-CADES system, this study estimated that 177,504 ED visits for adverse drug events occur annually among older adults in the United States. Only 8.8% of such visits were attributable to the 41 medications included in the Beers criteria. Three drug classes (anticoagulant and antiplatelet agents, antidiabetic agents, and narrow-therapeutic-index agents) accounted for nearly half of all such ED visits. Warfarin (17.3%), insulin (13%), and digoxin (3.2%) were the most commonly implicated medications, collectively accounting for 33% of visits (CI, 27.8% to 38.7%).
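A minimal sketch of the comparison implied by these percentages (illustrative only; the shares are the summary figures reported above):

```python
# Shares of ED visits for adverse drug events, as reported in the synopsis above.
beers_share = 0.088                        # visits attributable to the 41 Beers-criteria drugs
big_three_share = 0.173 + 0.130 + 0.032    # warfarin + insulin + digoxin, about 0.335

print(f"Warfarin, insulin, and digoxin together account for {big_three_share:.1%} of visits, "
      f"about {big_three_share / beers_share:.1f}x the Beers-criteria share ({beers_share:.1%}).")
# about 33.5% of visits, roughly 3.8x the 8.8% Beers-criteria share
```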
This study suggests that, because of the common use and high risk of adverse events associated with these three drugs, interventions targeting their use may prevent more ED visits for adverse drug events in the elderly than interventions aimed at reducing the use of medications identified in the Beers criteria.
This study included only adverse drug events identified in the ED and relied on the ED physician’s diagnosis and documentation of such events.
Bottom line: Beers criteria medications, although considered inappropriate for use in the elderly, were associated with significantly fewer ED visits for adverse events compared with warfarin, digoxin, and insulin.
Citation: Budnitz DS, Shehab N, Kegler SR, et al. Medication use leading to emergency department visits for adverse drug events in older adults. Ann Intern Med. 2007;147:755-765. TH