ITL: Physician Reviews of HM-Relevant Research

Clinical question: Is the risk of recurrence of Clostridium difficile infection (CDI) increased by the use of “non-CDI” antimicrobial agents (inactive against C. diff) during or after CDI therapy?

Background: Recurrence of CDI is expected to increase with use of non-CDI antimicrobials. Previous studies have not distinguished between the timing of non-CDI agents during and after CDI treatment, nor examined the effect of frequency, duration, or type of non-CDI antibiotic therapy.

Study design: Retrospective cohort.

Setting: Academic Veterans Affairs medical center.

Synopsis: All patients with CDI over a three-year period were evaluated to determine the association between non-CDI antimicrobial use during or within 30 days following CDI therapy and 90-day CDI recurrence. Of 246 patients, 57% received concurrent or subsequent non-CDI antimicrobials. CDI recurred in 40% of patients who received non-CDI antimicrobials and in 16% of those who did not (OR: 3.5, 95% CI: 1.9 to 6.5).

After multivariable adjustment (including age, duration of CDI treatment, comorbidity, hospital and ICU admission, and gastric acid suppression), those who received non-CDI antimicrobials during CDI therapy had no increased risk of recurrence. However, those who received any non-CDI antimicrobials after initial CDI treatment had an absolute recurrence rate of 48% with an adjusted OR of 3.02 (95% CI: 1.65 to 5.52). This increased risk of recurrence was unaffected by the number or duration of non-CDI antimicrobial prescriptions. Subgroup analysis by antimicrobial class revealed statistically significant associations only with beta-lactams and fluoroquinolones.

Bottom line: The risk of recurrence of CDI is tripled by exposure to non-CDI antimicrobials within 30 days after CDI treatment, irrespective of the number or duration of such exposures.

Citation: Drekonja DM, Amundson WH, DeCarolis DD, Kuskowski MA, Lederle FA, Johnson JR. Antimicrobial use and risk for recurrent Clostridium difficile infection. Am J Med. 2011;124:1081.e1-1081.e7.
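
As a check on the arithmetic, the unadjusted odds ratio can be reproduced from the reported percentages. The sketch below (Python) back-calculates approximate cell counts from the 246 patients and the 57% exposure rate; the exact counts are not given in this summary, so the figures are illustrative:

```python
import math

# Approximate 2x2 table back-calculated from the summary:
# 246 patients, 57% exposed to non-CDI antimicrobials,
# recurrence in 40% of exposed vs. 16% of unexposed.
exposed = round(246 * 0.57)    # ~140
unexposed = 246 - exposed      # ~106
a = round(exposed * 0.40)      # exposed, recurrence (~56)
b = exposed - a                # exposed, no recurrence
c = round(unexposed * 0.16)    # unexposed, recurrence (~17)
d = unexposed - c              # unexposed, no recurrence

odds_ratio = (a / b) / (c / d)

# Wald 95% confidence interval on the log-odds scale
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lower = math.exp(math.log(odds_ratio) - 1.96 * se)
upper = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.1f} (95% CI: {lower:.1f} to {upper:.1f})")
# prints: OR = 3.5 (95% CI: 1.9 to 6.5), matching the reported estimate
```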

Issue: The Hospitalist - 2012(06)


ITL: Physician Reviews of HM-Relevant Research


Clinical question: Is it safe to perform esophagogastroduodenoscopy (EGD) in patients with upper gastrointestinal (GI) hemorrhage and low hematocrit?

Background: Patients admitted with GI hemorrhage are generally volume-resuscitated aggressively upon admission. After hemodynamic stability has been achieved, some would advocate delaying EGD until the hemoglobin and hematocrit are above 10 g/dL and 30%, respectively. This study attempted to determine whether EGD is safe in the setting of low hematocrit levels.

Study design: Prospective cohort.

Setting: Parkland Memorial Hospital, Dallas.

Synopsis: A total of 920 patients with upper GI bleeding were divided into two groups: a low (<30%) hematocrit group and a high (>30%) hematocrit group. The groups were compared on rates of cardiovascular events, surgery, angiography, ICU transfer, and mortality. Overall event rates were extremely low, with no differences between the two groups.

Bottom line: Transfusing to a target hematocrit of >30% should not be a prerequisite for EGD in patients who present with upper GI bleeding.

Citation: Balderas V, Bhore R, Lara LF, Spesivtseva J, Rockey DC. The hematocrit level in upper gastrointestinal hemorrhage: safety of endoscopy and outcomes. Am J Med. 2011;124:970-976.

Issue: The Hospitalist - 2012(04)


In the Literature: Physician Reviews of HM-Related Research


In This Edition

Literature At A Glance

A guide to this month’s studies

  1. IDSA/ATS guidelines for community-acquired pneumonia
  2. Improved asthma with IL-13 antibody
  3. Rivaroxaban vs. warfarin for stroke prevention in atrial fibrillation
  4. Apixaban vs. warfarin for stroke prevention in atrial fibrillation
  5. Ultrasonography more sensitive than chest radiograph for pneumothorax
  6. Current readmission risk models inadequate
  7. Optimal fluid volume for acute pancreatitis
  8. Low mortality in saddle pulmonary embolism

Triage Decisions for Patients with Severe Community-Acquired Pneumonia Should Be Based on IDSA/ATS Guidelines, Not Inflammatory Biomarkers

Clinical question: Can C-reactive protein (CRP), procalcitonin, TNF-alpha, and cytokine levels predict the need for intensive-care admission more accurately than IDSA/ATS guidelines in patients with severe community-acquired pneumonia (CAP)?

Background: Inflammatory biomarkers, such as CRP and procalcitonin, have diagnostic and prognostic utility in patients with CAP. Whether these inflammatory biomarkers can help triage patients to the appropriate level of care is unknown.

Study design: Prospective case-control study.

Setting: Two university hospitals in Spain.

Synopsis: The study included 685 patients with severe CAP who did not require mechanical ventilation or vasopressor support. Serum levels of CRP, procalcitonin, TNF-alpha, IL-1, IL-6, IL-8, and IL-10, as well as Infectious Diseases Society of America/American Thoracic Society (IDSA/ATS) minor severity criteria data, were collected on admission. After controlling for age, comorbidities, and PSI risk class, serum levels of CRP and procalcitonin were found to be significantly higher in ICU patients compared with non-ICU patients. Despite this, these inflammatory biomarkers added no predictive value to the IDSA/ATS guidelines, which recommend that patients with three or more minor criteria be considered for ICU admission.

The study did suggest that patients with severe CAP and low levels of IL-6 and procalcitonin could potentially be managed safely outside of the ICU. However, hospitalists should be wary of applying the study results due to the small number of ICU patients in this study and the lack of real-time availability of these biomarkers at most institutions.

Bottom line: More studies of inflammatory biomarkers are needed before using them to determine the level of care required for patients with CAP. Until these data are available, physicians should use the IDSA/ATS guidelines to triage patients to the appropriate level of care.

Citation: Ramirez P, Ferrer M, Torres A, et al. Inflammatory biomarkers and prediction for intensive care unit admission in severe community-acquired pneumonia. Crit Care Med. 2011;39:2211-2217.

IL-13 Antibody Lebrikizumab Shows Promise as a New Therapy for Adults with Uncontrolled Asthma

Clinical question: Can lebrikizumab, an IL-13 antibody, improve asthma control in patients with uncontrolled asthma?

Background: Asthma is a complex disease, with varied patient response to treatment. Some patients have uncontrolled asthma despite inhaled glucocorticoids. It is postulated that IL-13 may account for this variability and that some patients with uncontrolled asthma are poorly controlled due to glucocorticoid resistance mediated by IL-13. Lebrikizumab is an IgG4 monoclonal antibody that binds to and inhibits the function of IL-13. This study was performed to see if this antibody would be effective in patients with uncontrolled asthma despite inhaled glucocorticoid therapy.

Study design: Randomized double-blinded placebo-controlled trial.

Setting: Multiple centers.

Synopsis: The study randomized 219 adult asthma patients who were inadequately controlled despite inhaled corticosteroids to placebo or lebrikizumab. The primary outcome was improvement in prebronchodilator FEV1 from baseline. Secondary outcomes were exacerbations, use of rescue medications, and symptom scores. Patients were also stratified and analyzed based on surrogate markers for IL-13, which included serum IgE levels, eosinophil counts, and periostin levels.

In patients randomized to lebrikizumab, there was a statistically significant improvement in FEV1 of 5.5%, which occurred almost immediately and was sustained for the entire 32 weeks of the study. The improvement was greater in patients with high surrogate markers of IL-13 activity. Despite this improvement in FEV1, there were no differences in secondary outcomes except in patients with surrogate markers of high IL-13 levels.

Bottom line: In adults with asthma who remained uncontrolled despite inhaled corticosteroid therapy, IL-13 antagonism with lebrikizumab improved FEV1. However, the clinical relevance of these modest improvements remains unclear.

Citation: Corren J, Lemanske R, Matthews J, et al. Lebrikizumab treatment in adults with asthma. N Engl J Med. 2011;365:1088-1098.

Rivaroxaban Is Noninferior to Warfarin for Stroke Prevention in Atrial Fibrillation

Clinical question: How does rivaroxaban compare with warfarin in the prevention of stroke or systemic embolism in patients with nonvalvular atrial fibrillation?

Background: Warfarin is effective for the prevention of stroke in atrial fibrillation, but it requires close monitoring and adjustment. Rivaroxaban, an oral factor Xa inhibitor, may be safer, easier to use, and more effective than warfarin.

Study design: Multicenter, randomized, double-blind, double-dummy trial.

Setting: 1,178 sites in 45 countries.

Synopsis: The study included 14,264 patients with nonvalvular atrial fibrillation who were randomized to either fixed-dose rivaroxaban (20 mg daily or 15 mg daily for CrCl 30-49 mL/min) plus placebo or adjusted-dose warfarin (target INR 2.0 to 3.0) plus placebo. The mean CHADS2 score was 3.5. The primary endpoint (stroke or systemic embolism) occurred in 1.7% of patients per year in the rivaroxaban group and 2.2% per year in the warfarin group (hazard ratio for rivaroxaban 0.79; 95% CI: 0.66 to 0.96, P<0.001 for noninferiority). There was no difference in major or nonmajor clinically significant bleeding between the two groups (14.9% rivaroxaban vs. 14.5% warfarin, hazard ratio=1.03, 95% CI: 0.96 to 1.11, P=0.44). There were fewer intracranial hemorrhages (0.5% vs. 0.7%, P=0.02) and fatal bleeding (0.2% vs. 0.5%, P=0.003) in the rivaroxaban group.

Bottom line: In patients with atrial fibrillation, rivaroxaban was noninferior to warfarin for the prevention of stroke or systemic embolization, with a similar risk of major bleeding and a lower risk of intracranial hemorrhage or fatal bleeding.

Citation: Patel MR, Mahaffey K, Garg J, et al. Rivaroxaban versus warfarin in nonvalvular atrial fibrillation. N Engl J Med. 2011;365:883-891.

Apixaban More Effective and Safer than Warfarin for Stroke Prevention in Atrial Fibrillation

Clinical question: How does the effectiveness and safety of apixaban compare with warfarin for stroke prevention in atrial fibrillation?

Background: Until recently, warfarin has been the only available oral anticoagulant for stroke prevention in patients with atrial fibrillation (AF). The oral factor Xa inhibitors have shown similar efficacy and safety, without the monitoring requirement and drug interactions associated with warfarin.

Study design: Prospective randomized double-blind controlled trial.

Setting: More than 1,000 clinical sites in 39 countries.

Synopsis: This study randomized 18,201 patients with atrial fibrillation or flutter and at least one CHADS2 risk factor for stroke to receive oral apixaban or warfarin therapy. Exclusion criteria were prosthetic valves and severe kidney disease. The median duration of follow-up was 1.8 years, and the major endpoints were incidence of stroke, systemic embolism, bleeding complications, and mortality.

Compared with warfarin, apixaban reduced the annual incidence of stroke and systemic embolism from 1.6% to 1.3% (HR: 0.79, 95% CI: 0.66 to 0.95, P=0.01 for superiority), and reduced mortality (HR: 0.89, 95% CI: 0.80 to 0.998). For the combined endpoint of stroke, systemic embolism, MI, or death, the annual rate was reduced from 5.5% to 4.9% (HR: 0.88, 95% CI: 0.80 to 0.97). All measures of bleeding were less frequent with apixaban: major bleeding 2.1% vs. 3.1% (HR: 0.69, 95% CI: 0.60 to 0.80), and combined major and minor bleeding 4.1% vs. 6.0% (HR: 0.68, 95% CI: 0.61 to 0.75). The annual rate for the net outcome of stroke, embolism, or major bleeding was 3.2% with apixaban and 4.1% with warfarin (HR: 0.77, 95% CI: 0.69 to 0.86).

Bottom line: Compared with warfarin therapy, apixaban is more effective and safer for stroke prevention in patients with atrial fibrillation.

Citation: Granger CB, Alexander JH, McMurray JJ, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365:981-992.
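
For readers who think in absolute terms, the reported annual event rates translate into numbers needed to treat. The short sketch below derives these from the rates quoted above; the NNT figures are illustrative arithmetic, not endpoints reported by the trial:

```python
# Absolute risk reduction (ARR) and number needed to treat (NNT),
# derived from the annual event rates reported above.
def nnt_per_year(rate_control: float, rate_treatment: float) -> float:
    """NNT for one year of treatment = 1 / absolute risk reduction."""
    return 1.0 / (rate_control - rate_treatment)

# Stroke or systemic embolism: 1.6% (warfarin) vs. 1.3% (apixaban) per year
print(f"Stroke/embolism: ~{nnt_per_year(0.016, 0.013):.0f} patient-years per event prevented")

# Major bleeding: 3.1% (warfarin) vs. 2.1% (apixaban) per year
print(f"Major bleeding: ~{nnt_per_year(0.031, 0.021):.0f} patient-years per event prevented")
```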

Ultrasonography Is Useful in Diagnosis of Pneumothorax

Clinical question: Is transthoracic ultrasonography a useful tool to diagnose pneumothorax?

Background: CT is the diagnostic gold standard for pneumothorax, but it is associated with radiation exposure and requires patient transport. Chest radiograph is easy to perform but may be too insensitive for adequate diagnosis. Ultrasonography’s diagnostic performance for detecting pneumothorax needs further evaluation.

Study design: Systematic review and meta-analysis.

Setting: Studies of critically ill, trauma, and post-biopsy patients.

Synopsis: The meta-analysis of 20 eligible studies found a pooled sensitivity of ultrasound for the detection of pneumothorax of 0.88 (95% CI: 0.85 to 0.91) and specificity of 0.99 (0.98 to 0.99) compared with sensitivity of 0.52 (0.49 to 0.55) and specificity of 1.00 (1.00 to 1.00) for chest radiograph. Although the overall ROC curve was not significantly different between these modalities, the accuracy of ultrasonography was highly dependent on the skill of the operator.

Bottom line: When performed by a skilled operator, transthoracic ultrasonography is as specific as, and more sensitive than, chest radiography in diagnosing pneumothorax.

Citation: Ding W, Shen Y, Yang J, He X, Zhang M. Diagnosis of pneumothorax by radiography and ultrasonography: a meta-analysis. Chest. 2011;140:859-866.
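
The clinical meaning of the pooled sensitivity and specificity is easiest to see through likelihood ratios. The sketch below uses the pooled values reported in the meta-analysis; the pre-test probability is a hypothetical chosen for illustration:

```python
# Negative likelihood ratio: LR- = (1 - sensitivity) / specificity.
# A smaller LR- means a negative result rules out disease more strongly.
def neg_lr(sensitivity: float, specificity: float) -> float:
    return (1 - sensitivity) / specificity

def post_test_probability(pre_test: float, lr: float) -> float:
    """Convert a pre-test probability to a post-test probability via odds."""
    post_odds = (pre_test / (1 - pre_test)) * lr
    return post_odds / (1 + post_odds)

lr_us = neg_lr(0.88, 0.99)   # ultrasound: ~0.12
lr_cxr = neg_lr(0.52, 1.00)  # chest radiograph: 0.48

pre = 0.30  # hypothetical 30% pre-test probability of pneumothorax
print(f"After negative ultrasound:  {post_test_probability(pre, lr_us):.0%}")   # ~5%
print(f"After negative chest film:  {post_test_probability(pre, lr_cxr):.0%}")  # ~17%
```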

Risk Prediction for Hospital Readmission Remains Challenging

Clinical question: Can readmission risk assessment be used to identify which patients would benefit most from care-transition interventions, or to risk-adjust readmission rates for hospital comparison?

Background: Multiple models to predict hospital readmission have been described and validated. Identifying patients at high risk for readmission could allow for customized care-transition interventions, or could be used to risk-adjust readmission rates to compare publicly reported rates by hospital.

Study design: Systematic review with qualitative synthesis of results.

Setting: Thirty studies (23 from the U.S.) tested 26 unique readmission models.

Synopsis: Each model had been tested in both a derivation and validation cohort. Fourteen models (nine from the U.S.), using retrospective administrative data to compare risk-adjusted rates between hospitals, had poor discriminative capacity (c statistic range: 0.55 to 0.65). Seven models could be used to identify high-risk patients early in the hospitalization (c statistic range: 0.56 to 0.72) and five could be used to identify high-risk patients at discharge (c statistic range: 0.68 to 0.83), but these also had poor to moderate discriminative capacity. Multiple variables were considered in each of the models; most incorporated medical comorbidities and prior use of healthcare services.

Bottom line: Current readmission risk prediction models do not perform adequately for comparative or clinical purposes.

Citation: Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306:1688-1698.
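
For reference, the c statistic quoted throughout this review is the probability that a model assigns a higher predicted risk to a randomly chosen readmitted patient than to a randomly chosen patient who was not readmitted (0.5 is chance; 1.0 is perfect discrimination). A minimal sketch, using hypothetical predicted risks:

```python
from itertools import product

def c_statistic(risks_events, risks_nonevents):
    """Fraction of event/non-event pairs ranked correctly by predicted risk,
    counting ties as half; equivalent to the area under the ROC curve."""
    pairs = list(product(risks_events, risks_nonevents))
    score = sum(1.0 if e > n else 0.5 if e == n else 0.0 for e, n in pairs)
    return score / len(pairs)

# Hypothetical predicted readmission risks, for illustration only
readmitted = [0.45, 0.30, 0.22, 0.18]
not_readmitted = [0.40, 0.25, 0.15, 0.10, 0.08]

print(f"c statistic = {c_statistic(readmitted, not_readmitted):.2f}")  # 0.75
```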

Intravenous Fluids for Acute Pancreatitis: More May Be Less

Clinical question: What is the optimal volume of fluid administration for treatment of acute pancreatitis?

Background: Current guidelines for management of acute pancreatitis emphasize vigorous administration of intravenous fluids to reduce the risk of pancreatic necrosis and organ failure. This recommendation is based upon animal studies, and has not been subjected to clinical evaluation in humans.

Study design: Prospective observational cohort.

Setting: University-affiliated tertiary-care public hospital in Spain.

Synopsis: This study enrolled 247 patients admitted with acute pancreatitis to determine the association between the volume of fluid administered during the first 24 hours and the development of persistent organ failure, pancreatic fluid collection or necrosis, and mortality. The volume and rate of fluid administered were determined by the treating physician. Patients were classified into three groups: those receiving a volume <3.1 L, 3.1 to 4.1 L, or >4.1 L.

After multivariate adjustment, those receiving <3.1 L had no increased risk of necrosis or any other adverse outcome, compared with those who received the middle range of fluid volume.

Patients receiving >4.1 L had a higher risk of persistent organ failure (OR: 7.7, 95% CI: 1.5 to 38.7), particularly renal and respiratory insufficiency, and of fluid collection development (OR: 1.9, 95% CI: 1.0 to 3.7), independent of disease severity. Pancreatic necrosis and mortality were similar in the three groups.

Bottom line: Administration of large-volume intravenous fluids (>4.1 L) in the first 24 hours was associated with worse outcomes, although residual confounding cannot be excluded in this nonrandomized study.

Citation: de-Madaria E, Soler-Sala G, Sanchez-Paya J, et al. Influence of fluid therapy on the prognosis of acute pancreatitis: a prospective cohort study. Am J Gastroenterol. 2011;106:1843-1850.

Clinical Outcomes in Saddle Pulmonary Embolism

Clinical question: What are the treatments used and outcomes associated with saddle pulmonary embolism?

Background: Saddle pulmonary embolism is a risk for right ventricular dysfunction and sudden hemodynamic collapse. There are limited data on the clinical presentation and outcomes in these patients.

Study design: Retrospective case review.

Setting: Single academic medical center.

Synopsis: In this retrospective review of 680 patients diagnosed with pulmonary embolism on CT at a single academic medical center from 2004 to 2009, 5.4% (37 patients) had a saddle pulmonary embolism.

Most patients with saddle pulmonary embolism were hemodynamically stable and responded to standard therapy with unfractionated heparin. The mean length of stay was nine days, 46% received an inferior vena cava filter, 41% were treated in an ICU, and 5.4% (two patients) died in the hospital. Thrombolytics were used in only 11% of patients, most of whom had sustained hypotension and/or were mechanically ventilated.

Bottom line: Most patients with saddle pulmonary embolus in this single-institution study did not receive thrombolytics and had low overall mortality.

Citation: Sardi A, Gluskin J, Guttentag A, Kotler MN, Braitman LE, Lippmann M. Saddle pulmonary embolism: is it as bad as it looks? A community hospital experience. Crit Care Med. 2011;39:2413-2418.

Clinical Shorts

Pancreaticojejunostomy Is Superior To Endoscopy As Treatment For Advanced Chronic Pancreatitis

In this small prospective trial of 31 patients with advanced chronic calcific pancreatitis, patients who initially underwent a pancreaticojejunostomy had less pain and required fewer re-interventions than patients who had endoscopic treatment during a median follow-up of six years.

Citation: Cahen DL, Gouma DJ, et al. Long-term outcomes of endoscopic vs surgical drainage of the pancreatic duct in patients with chronic pancreatitis. Gastroenterology. 2011;141:1690-1695.

Routine surveillance for patients with Barrett’s esophagus called into question

In this nationwide cohort of patients in Denmark, the annual risk of developing adenocarcinoma in patients with Barrett’s esophagus was 0.12%, which is markedly lower than previously reported estimates of 0.5%, upon which the guidelines for screening are based.

Citation: Hvid-Jensen F, Pedersen L, Funch-Jensen P, et al. Incidence of adenocarcinoma among patients with Barrett’s esophagus. N Engl J Med. 2011;365:1375-1383.

Consider hepatitis E testing in suspected drug-induced liver injury

A serologic survey of U.S. patients with acute liver injury attributed to drugs found that 16% had serologic evidence of hepatitis E exposure, with acute hepatitis E infection accounting for 3% of cases in the population studied.

Citation: Davern TJ, Chalasani N, Fontana RJ, et al. Acute hepatitis E infection accounts for some cases of suspected drug-induced liver injury. Gastroenterology. 2011;141:1665-1672.

Pressure redistribution mattresses reduce pressure ulcers

A cost-effectiveness analysis supported using pressure redistribution mattresses to prevent pressure ulcers in long-term care residents; additional study is needed to determine the cost-effectiveness of emollients and cleansers.

Citation: Pham B, Stern A, Chen W, et al. Preventing pressure ulcers in long-term care. Arch Intern Med. 2011;171:1839-1847.

Screening chest x-rays still do not prevent death from lung cancer

A randomized controlled trial of annual screening chest radiography enrolling nearly 155,000 patients, 52% of whom were current or former smokers, yielded no reduction in mortality from lung cancer.

Citation: Oken MM, Hocking WG, Kvale PA, et al. Screening by chest radiograph and lung cancer mortality. JAMA. 2011;306:1865-1873.

Abnormal QT-interval duration is associated with increased mortality

A cross-sectional study found that shortened or prolonged QT-interval duration, even if still within the reference range, is associated with increased mortality risk.

Citation: Zhang Y, Post WS, Dalal D, Blasco-Colmenares E, Tomaselli GF, Guallar E. QT-interval duration and mortality rate. Arch Intern Med. 2011;171:1727-1733.

Issue: The Hospitalist - 2012(03)

In This Edition

Literature At A Glance

A guide to this month’s studies

  1. IDSA/ATS guidelines for community-acquired pneumonia
  2. Improved asthma with IL-13 antibody
  3. Rivaroxaban vs. warfarin for stroke prevention in atrial fibrillation
  4. Apixaban vs. warfarin for stroke prevention in atrial fibrillation
  5. Ultrasonography more sensitive than chest radiograph for pneumothorax
  6. Current readmission risk models inadequate
  7. Optimal fluid volume for acute pancreatitis
  8. Low mortality in saddle pulmonary embolism

Triage Decisions for Patients with Severe Community-Acquired Pneumonia Should Be Based on IDSA/ATS Guidelines, Not Inflammatory Biomarkers

Clinical question: Can C-reactive protein levels (CRP), procalcitonin, TNF-alpha, and cytokine levels predict the need for intensive-care admission more accurately than IDSA/ATS guidelines in patients with severe community-acquired pneumonia (CAP)?

Background: Inflammatory biomarkers, such as CRP and procalcitonin, have diagnostic and prognostic utility in patients with CAP. Whether these inflammatory biomarkers can help triage patients to the appropriate level of care is unknown.

Study design: Prospective case control study.

Setting: Two university hospitals in Spain.

Synopsis: The study included 685 patients with severe CAP who did not require mechanical ventilation or vasopressor support. Serum levels of CRP, procalcitonin, TNF-alpha, IL-1, IL-6, IL-8, and IL-10, as well as Infectious Diseases Society of American/American Thoracic Society (IDSA/ATS) minor severity criteria data, were collected on admission. After controlling for age, comorbidities, and PSI risk class, serum levels of CRP and procalcitonin were found to be significantly higher in ICU patients compared with non-ICU patients. Despite this, these inflammatory biomarkers did not augment the IDSA/ATS guidelines, suggesting that patients who have three or more minor criteria be considered for ICU admission.

The study did suggest that patients with severe CAP and low levels of IL-6 and procalcitonin could potentially be managed safely outside of the ICU. However, hospitalists should be wary of applying the study results due to the small number of ICU patients in this study and the lack of real-time availability of these biomarkers at most institutions.

Bottom line: More studies of inflammatory biomarkers are needed before using them to determine the level of care required for patients with CAP. Until these data are available, physicians should use the IDSA/ATS guidelines to triage patients to the appropriate level of care.

Citation: Ramirez P, Ferrer M, Torres A, et al. Inflammatory biomarkers and prediction for intensive care unit admission pneumonia. Crit Care Med. 2011;39:2211-2217.

IL-13 Antibody Lebrikizumab Shows Promise as a New Therapy for Adults with Uncontrolled Asthma

Clinical question: Can lebrikizumab, an IL-13 antibody, improve asthma control in patients with uncontrolled asthma?

Background: Asthma is a complex disease, with varied patient response to treatment. Some patients have uncontrolled asthma despite inhaled glucocorticoids. It is postulated that IL-13 may account for this variability and that some patients with uncontrolled asthma are poorly controlled due to glucocorticoid resistance mediated by IL-13. Lebrikizumab is an IgG4 monoclonal antibody that binds to and inhibits the function of IL-13. This study was performed to see if this antibody would be effective in patients with uncontrolled asthma despite inhaled glucocorticoid therapy.

Study design: Randomized double-blinded placebo-controlled trial.

Setting: Multiple centers.

Synopsis: The study randomized 219 adult asthma patients who were inadequately controlled despite inhaled corticosteroids to a placebo or lebrikizumab. The primary outcome was improvement in prebronchodilator FEV1 from baseline. Secondary outcomes were exacerbations, use of rescue medications, and symptom scores. Patients were also stratified and analyzed based on surrogate markers for IL-13, which included serum IGE levels, eosinophil counts, and periostin levels.

 

 

In patients who were randomized to the lebrikizumab treatment, there was a statistically significant improvement in FEV1 of 5.5%, which occurred almost immediately and was sustained for the entire 32 weeks of the study. The improvement was more significant in patients who had high surrogate markers for IL-13. Despite this improvement in FEV1, there were no differences in secondary outcomes except in patients who had surrogate markers for high IL-13 levels.

Bottom line: In adults with asthma who remained uncontrolled despite inhaled corticosteroid therapy, IL-13 antagonism with lebrikizumab improved FEV1. However, the clinical relevance of these modest improvements remains unclear.

Citation: Corren J, Lemanske R, Matthews J, et al. Lebrikizumab treatment in adults with asthma. N Engl J Med. 2011;365:1088-1098.

Rivaroxaban Is Noninferior to Warfarin for Stroke Prevention in Atrial Fibrillation

Clinical question: How does rivaroxaban compare with warfarin in the prevention of stroke or systemic embolism in patients with nonvalvular atrial fibrillation?

Background: Warfarin is effective for the prevention of stroke in atrial fibrillation, but it requires close monitoring and adjustment. Rivaroxaban, an oral Xa inhibitor, may be safer, easier, and more effective than warfarin.

Study design: Multicenter, randomized, double-blind, double-dummy trial.

Setting: 1,178 sites in 45 countries.

Synopsis: The study included 14,264 patients with nonvalvular atrial fibrillation who were randomized to either fixed-dose rivaroxaban (20 mg daily or 15 mg daily for CrCl 30-49 mL/min) plus placebo or adjusted-dose warfarin (target INR 2.0 to 3.0) plus placebo. The mean CHADS2 score was 3.5. The primary endpoint (stroke or systemic embolism) occurred in 1.7% of patients per year in the rivaroxaban group and 2.2% per year in the warfarin group (hazard ratio for rivaroxaban 0.79; 95% CI: 0.66 to 0.96, P<0.001 for noninferiority). There was no difference in major or nonmajor clinically significant bleeding between the two groups (14.9% rivaroxaban vs. 14.5% warfarin, hazard ratio=1.03, 95% CI: 0.96 to 1.11, P=0.44). There were fewer intracranial hemorrhages (0.5% vs. 0.7%, P=0.02) and fatal bleeding (0.2% vs. 0.5%, P=0.003) in the rivaroxaban group.

Bottom line: In patients with atrial fibrillation, rivaroxaban was noninferior to warfarin for the prevention of stroke or systemic embolization, with a similar risk of major bleeding and a lower risk of intracranial hemorrhage or fatal bleeding.

Citation: Patel MR, Mahaffey K, Garg J, et al. Rivaroxaban versus warfarin in nonvalvular atrial fibrillation. N Engl J Med. 2011;365:883-891.

Apixaban More Effective and Safer than Warfarin for Stroke Prevention in Atrial Fibrillation

Clinical question: How does the effectiveness and safety of apixaban compare with warfarin for stroke prevention in atrial fibrillation?

Background: Until recently, warfarin has been the only available oral anticoagulant for stroke prevention in patients with atrial fibrillation (AF). The oral factor Xa inhibitors have shown similar efficacy and safety, without the monitoring requirement and drug interactions associated with warfarin.

Study design: Prospective randomized double-blind controlled trial.

Setting: More than 1,000 clinical sites in 39 countries.

Synopsis: This study randomized 18,201 patients with atrial fibrillation or flutter and at least one CHADS2 risk factor for stroke to receive oral apixaban or warfarin therapy. Exclusion criteria were prosthetic valves and severe kidney disease. The median duration of follow-up was 1.8 years, and the major endpoints were incidence of stroke, systemic embolism, bleeding complications, and mortality.

Compared with warfarin, apixaban reduced the annual incidence of stroke and systemic embolism from 1.6% to 1.3% (HR 0.79, 95%: CI 0.66 to 0.95, P=0.01 for superiority), and reduced mortality (HR: 0.89, 95% CI: 0.80 to 0.998). For the combined endpoint of stroke, systemic embolism, MI, or death, the annual rate was reduced from 5.5% to 4.9% (HR: 0.88, 95% CI: 0.80 to 0.97). All measures of bleeding were less frequent with apixaban: major 2.1% vs. 3.1% (HR: 0.69, 95% CI: 0.60 to 0.80), and combined major and minor bleeding 4.1% vs. 6.0% (HR: 0.68, 95% CI: 0.61 to 0.75). The annual rate for the net outcome of stroke, embolism, or major bleeding was 3.2% with apixaban and 4.1% with warfarin (HR: 0.77, 95% CI: 0.69 to 0.86).

 

 

Bottom line: Compared with warfarin therapy, apixaban is more effective and safer for stroke prevention in patients with atrial fibrillation.

Citation: Granger CB, Alexander JH, McMurray JJ, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365:981-992.

Ultrasonography Is Useful in Diagnosis of Pneumothorax

Clinical question: Is transthoracic ultrasonography a useful tool to diagnose pneumothorax?

Background: CT is the diagnostic gold standard for pneumothorax, but it is associated with radiation exposure and requires patient transport. Chest radiograph is easy to perform but may be too insensitive for adequate diagnosis. Ultrasonography’s diagnostic performance for detecting pneumothorax needs further evaluation.

Study design: Systematic review and meta-analysis.

Setting: Critically ill, trauma, or post-biopsy patients were identified in each of the studies.

Synopsis: The meta-analysis of 20 eligible studies found a pooled sensitivity of ultrasound for the detection of pneumothorax of 0.88 (95% CI: 0.85 to 0.91) and specificity of 0.99 (0.98 to 0.99) compared with sensitivity of 0.52 (0.49 to 0.55) and specificity of 1.00 (1.00 to 1.00) for chest radiograph. Although the overall ROC curve was not significantly different between these modalities, the accuracy of ultrasonography was highly dependent on the skill of the operator.

Bottom line: When performed by a skilled operator, transthoracic ultrasonography is as specific, and more sensitive, than chest radiograph in diagnosing pneumothorax.

Citation: Ding W, Shen Y, Yang J, He X, Zhang M. Diagnosis of pneumothorax by radiography and ultrasonography: a meta-analysis. Chest. 2011;140:859-866.

Risk Prediction for Hospital Readmission Remains Challenging

Clinical question: Can readmission risk assessment be used to identify which patients would benefit most from care-transition interventions, or to risk-adjust readmission rates for hospital comparison?

Background: Multiple models to predict hospital readmission have been described and validated. Identifying patients at high risk for readmission could allow for customized care-transition interventions, or could be used to risk-adjust readmission rates to compare publicly reported rates by hospital.

Study design: Systematic review with qualitative synthesis of results.

Setting: Thirty studies (23 from the U.S.) tested 26 unique readmission models.

Synopsis: Each model had been tested in both a derivation and validation cohort. Fourteen models (nine from the U.S.), using retrospective administrative data to compare risk-adjusted rates between hospitals, had poor discriminative capacity (c statistic range: 0.55 to 0.65). Seven models could be used to identify high-risk patients early in the hospitalization (c statistic range: 0.56 to 0.72) and five could be used to identify high-risk patients at discharge (c statistic range: 0.68 to 0.83), but these also had poor to moderate discriminative capacity. Multiple variables were considered in each of the models; most incorporated medical comorbidities and prior use of healthcare services.

Bottom line: Current readmission risk prediction models do not perform adequately for comparative or clinical purposes.

Citation: Kansagara D, Englander H, Salanitro A, et. al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306:1688-1698.

Intravenous Fluids for Acute Pancreatitis: More May Be Less

Clinical question: What is the optimal volume of fluid administration for treatment of acute pancreatitis?

Background: Current guidelines for management of acute pancreatitis emphasize vigorous administration of intravenous fluids to reduce the risk of pancreatic necrosis and organ failure. This recommendation is based upon animal studies, and has not been subjected to clinical evaluation in humans.

Study design: Prospective observational cohort.

Setting: University-affiliated tertiary-care public hospital in Spain.

Synopsis: This study enrolled 247 patients admitted with acute pancreatitis to determine the association between the volume of fluid administered during the first 24 hours and the development of persistent organ failure, pancreatic fluid collection or necrosis, and mortality. The volume and rate of fluid administered were determined by the treating physician. Patients were classified into three groups: those receiving a volume <3.1 L, 3.1 to 4.1 L, or >4.1 L.

 

 

After multivariate adjustment, those receiving <3.1 L had no increased risk of necrosis or any other adverse outcome, compared with those who received the middle range of fluid volume.

Patients receiving >4.1 L had a higher risk of persistent organ failure (OR: 7.7, 95% CI: 1.5 to 38.7), particularly renal and respiratory insufficiency, and fluid collection development (OR: 1.9, 95% CI: 1 to 3.7) independent of disease severity. Pancreatic necrosis and mortality were similar in the three groups.

Bottom line: Administration of large-volume intravenous fluids (>4.1 L) in

the first 24 hours was associated with worse outcomes, although residual confounding cannot be excluded in this nonrandomized study.

Citation: de-Madaria E, Soler-Sala G, Sanchez-Paya J, et al. Influence of fluid therapy on the prognosis of acute pancreatitis: a prospective cohort study. Am J Gastroenterol. 2011;106:1843-1850.

Clinical Outcomes in Saddle Pulmonary Embolism

Clinical question: What are the treatments used and outcomes associated with saddle pulmonary embolism?

Background: Saddle pulmonary embolism is a risk for right ventricular dysfunction and sudden hemodynamic collapse. There are limited data on the clinical presentation and outcomes in these patients.

Study design: Retrospective case review.

Setting: Single academic medical center.

Synopsis: In this retrospective review of 680 patients diagnosed with pulmonary embolism on CT at a single academic medical center from 2004 to 2009, 5.4% (37 patients) had a saddle pulmonary embolism.

Most patients with saddle pulmonary embolism were hemodynamically stable and responded to standard therapy with unfractionated heparin. The mean length of stay was nine days, 46% received an inferior vena cava filter, 41% were treated in an ICU, and 5.4% (two patients) died in the hospital. Thrombolytics were used in only 11% of patients, most of which had sustained hypotension and/or were mechanically ventilated.

Bottom line: Most patients with saddle pulmonary embolus in this single institution study did not receive thrombolytics and had overall low mortality.

Citation: Sardi A, Gluskin J, Guttentag A, Kotler MN, Braitman LE, Lippmann M. Saddle pulmonary embolism: is it as bad as it looks? A community hospital experience. Crit Care Med. 2011;39:2413-2418.

 

Clinical Shorts

Pancreaticojejunostomy Is Superior To Endoscopy As Treatment For Advanced Chronic Pancreatitis

In this small prospective trial of 31 patients with advanced chronic calcific pancreatitis, patients who initially underwent a pancreaticojejunostomy had less pain and required fewer re-interventions than patients who had endoscopic treatment during a median follow-up of six years.

Citation: Cahen DL, Gouma DJ, Bruna et al. Long-term outcomes of endoscopic vs surgical drainage of the pancreatic duct in patient with chronic pancreatitis. Gastroenterology. 2011;141:1690-1695.

Routine surveillance for patients with Barrett’s esophagus called into question

In this nationwide cohort of patients in Denmark, the annual risk of developing adenocarcinoma in patients with Barrett’s esophagus was 0.12%, which is markedly lower than previously reported estimates of 0.5%, uapon which the guidelines for screening are based.

Citation: Hvid-Jensen F, Pedersen L, Funch-Jensen P, et al. Incidence of adenocarcinoma among patients with Barrett’s esophagus. N Engl J Med. 2012;365:1375-1383.

Consider hepatitis E testing in suspected drug-induced liver injury

A serologic survey of patients in the U.S. with acute liver injury attributed to drugs found 16% had evidence of hepatitis E, representing 3% of the acute disease in the population studied.

Citation: Davern TJ, Chalasani N, Fontana RJ, et al. Acute hepatitis E infection accounts for some cases of suspected drug-induced liver injury. Gastroenterology. 2011;141:1665-1672.

Pressure redistribution mattresses reduce pressure ulcers

A cost-effectiveness analysis supported using pressure redistribution mattresses to prevent pressure ulcers in long-term care residents; additional study is needed to determine the cost-effectiveness of emollients and cleansers.

Citation: Pham B, Stern A, Chen W, et. al. Preventing pressure ulcers in long-term care. Arch Intern Med. 2011;171:1839-1847.

Screening chest x-rays still do not prevent death from lung cancer

A randomized controlled trial of annual screening chest radiography enrolling nearly 155,000 patients, 52% of whom were current or former smokers, yielded no reduction in mortality from lung cancer.

Citation: Oken MM, Hocking WG, Kvale PA, et al. Screening by chest radiograph and lung cancer mortality. JAMA. 2011;306:1865-1873.

Abnormal QT-interval duration is associated with increased mortality

A cross-sectional study found that shortened or prolonged QT-interval duration, even if still within the reference range, is associated with increased mortality risk.

Citation: Zhang Y, Post WS, Dalal D, Blasco-Colmenares E, Tomaselli GF, Guallar E. QT-interval duration and mortality rate. Arch Intern Med. 2011;171:1727-1733.

In This Edition

Literature At A Glance

A guide to this month’s studies

  1. IDSA/ATS guidelines for community-acquired pneumonia
  2. Improved asthma with IL-13 antibody
  3. Rivaroxaban vs. warfarin for stroke prevention in atrial fibrillation
  4. Apixaban vs. warfarin for stroke prevention in atrial fibrillation
  5. Ultrasonography more sensitive than chest radiograph for pneumothorax
  6. Current readmission risk models inadequate
  7. Optimal fluid volume for acute pancreatitis
  8. Low mortality in saddle pulmonary embolism

Triage Decisions for Patients with Severe Community-Acquired Pneumonia Should Be Based on IDSA/ATS Guidelines, Not Inflammatory Biomarkers

Clinical question: Can C-reactive protein levels (CRP), procalcitonin, TNF-alpha, and cytokine levels predict the need for intensive-care admission more accurately than IDSA/ATS guidelines in patients with severe community-acquired pneumonia (CAP)?

Background: Inflammatory biomarkers, such as CRP and procalcitonin, have diagnostic and prognostic utility in patients with CAP. Whether these inflammatory biomarkers can help triage patients to the appropriate level of care is unknown.

Study design: Prospective case control study.

Setting: Two university hospitals in Spain.

Synopsis: The study included 685 patients with severe CAP who did not require mechanical ventilation or vasopressor support. Serum levels of CRP, procalcitonin, TNF-alpha, IL-1, IL-6, IL-8, and IL-10, as well as Infectious Diseases Society of American/American Thoracic Society (IDSA/ATS) minor severity criteria data, were collected on admission. After controlling for age, comorbidities, and PSI risk class, serum levels of CRP and procalcitonin were found to be significantly higher in ICU patients compared with non-ICU patients. Despite this, these inflammatory biomarkers did not augment the IDSA/ATS guidelines, suggesting that patients who have three or more minor criteria be considered for ICU admission.

The study did suggest that patients with severe CAP and low levels of IL-6 and procalcitonin could potentially be managed safely outside of the ICU. However, hospitalists should be wary of applying the study results due to the small number of ICU patients in this study and the lack of real-time availability of these biomarkers at most institutions.

Bottom line: More studies of inflammatory biomarkers are needed before using them to determine the level of care required for patients with CAP. Until these data are available, physicians should use the IDSA/ATS guidelines to triage patients to the appropriate level of care.

Citation: Ramirez P, Ferrer M, Torres A, et al. Inflammatory biomarkers and prediction for intensive care unit admission pneumonia. Crit Care Med. 2011;39:2211-2217.

IL-13 Antibody Lebrikizumab Shows Promise as a New Therapy for Adults with Uncontrolled Asthma

Clinical question: Can lebrikizumab, an IL-13 antibody, improve asthma control in patients with uncontrolled asthma?

Background: Asthma is a complex disease, with varied patient response to treatment. Some patients have uncontrolled asthma despite inhaled glucocorticoids. It is postulated that IL-13 may account for this variability and that some patients with uncontrolled asthma are poorly controlled due to glucocorticoid resistance mediated by IL-13. Lebrikizumab is an IgG4 monoclonal antibody that binds to and inhibits the function of IL-13. This study was performed to see if this antibody would be effective in patients with uncontrolled asthma despite inhaled glucocorticoid therapy.

Study design: Randomized double-blinded placebo-controlled trial.

Setting: Multiple centers.

Synopsis: The study randomized 219 adult asthma patients who were inadequately controlled despite inhaled corticosteroids to a placebo or lebrikizumab. The primary outcome was improvement in prebronchodilator FEV1 from baseline. Secondary outcomes were exacerbations, use of rescue medications, and symptom scores. Patients were also stratified and analyzed based on surrogate markers for IL-13, which included serum IGE levels, eosinophil counts, and periostin levels.

 

 

In patients who were randomized to the lebrikizumab treatment, there was a statistically significant improvement in FEV1 of 5.5%, which occurred almost immediately and was sustained for the entire 32 weeks of the study. The improvement was more significant in patients who had high surrogate markers for IL-13. Despite this improvement in FEV1, there were no differences in secondary outcomes except in patients who had surrogate markers for high IL-13 levels.

Bottom line: In adults with asthma who remained uncontrolled despite inhaled corticosteroid therapy, IL-13 antagonism with lebrikizumab improved FEV1. However, the clinical relevance of these modest improvements remains unclear.

Citation: Corren J, Lemanske R, Matthews J, et al. Lebrikizumab treatment in adults with asthma. N Engl J Med. 2011;365:1088-1098.

Rivaroxaban Is Noninferior to Warfarin for Stroke Prevention in Atrial Fibrillation

Clinical question: How does rivaroxaban compare with warfarin in the prevention of stroke or systemic embolism in patients with nonvalvular atrial fibrillation?

Background: Warfarin is effective for the prevention of stroke in atrial fibrillation, but it requires close monitoring and adjustment. Rivaroxaban, an oral Xa inhibitor, may be safer, easier, and more effective than warfarin.

Study design: Multicenter, randomized, double-blind, double-dummy trial.

Setting: 1,178 sites in 45 countries.

Synopsis: The study included 14,264 patients with nonvalvular atrial fibrillation who were randomized to either fixed-dose rivaroxaban (20 mg daily or 15 mg daily for CrCl 30-49 mL/min) plus placebo or adjusted-dose warfarin (target INR 2.0 to 3.0) plus placebo. The mean CHADS2 score was 3.5. The primary endpoint (stroke or systemic embolism) occurred in 1.7% of patients per year in the rivaroxaban group and 2.2% per year in the warfarin group (hazard ratio for rivaroxaban 0.79; 95% CI: 0.66 to 0.96, P<0.001 for noninferiority). There was no difference in major or nonmajor clinically significant bleeding between the two groups (14.9% rivaroxaban vs. 14.5% warfarin, hazard ratio=1.03, 95% CI: 0.96 to 1.11, P=0.44). There were fewer intracranial hemorrhages (0.5% vs. 0.7%, P=0.02) and fatal bleeding (0.2% vs. 0.5%, P=0.003) in the rivaroxaban group.

Bottom line: In patients with atrial fibrillation, rivaroxaban was noninferior to warfarin for the prevention of stroke or systemic embolization, with a similar risk of major bleeding and a lower risk of intracranial hemorrhage or fatal bleeding.

Citation: Patel MR, Mahaffey K, Garg J, et al. Rivaroxaban versus warfarin in nonvalvular atrial fibrillation. N Engl J Med. 2011;365:883-891.

Apixaban More Effective and Safer than Warfarin for Stroke Prevention in Atrial Fibrillation

Clinical question: How does the effectiveness and safety of apixaban compare with warfarin for stroke prevention in atrial fibrillation?

Background: Until recently, warfarin has been the only available oral anticoagulant for stroke prevention in patients with atrial fibrillation (AF). The oral factor Xa inhibitors have shown similar efficacy and safety, without the monitoring requirement and drug interactions associated with warfarin.

Study design: Prospective randomized double-blind controlled trial.

Setting: More than 1,000 clinical sites in 39 countries.

Synopsis: This study randomized 18,201 patients with atrial fibrillation or flutter and at least one CHADS2 risk factor for stroke to receive oral apixaban or warfarin therapy. Exclusion criteria were prosthetic valves and severe kidney disease. The median duration of follow-up was 1.8 years, and the major endpoints were incidence of stroke, systemic embolism, bleeding complications, and mortality.

Compared with warfarin, apixaban reduced the annual incidence of stroke and systemic embolism from 1.6% to 1.3% (HR 0.79, 95%: CI 0.66 to 0.95, P=0.01 for superiority), and reduced mortality (HR: 0.89, 95% CI: 0.80 to 0.998). For the combined endpoint of stroke, systemic embolism, MI, or death, the annual rate was reduced from 5.5% to 4.9% (HR: 0.88, 95% CI: 0.80 to 0.97). All measures of bleeding were less frequent with apixaban: major 2.1% vs. 3.1% (HR: 0.69, 95% CI: 0.60 to 0.80), and combined major and minor bleeding 4.1% vs. 6.0% (HR: 0.68, 95% CI: 0.61 to 0.75). The annual rate for the net outcome of stroke, embolism, or major bleeding was 3.2% with apixaban and 4.1% with warfarin (HR: 0.77, 95% CI: 0.69 to 0.86).

 

 

Bottom line: Compared with warfarin therapy, apixaban is more effective and safer for stroke prevention in patients with atrial fibrillation.

Citation: Granger CB, Alexander JH, McMurray JJ, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365:981-992.

Ultrasonography Is Useful in Diagnosis of Pneumothorax

Clinical question: Is transthoracic ultrasonography a useful tool to diagnose pneumothorax?

Background: CT is the diagnostic gold standard for pneumothorax, but it is associated with radiation exposure and requires patient transport. Chest radiograph is easy to perform but may be too insensitive for adequate diagnosis. Ultrasonography’s diagnostic performance for detecting pneumothorax needs further evaluation.

Study design: Systematic review and meta-analysis.

Setting: Studies of critically ill, trauma, or post-biopsy patients.

Synopsis: The meta-analysis of 20 eligible studies found a pooled sensitivity of ultrasound for the detection of pneumothorax of 0.88 (95% CI: 0.85 to 0.91) and specificity of 0.99 (0.98 to 0.99) compared with sensitivity of 0.52 (0.49 to 0.55) and specificity of 1.00 (1.00 to 1.00) for chest radiograph. Although the overall ROC curve was not significantly different between these modalities, the accuracy of ultrasonography was highly dependent on the skill of the operator.

Bottom line: When performed by a skilled operator, transthoracic ultrasonography is as specific as, and more sensitive than, chest radiography in diagnosing pneumothorax.

Citation: Ding W, Shen Y, Yang J, He X, Zhang M. Diagnosis of pneumothorax by radiography and ultrasonography: a meta-analysis. Chest. 2011;140:859-866.

Risk Prediction for Hospital Readmission Remains Challenging

Clinical question: Can readmission risk assessment be used to identify which patients would benefit most from care-transition interventions, or to risk-adjust readmission rates for hospital comparison?

Background: Multiple models to predict hospital readmission have been described and validated. Identifying patients at high risk for readmission could allow for customized care-transition interventions, or could be used to risk-adjust readmission rates to compare publicly reported rates by hospital.

Study design: Systematic review with qualitative synthesis of results.

Setting: Thirty studies (23 from the U.S.) tested 26 unique readmission models.

Synopsis: Each model had been tested in both a derivation and validation cohort. Fourteen models (nine from the U.S.), using retrospective administrative data to compare risk-adjusted rates between hospitals, had poor discriminative capacity (c statistic range: 0.55 to 0.65). Seven models could be used to identify high-risk patients early in the hospitalization (c statistic range: 0.56 to 0.72) and five could be used to identify high-risk patients at discharge (c statistic range: 0.68 to 0.83), but these also had poor to moderate discriminative capacity. Multiple variables were considered in each of the models; most incorporated medical comorbidities and prior use of healthcare services.

Bottom line: Current readmission risk prediction models do not perform adequately for comparative or clinical purposes.

Citation: Kansagara D, Englander H, Salanitro A, et al. Risk prediction models for hospital readmission: a systematic review. JAMA. 2011;306:1688-1698.

Intravenous Fluids for Acute Pancreatitis: More May Be Less

Clinical question: What is the optimal volume of fluid administration for treatment of acute pancreatitis?

Background: Current guidelines for management of acute pancreatitis emphasize vigorous administration of intravenous fluids to reduce the risk of pancreatic necrosis and organ failure. This recommendation is based on animal studies and has not been subjected to rigorous clinical evaluation in humans.

Study design: Prospective observational cohort.

Setting: University-affiliated tertiary-care public hospital in Spain.

Synopsis: This study enrolled 247 patients admitted with acute pancreatitis to determine the association between the volume of fluid administered during the first 24 hours and the development of persistent organ failure, pancreatic fluid collection or necrosis, and mortality. The volume and rate of fluid administered were determined by the treating physician. Patients were classified into three groups: those receiving a volume <3.1 L, 3.1 to 4.1 L, or >4.1 L.

After multivariate adjustment, those receiving <3.1 L had no increased risk of necrosis or any other adverse outcome compared with those who received the middle range of fluid volume. Patients receiving >4.1 L had a higher risk of persistent organ failure (OR: 7.7, 95% CI: 1.5 to 38.7), particularly renal and respiratory insufficiency, and of developing fluid collections (OR: 1.9, 95% CI: 1.0 to 3.7), independent of disease severity. Pancreatic necrosis and mortality were similar across the three groups.

Bottom line: Administration of large-volume intravenous fluids (>4.1 L) in the first 24 hours was associated with worse outcomes, although residual confounding cannot be excluded in this nonrandomized study.

Citation: de-Madaria E, Soler-Sala G, Sanchez-Paya J, et al. Influence of fluid therapy on the prognosis of acute pancreatitis: a prospective cohort study. Am J Gastroenterol. 2011;106:1843-1850.

Clinical Outcomes in Saddle Pulmonary Embolism

Clinical question: What are the treatments used and outcomes associated with saddle pulmonary embolism?

Background: Saddle pulmonary embolism is a risk for right ventricular dysfunction and sudden hemodynamic collapse. There are limited data on the clinical presentation and outcomes in these patients.

Study design: Retrospective case review.

Setting: Single academic medical center.

Synopsis: In this retrospective review of 680 patients diagnosed with pulmonary embolism on CT at a single academic medical center from 2004 to 2009, 5.4% (37 patients) had a saddle pulmonary embolism.

Most patients with saddle pulmonary embolism were hemodynamically stable and responded to standard therapy with unfractionated heparin. The mean length of stay was nine days; 46% received an inferior vena cava filter, 41% were treated in an ICU, and 5.4% (two patients) died in the hospital. Thrombolytics were used in only 11% of patients, most of whom had sustained hypotension and/or were mechanically ventilated.

Bottom line: Most patients with saddle pulmonary embolism in this single-institution study did not receive thrombolytics, and overall mortality was low.

Citation: Sardi A, Gluskin J, Guttentag A, Kotler MN, Braitman LE, Lippmann M. Saddle pulmonary embolism: is it as bad as it looks? A community hospital experience. Crit Care Med. 2011;39:2413-2418.

Clinical Shorts

Pancreaticojejunostomy is superior to endoscopy as treatment for advanced chronic pancreatitis

In this small prospective trial of 31 patients with advanced chronic calcific pancreatitis, patients who initially underwent a pancreaticojejunostomy had less pain and required fewer re-interventions than patients who had endoscopic treatment during a median follow-up of six years.

Citation: Cahen DL, Gouma DJ, et al. Long-term outcomes of endoscopic vs surgical drainage of the pancreatic duct in patients with chronic pancreatitis. Gastroenterology. 2011;141:1690-1695.

Routine surveillance for patients with Barrett’s esophagus called into question

In this nationwide cohort of patients in Denmark, the annual risk of developing adenocarcinoma in patients with Barrett’s esophagus was 0.12%, markedly lower than the previously reported estimate of 0.5% upon which screening guidelines are based.

Citation: Hvid-Jensen F, Pedersen L, Funch-Jensen P, et al. Incidence of adenocarcinoma among patients with Barrett’s esophagus. N Engl J Med. 2011;365:1375-1383.

Consider hepatitis E testing in suspected drug-induced liver injury

A serologic survey of U.S. patients with acute liver injury attributed to drugs found that 16% had antibodies to hepatitis E virus and 3% had serologic evidence of acute hepatitis E infection.

Citation: Davern TJ, Chalasani N, Fontana RJ, et al. Acute hepatitis E infection accounts for some cases of suspected drug-induced liver injury. Gastroenterology. 2011;141:1665-1672.

Pressure redistribution mattresses reduce pressure ulcers

A cost-effectiveness analysis supported using pressure redistribution mattresses to prevent pressure ulcers in long-term care residents; additional study is needed to determine the cost-effectiveness of emollients and cleansers.

Citation: Pham B, Stern A, Chen W, et al. Preventing pressure ulcers in long-term care. Arch Intern Med. 2011;171:1839-1847.

Screening chest x-rays still do not prevent death from lung cancer

A randomized controlled trial of annual screening chest radiography enrolling nearly 155,000 patients, 52% of whom were current or former smokers, yielded no reduction in mortality from lung cancer.

Citation: Oken MM, Hocking WG, Kvale PA, et al. Screening by chest radiograph and lung cancer mortality. JAMA. 2011;306:1865-1873.

Abnormal QT-interval duration is associated with increased mortality

A cross-sectional study found that shortened or prolonged QT-interval duration, even if still within the reference range, is associated with increased mortality risk.

Citation: Zhang Y, Post WS, Dalal D, Blasco-Colmenares E, Tomaselli GF, Guallar E. QT-interval duration and mortality rate. Arch Intern Med. 2011;171:1727-1733.

Quality Care for COPD, Secondary Stroke Prevention, Treat Classic CTA

Factors Influencing the Treatment of COPD

Lindenauer PK, Pekow P, Gao S, et al. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144:894-903.

Background

Chronic obstructive pulmonary disease (COPD) is the fourth leading cause of death in the United States, resulting in more than $18 billion in annual health costs. Acute exacerbations of COPD can lead to respiratory compromise and are one of the 10 leading causes of hospitalization in the United States.

Hospitalists currently have evidence-based guidelines available that recommend therapies for patients with acute exacerbations of COPD. This study was designed to evaluate the practice patterns in the United States and to evaluate the quality of care provided to hospitalized patients based on comparisons with these published guidelines. The authors did not report any conflicts of interest, and this work was performed without external grant support.

Methods

Using administrative data from the 360 hospitals that participate in Perspective, a database developed for measuring healthcare quality and utilization, the authors performed a retrospective cohort study. Patients hospitalized for a primary diagnosis of acute exacerbation of COPD were chosen. Patients with pneumonia were specifically excluded. The outcomes of interest included adherence to the diagnostic and therapeutic recommendations of the joint American College of Physicians and American College of Chest Physicians evidence-based COPD guideline, published in 2001.

Results

Of the 69,820 patients included in the analysis, 33% received “ideal care,” defined as all of the recommended care and none of the non-beneficial interventions. Specific results included varied utilization of recommended care: 95% had chest radiography, 91% received supplemental oxygen, 97% had bronchodilators, 85% were given systemic steroids, and 85% received antibiotics.

Overall, 45% of patients received at least one non-beneficial intervention specified in the guidelines: 24% were treated with methylxanthines, 14% underwent sputum testing, 12% had acute spirometry, 6% received chest physiotherapy, and 2% were given mucolytics.

Older patients and women were more likely to receive ideal care as defined, but hospitals with a higher annual volume of COPD cases were not associated with improved performance in this analysis.

Conclusions

Given a widely accepted evidence-based practice guideline as a benchmark, significant variation exists across hospitals in the quality of care for acute exacerbations of COPD. Opportunities exist to improve care, in particular by increasing the use of systemic corticosteroids and antibiotic therapy and by reducing the use of diagnostic and therapeutic interventions that are not recommended and are potentially harmful.

Commentary

COPD management in the acute inpatient setting is emerging as a focus for policymakers, and this study suggests that significant opportunities exist for inpatient physicians to reduce variation in practice and apply an evidence-based approach to the treatment of acute exacerbations of COPD. This study is limited by its use of administrative data, its inability to use clinical data to determine appropriate care processes for individual patients, and its retrospective design.

As we move toward external quality metrics for the care of patients with acute exacerbations of COPD, further prospective studies evaluating clinical outcomes of interest, including mortality and readmission rates, are needed to determine the effects of adherence to ideal or recommended care for acute exacerbations of COPD.1-3

References

  1. Snow V, Lascher S, Mottur-Pilson C, et al. Evidence base for management of acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2001 Apr 3;134(7):595-599.
  2. American Thoracic Society. Standards for the diagnosis and care of patients with chronic obstructive pulmonary disease (COPD) and asthma. This official statement of the American Thoracic Society was adopted by the ATS Board of Directors, November 1986. Am Rev Respir Dis. 1987 Jul;136(1):225-244.
  3. Agency for Healthcare Research and Quality. Management of Acute Exacerbations of Chronic Obstructive Pulmonary Disease. Rockville, Md.: Agency for Healthcare Research and Quality; 2000.

The Role of Dipyridamole in the Secondary Prevention of Stroke

ESPRIT Study Group. Aspirin plus dipyridamole versus aspirin alone after cerebral ischaemia of arterial origin (ESPRIT): randomised controlled trial. Lancet. 2006 May 20;367(9523):1665-1673.

Background

To date, trials of aspirin versus aspirin combined with dipyridamole for secondary prevention of ischemic stroke have yielded inconsistent results. Four early, smaller studies produced non-significant results, in contrast to the statistically significant relative risk reduction seen with the addition of dipyridamole to aspirin in the European Stroke Prevention Study 2 (ESPS-2).1-2

Methods

The European/Australian Stroke Prevention in Reversible Ischaemia Trial (ESPRIT) study group conducted a prospective randomized controlled trial of 2,739 patients with transient ischemic attacks or minor ischemic stroke of presumed arterial origin who received aspirin (30-325 mg daily) with or without dipyridamole (200 mg twice daily) as secondary prevention. The primary outcome was a composite of death from vascular causes, nonfatal stroke, nonfatal myocardial infarction, or major bleeding complication. Mean follow-up of patients enrolled was 3.5 years.

Results

In an intention-to-treat analysis, the primary combined endpoint occurred in 16% (216) of the patients on aspirin alone (median aspirin dose was 75 mg in both groups) compared with 13% (173) of the patients on aspirin plus dipyridamole. This result was statistically significant, with an absolute risk reduction of 1% per year. As noted in other trials, patients on dipyridamole discontinued their study medication more frequently than patients on aspirin alone, mostly due to headache.

Conclusions

The results of this trial, taken in the context of previously published data, support the combination of aspirin plus dipyridamole over aspirin alone for the secondary prevention of ischemic stroke of presumed arterial origin. Addition of these data to the previous meta-analysis of trials resulted in a statistically significant risk ratio for the composite endpoint of 0.82 (95% confidence interval, 0.74-0.91).1

Commentary

Ischemic stroke and transient ischemic attacks remain a challenge to manage medically and are justifiably feared complications for many patients, resulting in significant morbidity and mortality. Prior studies of secondary prevention with aspirin therapy have demonstrated only a modest reduction in vascular complications in these patients.3-4

The results of this trial are consistent with data from the Second European Stroke Prevention Study, and in combination these data confirm that the addition of dipyridamole offers significant benefit for patients who can tolerate it.2 The magnitude of the effect corresponds to a number needed to treat of 100 patients for one year to prevent one vascular death, stroke, or myocardial infarction. Given the clinical significance of these outcomes, many patients may prefer a trial of combination therapy.
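
As a check of the arithmetic, the quoted number needed to treat follows directly from the absolute risk reduction of 1% per year reported above:

\[
\mathrm{NNT} = \frac{1}{\mathrm{ARR}} = \frac{1}{0.01 \text{ per year}} = 100 \text{ patient-years of treatment}
\]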

References

  1. Antithrombotic Trialists’ Collaboration. Collaborative meta-analysis of randomised trials of antiplatelet therapy for prevention of death, myocardial infarction, and stroke in high risk patients. BMJ. 2002 Jan 12;324(7329):71-86.
  2. Diener HC, Cunha L, Forbes C, et al. European Stroke Prevention Study. Dipyridamole and acetylsalicylic acid in the secondary prevention of stroke. J Neurol Sci. 1996;143:1-13.
  3. Warlow C. Secondary prevention of stroke. Lancet. 1992;339:724-727.
  4. Algra A, van Gijn J. Cumulative meta-analysis of aspirin efficacy after cerebral ischaemia of arterial origin. J Neurol Neurosurg Psychiatry. 1999 Feb;66(2):255.

The Effectiveness of CTA in Diagnosing Acute Pulmonary Embolism

Stein PD, Fowler SE, Goodman LR, et al. Multidetector computed tomography for acute pulmonary embolism. N Engl J Med. 2006 Jun 1;354(22):2317-2327.

Background

The Prospective Investigation of Pulmonary Embolism Diagnosis II (PIOPED II) trial was designed to answer questions about the accuracy of contrast-enhanced multidetector computed tomographic angiography (CTA). Recent studies of the use of single-row or multidetector CTA alone have suggested a low incidence of pulmonary embolism in follow-up of untreated patients with normal findings on CTA.

The specific goals of this study were to determine the ability of multidetector CTA to rule out or detect pulmonary embolism, and to evaluate whether the addition of computed tomographic venography (CTV) improves the diagnostic accuracy of CTA.

Methods

Using a technique similar to PIOPED I, the investigators performed a prospective, multi-center trial using a composite reference standard to confirm the diagnosis of pulmonary embolism. Once again, for ethical reasons, the use of pulmonary artery digital-subtraction angiography was limited to patients whose diagnosis could neither be confirmed nor ruled out by less invasive tests. In contrast to PIOPED I, a clinical scoring system was used to assess the clinical probability of pulmonary embolism. Central readings were performed on all imaging studies except for venous ultrasonography.

Results

Of the 7,284 patients screened for the study, 3,262 were eligible and 1,090 were enrolled. Of those, 824 patients had both a completed CTA study and a reference-standard diagnosis available for analysis. In 51 patients, the quality of the CTA was not suitable for interpretation, and these patients were excluded from the subsequent analysis. Pulmonary embolism was diagnosed in 192 patients.

CTA was found to have a sensitivity of 83% and a specificity of 96%, yielding a likelihood ratio for a positive multidetector CTA test of 19.6 (95% confidence interval, 13.3 to 29.0), while the likelihood ratio for a negative test was 0.18 (95% confidence interval, 0.13 to 0.24). The quality of results on CTA-CTV was not adequate for interpretation in 87 patients; when these patients were excluded from analysis, the sensitivity was 90% with a specificity of 95%, yielding likelihood ratios of 16.5 (95% confidence interval, 11.6 to 23.5) for a positive test and 0.11 (95% confidence interval, 0.07 to 0.16) for a negative test.
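
To illustrate how these likelihood ratios shift clinical probability, suppose a patient has a 20% pre-test probability of pulmonary embolism (an assumed figure for illustration, not a value from the study). Applying Bayes’ rule in odds form:

\[
\text{pre-test odds} = \frac{0.20}{0.80} = 0.25, \qquad \text{post-test odds} = 0.25 \times 19.6 = 4.9, \qquad \text{post-test probability} = \frac{4.9}{5.9} \approx 83\%
\]

A negative study in the same patient yields post-test odds of \(0.25 \times 0.18 = 0.045\), a probability of about 4%.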

Conclusions

Multidetector CTA and CTA-CTV perform well when the results of these tests are concordant with pre-test clinical probabilities of pulmonary embolism. CTA-CTV offers slightly increased sensitivity compared with CTA alone, with no significant difference in specificity. If the results of CTA or CTA-CTV are inconsistent with the clinical probability of pulmonary embolism, additional diagnostic testing is indicated.

Commentary

CTA has been used widely, and in many centers has largely replaced other diagnostic tests for pulmonary embolism. This well-done study incorporated recent advances in technology with multidetector CTA-CTV, along with a clinical prediction rule to better estimate pre-test probabilities of pulmonary embolism.2 It is important to recognize that 266 of the 1,090 patients enrolled were not included in the calculations of sensitivity and specificity for CTA-CTV because they did not have interpretable test results.

Although the specificity of both CTA and the CTA-CTV combination was high, the sensitivity was not sufficient to identify all cases of pulmonary embolism. This result contrasts with recent outcomes studies of CTA, in which low rates of venous thromboembolism were seen in follow-up of patients with negative multidetector CTA.3,4 Although multidetector CTA has a higher sensitivity than single-slice technology, it may still miss small subsegmental thrombi that might be detected by other diagnostic tests (ventilation-perfusion scintigraphy and/or pulmonary digital-subtraction angiography).

An important take-home message from this study is, once again, the value of established clinical prediction rules for venous thrombosis and pulmonary embolism (such as the Wells clinical model).2 As with most diagnostic tests at our disposal, when clinical judgment conflicts with test results, as in the case of a high likelihood of a potentially fatal disease like pulmonary embolism despite a normal CTA result, additional diagnostic testing is necessary.

References

  1. The PIOPED Investigators. Value of the ventilation/perfusion scan in acute pulmonary embolism: results of the prospective investigation of pulmonary embolism diagnosis (PIOPED). JAMA. 1990;263:2753-2759.
  2. Wells PS, Anderson DR, Rodger M, et al. Excluding pulmonary embolism at the bedside without diagnostic imaging: management of patients with suspected pulmonary embolism presenting to the emergency department by using a simple clinical model and d-dimer. Ann Intern Med. 2001 Jul 17;135(2):98-107.
  3. Perrier A, Roy PM, Sanchez O, et al. Multidetector-row computed tomography in suspected pulmonary embolism. N Engl J Med. 2005 Apr;352(17):1760-1768.
  4. van Belle A, Buller HR, Huisman MV, et al. Effectiveness of managing suspected pulmonary embolism using an algorithm combining clinical probability, D-dimer testing, and computed tomography. JAMA. 2006 Jan 11;295(2):172-179.

Classic Article:

PIOPED Investigators

The PIOPED Investigators. Value of ventilation/perfusion scan in acute pulmonary embolism: results of the prospective investigation of pulmonary embolism diagnosis (PIOPED). JAMA. 1990;263:2753-2759.

Background

The risk of untreated pulmonary embolism requires either the diagnosis or the exclusion of this diagnosis when clinical suspicion exists. The reference test for pulmonary embolism, standard pulmonary angiography, is invasive and expensive, and carries with it a measurable procedural risk.

Non-invasive diagnostic tests, including ventilation/perfusion (V/Q) scintigraphy, have been used to detect perfusion defects consistent with pulmonary embolism, though the performance characteristics of this test were not well known prior to 1990. This study was designed to evaluate the sensitivity and specificity of ventilation/perfusion lung scans for pulmonary embolism in the acute setting.

Methods

This prospective, multi-center study evaluated V/Q scintigraphy in a random sample of 931 patients. A composite reference standard was used because only 755 of these patients underwent both scintigraphy and pulmonary angiography. Clinical follow-up and subsequent diagnostic testing were employed for untreated patients with low clinical probabilities of pulmonary embolism who did not undergo angiography. The clinical probability of pulmonary embolism was assessed on the basis of the clinician’s judgment, without systematic prediction rules.

Results

Almost all patients with pulmonary embolism had abnormal ventilation/perfusion lung scans of high, intermediate, or low probability. Unfortunately, most patients without pulmonary embolism also had abnormal studies, limiting the utility of this test. Clinical follow-up and angiography revealed that pulmonary embolism occurred among 12% of patients with low-probability scans.

Conclusions

V/Q scintigraphy establishes or excludes the diagnosis of pulmonary embolism in only a minority of patients: those in whom clinical suspicion is concordant with the diagnostic test findings. The likelihood of pulmonary embolism in patients with a high pre-test probability and a high-probability scan is 95%, while in low-probability patients with a low-probability or normal scan the probability is 4% or 2%, respectively.

Commentary

This original PIOPED study established the diagnostic characteristics of V/Q scintigraphy and demonstrated, for the first time, the role of clinical assessment and prior probability in a diagnostic strategy for pulmonary embolism. Although subsequent studies have significantly advanced our knowledge of clinical prediction and diagnostic strategies in venous thromboembolism, the first PIOPED study continues to serve as an example of a high-quality, multi-center diagnostic test study utilizing a composite reference standard in a difficult-to-study disease. Unfortunately, its results demonstrated that V/Q scintigraphy performs well for only a minority of patients. The majority of patients (72%) had combinations of clinical probability and ventilation/perfusion scan results that yielded post-test probabilities of 15% to 86%, leaving, in many cases, enough diagnostic uncertainty to warrant additional testing.
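
The breadth of that indeterminate zone can be illustrated with the same odds arithmetic used earlier. Assuming, purely for illustration, a positive likelihood ratio of roughly 18 for a high-probability scan (a commonly quoted estimate derived from PIOPED data, not a figure stated in this summary), a patient with a low 10% pre-test probability who nonetheless has a high-probability scan reaches only:

\[
\frac{0.10}{0.90} \times 18 = 2.0 \quad \Rightarrow \quad \text{post-test probability} = \frac{2.0}{3.0} \approx 67\%
\]

a result high enough to worry but too low to confirm the diagnosis: exactly the discordant situation in which additional testing is required.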

Issue
The Hospitalist - 2009(06)
Publications
Topics
Sections

Factors Influencing the Treatment of COPD

Lindenauer PK, Pekow P, Gao S, et al. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144:894-903.

Background

Chronic obstructive pulmonary disease (COPD) is the fourth leading cause of death in the United States, resulting in more than $18 billion in annual health costs. Acute exacerbations of COPD can lead to respiratory compromise and are one of the 10 leading causes of hospitalization in the United States.

Hospitalists currently have evidence-based guidelines available that recommend therapies for patients with acute exacerbations of COPD. This study was designed to evaluate the practice patterns in the United States and to evaluate the quality of care provided to hospitalized patients based on comparisons with these published guidelines. The authors did not report any conflicts of interest, and this work was performed without external grant support.

Methods

Using administrative data from the 360 hospitals that participate in Perspective, a database developed for measuring healthcare quality and utilization, the authors performed a retrospective cohort study. Patients hospitalized for a primary diagnosis of acute exacerbation of COPD were chosen. Patients with pneumonia were specifically excluded. The outcomes of interest included adherence to the diagnostic and therapeutic recommendations of the joint American College of Physicians and American College of Chest Physicians evidence-based COPD guideline, published in 2001.

Results

Of the 69,820 patients included in the analysis, 33% received “ideal care,” defined as all of the recommended care and none of the non-beneficial interventions. Specific results included varied utilization of recommended care: 95% had chest radiography, 91% received supplemental oxygen, 97% had bronchodilators, 85% were given systemic steroids, and 85% received antibiotics.

Overall, 45% of patients received at least one non-beneficial intervention specified in the guidelines: 24% were treated with methylxanthines, 14% underwent sputum testing, 12% had acute spirometry, 6% received chest physiotherapy, and 2% were given mucolytics.

Older patients and women were more likely to receive ideal care as defined, but hospitals with a higher annual volume of COPD cases were not associated with improved performance in this analysis.

Conclusions

Given a widely accepted evidence-based practice guideline as a benchmark, significant variation exists across hospitals in the quality of care for acute exacerbations of COPD. Opportunities exist to improve the quality of care, in particular by increasing the use of systemic corticosteroids and antibiotic therapy and reducing the utilization of many diagnostic and therapeutic interventions that are not only not recommended but are also potentially harmful.

Commentary

COPD management in the acute inpatient setting is on the horizon as a focus of policymakers, and this study suggests that significant opportunities exist for inpatient physicians to reduce variation in practice and utilize an evidence-based approach to the treatment of acute exacerbations of COPD. This study is limited by its use of administrative data, its inability to use clinical data to best determine appropriate care processes for individual patients, and its retrospective design.

As we move toward external quality metrics for the care of patients with acute exacerbations of COPD, further prospective studies evaluating clinical outcomes of interest, including mortality and readmission rates, are needed to determine the effects of adherence to ideal or recommended care for acute exacerbations of COPD.1-3

References

  1. Snow V, Lascher S, Mottur-Pilson C, et al. Evidence base for management of acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2001 Apr 3;134(7):595-599.
  2. American Thoracic Society. Standards for the diagnosis and care of patients with chronic obstructive pulmonary disease (COPD) and asthma. This official statement of the American Thoracic Society was adopted by the ATS Board of Directors, November 1986. Am Rev Respir Dis. 1987 Jul;136(1):225-244.
  3. Agency for Healthcare Research and Quality. Management of Acute Exacerbations of Chronic Obstructive Pulmonary Disease. Rockville, Md.: Agency for Healthcare Research and Quality; 2000.
 

 

The Role of Dipyridamole in the Secondary Prevention of Stroke

ESPRIT Study Group. Aspirin plus dipyridamole versus aspirin alone after cerebral ischaemia of arterial origin (ESPRIT): randomized controlled trial. Lancet. 2006 May 20;367(9623):1665-1673.

Background

To date, studies have resulted in inconsistent results in trials of aspirin versus aspirin in combination with dipyridamole for secondary prevention of ischemic stroke. Four early, smaller studies have yielded non-significant results, in contrast to the statistically significant relative risk reduction seen with the addition of dipyridamole to aspirin in the European Stroke Prevention Study 2 (ESPS-2).1-2

Methods

The European/Australian Stroke Prevention in Reversible Ischaemia Trial (ESPRIT) study group conducted a prospective randomized controlled trial of 2,763 patients with transient ischemic attacks or minor ischemic stroke of presumed arterial origin who received aspirin (30-325 mg daily) with or without dipyridamole (200 mg twice daily) as secondary prevention. The primary outcome for this study was a composite of death from vascular causes, nonfatal stroke, nonfatal myocardial infarction, or major bleeding complication. Mean follow-up of patients enrolled was 3.5 years.

Results

In an intention-to-treat analysis, the primary combined endpoint occurred in 16% (216) of the patients on aspirin alone (median aspirin dose was 75 mg in both groups) compared with 13% (173) of the patients on aspirin plus dipyridamole. This result was statistically significant, with an absolute risk reduction of 1% per year. As noted in other trials, patients on dipyridamole discontinued their study medication more frequently than patients on aspirin alone, mostly due to headache.

Conclusions

The results of this trial, taken in the context of previously published data, support the combination of aspirin plus dipyridamole over aspirin alone for the secondary prevention of ischemic stroke of presumed arterial origin. Addition of these data to the previous meta-analysis of trials resulted in a statistically significant risk ratio for the composite endpoint of 0.82 (95% confidence interval, 0.74-0.91).1

Commentary

Ischemic stroke and transient ischemic attacks remain a challenge to effectively manage medically and are appropriately greatly feared health complications for many patients, resulting in significant morbidity and mortality. Prior studies of secondary prevention with aspirin therapy have demonstrated only a modest reduction in vascular complications in these patients.3-4

The results of this trial are consistent with data from the Second European Stroke Prevention Study, and in combination, these data confirm that the addition of dipyridamole for patients who can tolerate it offers significant benefit.2 The magnitude of the effect results in a number needed to treat of 100 patients for one year to prevent one vascular death, stroke, or myocardial infarction. Given the clinical significance of these outcomes, many patients may prefer a trial on combination therapy.

References

  1. Antithrombotic Trialists’ Collaboration. Collaborative meta-analysis of randomised trials of antiplatelet therapy for prevention of death, myocardial infarction, and stroke in high risk patients. BMJ. 2002 Jan 12;324(7329):71-86.
  2. Diener HC, Cunha L, Forbes C, et al. European Stroke Prevention Study. Dipyridamole and acetylsalicylic acid in the secondary prevention of stroke. J Neurol Sci. 1996;143:1-13.
  3. Warlow C. Secondary prevention of stroke. Lancet. 1992;339:724-727.
  4. Algra A, van Gijn J. Cumulative meta-analysis of aspirin efficacy after cerebral ischaemia of arterial origin. J Neurol Neurosurg Psychiatry. 1999 Feb;66(2):255.

The Effectiveness of CTA in Diagnosing Acute Pulmonary Embolism

Stein PD, Fowler SE, Goodman LR, et al. Multidetector computed tomography for acute pulmonary embolism. N Engl J Med. 2006 Jun 1;354(22):2317-2327.

Background

The Prospective Investigation of Pulmonary Embolism Diagnosis II (PIOPED II) trial was designed to answer questions about the accuracy of contrast-enhanced multidetector computed tomographic angiography (CTA). Recent studies of the use of single-row or multidetector CTA alone have suggested a low incidence of pulmonary embolism in follow-up of untreated patients with normal findings on CTA.

 

 

The specific goals of this study were to determine the ability of multidetector CTA to rule out or detect pulmonary embolism, and to evaluate whether the addition of computed tomographic venography (CTV) improves the diagnostic accuracy of CTA.

Methods

Using a technique similar to PIOPED I, the investigators performed a prospective, multi-center trial using a composite reference standard to confirm the diagnosis of pulmonary embolism. Once again, for ethical reasons, the use of pulmonary artery digital-subtraction angiography was limited to patients whose diagnosis could neither be confirmed nor ruled out by less invasive tests. In contrast to PIOPED I, a clinical scoring system was used to assess the clinical probability of pulmonary embolism. Central readings were performed on all imaging studies except for venous ultrasonography.

Results

Of the 7,284 patients screened for the study, 3,262 were eligible, and 1,090 were enrolled. Of those, 824 patients received a completed CTA study and a reference standard for analysis. In 51 patients, the quality of the CTA was not suitable for interpretation, and these patients were excluded from the subsequent analysis. Pulmonary embolism was diagnosed in192 patients.

CTA was found to have a sensitivity of 83% and a specificity of 96%, yielding a likelihood ratio for a positive multidetector CTA test of 19.6 (95% confidence interval, 13.3 to 29.0), while the likelihood ratio for a negative test was 0.18 (95% confidence interval, 0.13 to 0.24). The quality of results on CTA-CTV was not adequate for interpretation in 87 patients; when these patients were excluded from analysis, the sensitivity was 90% with a specificity of 95%, yielding likelihood ratios of 16.5 (95% confidence interval, 11.6 to 23.5) for a positive test and 0.11 (95% confidence interval, 0.07 to 0.16) for a negative test.

Conclusions

Multidetector CTA and CTA-CTV perform well when the results of these tests are concordant with pre-test clinical probabilities of pulmonary embolism. CTA-CTV offers slightly increased sensitivity compared with CTA alone, with no significant difference in specificity. If the results of CTA or CTA-CTV are inconsistent with the clinical probability of pulmonary embolism, additional diagnostic testing is indicated.

Commentary

CTA has been used widely, and in many centers has largely replaced other diagnostic tests for pulmonary embolism. This well-done study incorporated recent advances in technology with multidetector CTA-CTV, along with a clinical prediction rule to better estimate pre-test probabilities of pulmonary embolism.2 It is important to recognize that 266 of the 1,090 patients enrolled were not included in the calculations of sensitivity and specificity for CTA-CTV because they did not have interpretable test results.

Although the specificity of both CTA and the CTA-CTV combination were high, the sensitivity was not sufficient to identify all cases of pulmonary embolism. This result contrasts to the recent outcomes studies of CTA, in which low rates of venous thromboembolism were seen in follow-up of patients with negative multidetector CTA.3,4 Although multidetector CTA has a higher sensitivity than single-slice technology, this test may still miss small subsegmental thrombi that might be detected using other diagnostic tests (ventilation-perfusion scintigraphy and/or pulmonary digital-subtraction angiography).

An important take-home message from this study is to recognize once again the importance of utilizing established clinical prediction rules for venous thrombosis and pulmonary embolism (such as the Wells clinical model).2 As with the majority of diagnostic tests at our disposal, when our clinical judgment is in contrast with test results, as in the case of a high likelihood of a potentially fatal disease like pulmonary embolism with a normal CTA result, additional diagnostic testing is necessary.

References

  1. The PIOPED Investigators. Value of the ventilation/perfusion scan in acute pulmonary embolism: results of the prospective investigation of pulmonary embolism diagnosis (PIOPED). JAMA. 1990;263:2753-2759.
  2. Wells PS, Anderson DR, Rodger M, et al. Excluding pulmonary embolism at the bedside without diagnostic imaging: management of patients with suspected pulmonary embolism presenting to the emergency department by using a simple clinical model and d-dimer. Ann Intern Med. 2001 Jul 17;135(2):98-107.
  3. Perrier A, Roy PM, Sanchez O, et al. Multidetector-row computed tomography in suspected pulmonary embolism. N Engl J Med. 2005 Apr;352(17):1760-1768.
  4. van Belle A, Buller HR, Huisman MV, et al. Effectiveness of managing suspected pulmonary embolism using an algorithm combining clinical probability, D-dimer testing, and computed tomography. JAMA. 2006 Jan 11;295(2):172-179.
 

 

Classic Article:

PIOPED Investigators

The PIOPED Investigators. Value of ventilation/perfusion scan in acute pulmonary embolism: results of the prospective investigation of pulmonary embolism diagnosis (PIOPED). JAMA. 1990;263:2753-2759.

Background

The risk of untreated pulmonary embolism requires either the diagnosis or the exclusion of this diagnosis when clinical suspicion exists. The reference test for pulmonary embolism, standard pulmonary angiography, is invasive and expensive, and carries with it a measurable procedural risk.

Non-invasive diagnostic tests, including ventilation/perfusion (V/Q) scintigraphy, have been used to detect perfusion defects consistent with pulmonary embolism, though the performance characteristics of this diagnostic test were not well known prior to 1990. This study was designed to evaluate the sensitivity and specificities of ventilation/perfusion lung scans for pulmonary embolism in the acute setting.

Methods

This prospective, multi-center study evaluated V/Q scintigraphy on a random sample of 931 patients. A composite reference standard was used because only 755 patients underwent scintigraphy and pulmonary angiography. Clinical follow-up and subsequent diagnostic testing were employed in untreated patients with low clinical probabilities of pulmonary embolism who did not undergo angiography. Clinical assessment of the probability of pulmonary embolism was determined on the basis of the clinician’s judgment, without systematic prediction rules.

Results

Almost all patients with pulmonary embolism had abnormal ventilation/perfusion lung scans of high, intermediate, or low probability. Unfortunately, most patients without pulmonary embolism also had abnormal studies, limiting the utility of this test. Clinical follow-up and angiography revealed that pulmonary embolism occurred among 12% of patients with low-probability scans.

Conclusions

V/Q scintigraphy is useful in establishing or excluding the diagnosis of pulmonary embolism in only a minority of patients, where clinical suspicion of pulmonary embolism is concordant with the diagnostic test findings. The likelihood of pulmonary embolism in patients with a high pre-test probability of pulmonary embolism and a high probability scan is 95%, while in low probability patients with a low probability or normal scan the probability is 4% or 2%, respectively.

Commentary

This original PIOPED study established the diagnostic characteristics of V/Q scintigraphy and demonstrated, for the first time, evidence of the role of clinical assessment and prior probability in a diagnostic strategy for pulmonary embolism. Although subsequent studies have significantly advanced our knowledge of clinical prediction and diagnostic strategies in venous thromboembolism, the first PIOPED study continues to serve as an example of a high-quality, multi-center diagnostic test study utilizing a composite reference standard in a difficult-to-study disease. Unfortunately, the results of this study demonstrated that V/Q scintigraphy performs well for only a minority of patients. The majority of patients (72%) had clinical probabilities of pulmonary embolism and ventilation/perfusion scan results, which yielded post-test probabilities of 15-86%, leaving, in many cases, enough remaining diagnostic uncertainty to warrant additional testing.—TO TH

Factors Influencing the Treatment of COPD

Lindenauer PK, Pekow P, Gao S, et al. Quality of care for patients hospitalized for acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2006;144:894-903.

Background

Chronic obstructive pulmonary disease (COPD) is the fourth leading cause of death in the United States, resulting in more than $18 billion in annual health costs. Acute exacerbations of COPD can lead to respiratory compromise and are one of the 10 leading causes of hospitalization in the United States.

Hospitalists currently have evidence-based guidelines available that recommend therapies for patients with acute exacerbations of COPD. This study was designed to evaluate the practice patterns in the United States and to evaluate the quality of care provided to hospitalized patients based on comparisons with these published guidelines. The authors did not report any conflicts of interest, and this work was performed without external grant support.

Methods

Using administrative data from the 360 hospitals that participate in Perspective, a database developed for measuring healthcare quality and utilization, the authors performed a retrospective cohort study. Patients hospitalized for a primary diagnosis of acute exacerbation of COPD were chosen. Patients with pneumonia were specifically excluded. The outcomes of interest included adherence to the diagnostic and therapeutic recommendations of the joint American College of Physicians and American College of Chest Physicians evidence-based COPD guideline, published in 2001.

Results

Of the 69,820 patients included in the analysis, 33% received “ideal care,” defined as all of the recommended care and none of the non-beneficial interventions. Specific results included varied utilization of recommended care: 95% had chest radiography, 91% received supplemental oxygen, 97% had bronchodilators, 85% were given systemic steroids, and 85% received antibiotics.

Overall, 45% of patients received at least one non-beneficial intervention specified in the guidelines: 24% were treated with methylxanthines, 14% underwent sputum testing, 12% had acute spirometry, 6% received chest physiotherapy, and 2% were given mucolytics.

Older patients and women were more likely to receive ideal care as defined, but hospitals with a higher annual volume of COPD cases were not associated with improved performance in this analysis.

Conclusions

Given a widely accepted evidence-based practice guideline as a benchmark, significant variation exists across hospitals in the quality of care for acute exacerbations of COPD. Opportunities exist to improve the quality of care, in particular by increasing the use of systemic corticosteroids and antibiotic therapy and reducing the utilization of many diagnostic and therapeutic interventions that are not only not recommended but are also potentially harmful.

Commentary

COPD management in the acute inpatient setting is on the horizon as a focus of policymakers, and this study suggests that significant opportunities exist for inpatient physicians to reduce variation in practice and utilize an evidence-based approach to the treatment of acute exacerbations of COPD. This study is limited by its use of administrative data, its inability to use clinical data to best determine appropriate care processes for individual patients, and its retrospective design.

As we move toward external quality metrics for the care of patients with acute exacerbations of COPD, further prospective studies evaluating clinical outcomes of interest, including mortality and readmission rates, are needed to determine the effects of adherence to ideal or recommended care for acute exacerbations of COPD.1-3

References

  1. Snow V, Lascher S, Mottur-Pilson C, et al. Evidence base for management of acute exacerbations of chronic obstructive pulmonary disease. Ann Intern Med. 2001 Apr 3;134(7):595-599.
  2. American Thoracic Society. Standards for the diagnosis and care of patients with chronic obstructive pulmonary disease (COPD) and asthma. This official statement of the American Thoracic Society was adopted by the ATS Board of Directors, November 1986. Am Rev Respir Dis. 1987 Jul;136(1):225-244.
  3. Agency for Healthcare Research and Quality. Management of Acute Exacerbations of Chronic Obstructive Pulmonary Disease. Rockville, Md.: Agency for Healthcare Research and Quality; 2000.
 

 

The Role of Dipyridamole in the Secondary Prevention of Stroke

ESPRIT Study Group. Aspirin plus dipyridamole versus aspirin alone after cerebral ischaemia of arterial origin (ESPRIT): randomized controlled trial. Lancet. 2006 May 20;367(9623):1665-1673.

Background

To date, studies have resulted in inconsistent results in trials of aspirin versus aspirin in combination with dipyridamole for secondary prevention of ischemic stroke. Four early, smaller studies have yielded non-significant results, in contrast to the statistically significant relative risk reduction seen with the addition of dipyridamole to aspirin in the European Stroke Prevention Study 2 (ESPS-2).1-2

Methods

The European/Australian Stroke Prevention in Reversible Ischaemia Trial (ESPRIT) study group conducted a prospective randomized controlled trial of 2,763 patients with transient ischemic attacks or minor ischemic stroke of presumed arterial origin who received aspirin (30-325 mg daily) with or without dipyridamole (200 mg twice daily) as secondary prevention. The primary outcome for this study was a composite of death from vascular causes, nonfatal stroke, nonfatal myocardial infarction, or major bleeding complication. Mean follow-up of patients enrolled was 3.5 years.

Results

In an intention-to-treat analysis, the primary combined endpoint occurred in 16% (216) of the patients on aspirin alone (median aspirin dose was 75 mg in both groups) compared with 13% (173) of the patients on aspirin plus dipyridamole. This result was statistically significant, with an absolute risk reduction of 1% per year. As noted in other trials, patients on dipyridamole discontinued their study medication more frequently than patients on aspirin alone, mostly due to headache.

Conclusions

The results of this trial, taken in the context of previously published data, support the combination of aspirin plus dipyridamole over aspirin alone for the secondary prevention of ischemic stroke of presumed arterial origin. Addition of these data to the previous meta-analysis of trials resulted in a statistically significant risk ratio for the composite endpoint of 0.82 (95% confidence interval, 0.74-0.91).1

Commentary

Ischemic stroke and transient ischemic attacks remain a challenge to effectively manage medically and are appropriately greatly feared health complications for many patients, resulting in significant morbidity and mortality. Prior studies of secondary prevention with aspirin therapy have demonstrated only a modest reduction in vascular complications in these patients.3-4

The results of this trial are consistent with data from the Second European Stroke Prevention Study, and in combination these data confirm that the addition of dipyridamole, for patients who can tolerate it, offers significant benefit.2 The magnitude of the effect corresponds to a number needed to treat of 100 patients for one year to prevent one vascular death, stroke, or myocardial infarction. Given the clinical significance of these outcomes, many patients may prefer a trial of combination therapy.
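
That figure follows directly from the reported absolute risk reduction of roughly 1% per year; a minimal sketch of the arithmetic (Python, for illustration only):

```python
def number_needed_to_treat(absolute_risk_reduction: float) -> float:
    """NNT is the reciprocal of the absolute risk reduction."""
    if absolute_risk_reduction <= 0:
        raise ValueError("absolute risk reduction must be positive")
    return 1.0 / absolute_risk_reduction

# ESPRIT's absolute risk reduction of ~1% per year implies treating
# about 100 patients for one year to prevent one primary-outcome event.
print(round(number_needed_to_treat(0.01)))  # -> 100
```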

References

  1. Antithrombotic Trialists’ Collaboration. Collaborative meta-analysis of randomised trials of antiplatelet therapy for prevention of death, myocardial infarction, and stroke in high risk patients. BMJ. 2002 Jan 12;324(7329):71-86.
  2. Diener HC, Cunha L, Forbes C, et al. European Stroke Prevention Study. 2. Dipyridamole and acetylsalicylic acid in the secondary prevention of stroke. J Neurol Sci. 1996;143(1-2):1-13.
  3. Warlow C. Secondary prevention of stroke. Lancet. 1992;339:724-727.
  4. Algra A, van Gijn J. Cumulative meta-analysis of aspirin efficacy after cerebral ischaemia of arterial origin. J Neurol Neurosurg Psychiatry. 1999 Feb;66(2):255.

The Effectiveness of CTA in Diagnosing Acute Pulmonary Embolism

Stein PD, Fowler SE, Goodman LR, et al. Multidetector computed tomography for acute pulmonary embolism. N Engl J Med. 2006 Jun 1;354(22):2317-2327.

Background

The Prospective Investigation of Pulmonary Embolism Diagnosis II (PIOPED II) trial was designed to answer questions about the accuracy of contrast-enhanced multidetector computed tomographic angiography (CTA). Recent studies of single-detector or multidetector CTA alone have suggested a low incidence of pulmonary embolism on follow-up of untreated patients with normal findings on CTA.

The specific goals of this study were to determine the ability of multidetector CTA to rule out or detect pulmonary embolism, and to evaluate whether the addition of computed tomographic venography (CTV) improves the diagnostic accuracy of CTA.

Methods

Using a technique similar to PIOPED I, the investigators performed a prospective, multi-center trial using a composite reference standard to confirm the diagnosis of pulmonary embolism. Once again, for ethical reasons, the use of pulmonary artery digital-subtraction angiography was limited to patients whose diagnosis could neither be confirmed nor ruled out by less invasive tests. In contrast to PIOPED I, a clinical scoring system was used to assess the clinical probability of pulmonary embolism. Central readings were performed on all imaging studies except for venous ultrasonography.

Results

Of the 7,284 patients screened for the study, 3,262 were eligible, and 1,090 were enrolled. Of those, 824 patients had both a completed CTA study and a reference standard for analysis. In 51 patients, the quality of the CTA was not suitable for interpretation, and these patients were excluded from the subsequent analysis. Pulmonary embolism was diagnosed in 192 patients.

CTA was found to have a sensitivity of 83% and a specificity of 96%, yielding a likelihood ratio for a positive multidetector CTA test of 19.6 (95% confidence interval, 13.3 to 29.0), while the likelihood ratio for a negative test was 0.18 (95% confidence interval, 0.13 to 0.24). The quality of results on CTA-CTV was not adequate for interpretation in 87 patients; when these patients were excluded from analysis, the sensitivity was 90% with a specificity of 95%, yielding likelihood ratios of 16.5 (95% confidence interval, 11.6 to 23.5) for a positive test and 0.11 (95% confidence interval, 0.07 to 0.16) for a negative test.
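
These likelihood ratios follow directly from the reported sensitivity and specificity; a minimal sketch of the calculation (the slight difference from the published LR+ reflects rounding, since the investigators computed their ratios from the raw counts):

```python
def likelihood_ratios(sensitivity: float, specificity: float) -> tuple[float, float]:
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# CTA alone: sensitivity 83%, specificity 96%, as reported above.
lr_pos, lr_neg = likelihood_ratios(0.83, 0.96)
print(f"LR+ ~ {lr_pos:.1f}, LR- ~ {lr_neg:.2f}")  # ~20.8 and ~0.18 vs published 19.6 and 0.18
```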

Conclusions

Multidetector CTA and CTA-CTV perform well when the results of these tests are concordant with pre-test clinical probabilities of pulmonary embolism. CTA-CTV offers slightly increased sensitivity compared with CTA alone, with no significant difference in specificity. If the results of CTA or CTA-CTV are inconsistent with the clinical probability of pulmonary embolism, additional diagnostic testing is indicated.

Commentary

CTA has been used widely, and in many centers has largely replaced other diagnostic tests for pulmonary embolism. This well-done study incorporated recent advances in technology with multidetector CTA-CTV, along with a clinical prediction rule to better estimate pre-test probabilities of pulmonary embolism.2 It is important to recognize that 266 of the 1,090 patients enrolled were not included in the calculations of sensitivity and specificity because they lacked either a completed CTA or a reference-standard diagnosis.

Although the specificity of both CTA and the CTA-CTV combination was high, the sensitivity was not sufficient to identify all cases of pulmonary embolism. This result contrasts with recent outcome studies of CTA, in which low rates of venous thromboembolism were seen in follow-up of patients with negative multidetector CTA.3,4 Although multidetector CTA has a higher sensitivity than single-slice technology, this test may still miss small subsegmental thrombi that might be detected using other diagnostic tests (ventilation-perfusion scintigraphy and/or pulmonary digital-subtraction angiography).

An important take-home message from this study is to recognize once again the importance of utilizing established clinical prediction rules for venous thrombosis and pulmonary embolism (such as the Wells clinical model).2 As with the majority of diagnostic tests at our disposal, when clinical judgment conflicts with test results, for example a high pre-test likelihood of a potentially fatal disease like pulmonary embolism despite a normal CTA result, additional diagnostic testing is necessary.
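
The interplay between pre-test probability and test result can be made concrete with Bayes' rule on the odds scale; a minimal sketch using the CTA likelihood ratios reported above, with an assumed 50% pre-test probability that is illustrative only and not a figure from the study:

```python
def post_test_probability(pre_test_prob: float, likelihood_ratio: float) -> float:
    """Convert probability -> odds, apply the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1.0 + post_odds)

# A patient with an assumed 50% pre-test probability and a negative CTA (LR- = 0.18)
# still has roughly a 15% post-test probability of pulmonary embolism.
print(f"{post_test_probability(0.50, 0.18):.2f}")  # -> 0.15
```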

References

  1. The PIOPED Investigators. Value of the ventilation/perfusion scan in acute pulmonary embolism: results of the prospective investigation of pulmonary embolism diagnosis (PIOPED). JAMA. 1990;263:2753-2759.
  2. Wells PS, Anderson DR, Rodger M, et al. Excluding pulmonary embolism at the bedside without diagnostic imaging: management of patients with suspected pulmonary embolism presenting to the emergency department by using a simple clinical model and d-dimer. Ann Intern Med. 2001 Jul 17;135(2):98-107.
  3. Perrier A, Roy PM, Sanchez O, et al. Multidetector-row computed tomography in suspected pulmonary embolism. N Engl J Med. 2005 Apr;352(17):1760-1768.
  4. van Belle A, Buller HR, Huisman MV, et al. Effectiveness of managing suspected pulmonary embolism using an algorithm combining clinical probability, D-dimer testing, and computed tomography. JAMA. 2006 Jan 11;295(2):172-179.

Classic Article:

PIOPED Investigators

The PIOPED Investigators. Value of ventilation/perfusion scan in acute pulmonary embolism: results of the prospective investigation of pulmonary embolism diagnosis (PIOPED). JAMA. 1990;263:2753-2759.

Background

The risk of untreated pulmonary embolism requires either the diagnosis or the exclusion of this diagnosis when clinical suspicion exists. The reference test for pulmonary embolism, standard pulmonary angiography, is invasive and expensive, and carries with it a measurable procedural risk.

Non-invasive diagnostic tests, including ventilation/perfusion (V/Q) scintigraphy, have been used to detect perfusion defects consistent with pulmonary embolism, though the performance characteristics of this diagnostic test were not well known prior to 1990. This study was designed to evaluate the sensitivity and specificity of ventilation/perfusion lung scans for pulmonary embolism in the acute setting.

Methods

This prospective, multi-center study evaluated V/Q scintigraphy in a random sample of 931 patients. A composite reference standard was used because only 755 patients underwent both scintigraphy and pulmonary angiography. Clinical follow-up and subsequent diagnostic testing were employed in untreated patients with low clinical probabilities of pulmonary embolism who did not undergo angiography. Clinical assessment of the probability of pulmonary embolism was based on the clinician's judgment, without systematic prediction rules.

Results

Almost all patients with pulmonary embolism had abnormal ventilation/perfusion lung scans of high, intermediate, or low probability. Unfortunately, most patients without pulmonary embolism also had abnormal studies, limiting the utility of this test. Clinical follow-up and angiography revealed that pulmonary embolism occurred among 12% of patients with low-probability scans.

Conclusions

V/Q scintigraphy is useful in establishing or excluding the diagnosis of pulmonary embolism in only a minority of patients, those in whom clinical suspicion of pulmonary embolism is concordant with the diagnostic test findings. The likelihood of pulmonary embolism in patients with a high pre-test probability and a high-probability scan is 95%, while in low-probability patients the likelihood is 4% with a low-probability scan and 2% with a normal scan.

Commentary

This original PIOPED study established the diagnostic characteristics of V/Q scintigraphy and demonstrated, for the first time, the role of clinical assessment and prior probability in a diagnostic strategy for pulmonary embolism. Although subsequent studies have significantly advanced our knowledge of clinical prediction and diagnostic strategies in venous thromboembolism, the first PIOPED study continues to serve as an example of a high-quality, multi-center diagnostic test study utilizing a composite reference standard in a difficult-to-study disease. Unfortunately, the results of this study demonstrated that V/Q scintigraphy performs well for only a minority of patients. The majority of patients (72%) had combinations of clinical probability and ventilation/perfusion scan results that yielded post-test probabilities of 15% to 86%, leaving, in many cases, enough remaining diagnostic uncertainty to warrant additional testing. TH

Issue
The Hospitalist - 2009(06)
Publications
Display Headline
Quality Care for COPD, Secondary Stroke Prevention, Treat Classic CTA
Sections

In the Literature

Article Type
Changed
Fri, 09/14/2018 - 12:38
Display Headline
In the Literature

Thrombocytopenia Reaction to Vancomycin

Von Drygalski A, Curtis BR, Bougie DW, et al. Vancomycin-induced immune thrombocytopenia. N Engl J Med. 2007 Mar 1;356(9):904-910.

The use of vancomycin has grown exponentially in the past 20 years.1 Physicians have become increasingly aware of its major side effects, such as red man syndrome, hypersensitivity, neutropenia, and nephrotoxicity. But there have been only a few case reports of thrombocytopenia associated with this drug. This article looked at cases of thrombocytopenia in patients referred for clinical suspicion of vancomycin-induced thrombocytopenia.

From 2001 to 2005, serum samples from several sites were sent to the Platelet and Neutrophil Immunology Laboratory at the BloodCenter of Wisconsin in Milwaukee for vancomycin-dependent antibody testing. Clinical information regarding these patients was obtained from their referring physicians and one of the authors. Platelet-reactive antibodies were detected by flow cytometry.

IgG and IgM vancomycin-dependent antibodies were detected in 34 patients. Platelet counts dropped an average of 93% from pretreatment levels, and the average nadir occurred on day eight. The mean nadir platelet count was 13,600. After vancomycin was discontinued, the platelet count returned to normal in all patients except the three who died. The average time to resolution of thrombocytopenia was 7.5 days.

Unlike other drug-induced thrombocytopenias, these cases of thrombocytopenia associated with vancomycin appear to be more prone to significant hemorrhage. In this group, 34% were found to have had severe hemorrhage, defined in this study as florid petechial hemorrhages, ecchymoses, and oozing from the buccal mucosa. Three patients with renal insufficiency were profoundly thrombocytopenic for a longer duration, presumably due to delayed clearance of vancomycin in this setting.

Based on this study, it appears thrombocytopenia is a significant adverse reaction that can be attributed to vancomycin. Unlike other drug-induced thrombocytopenias, it appears to be associated with a higher likelihood of significant hemorrhage, as well.

Thrombocytopenia is a common occurrence in acutely ill hospitalized patients and has been linked to increased hospital mortality and increased length of stay.2 Many drugs and diseases that hospitalists treat are associated with thrombocytopenia. The indications for use of vancomycin continue to grow with the increasing number of patients with prosthetic devices and intravascular access and the increasing prevalence of MRSA. This study raises awareness of a significant side effect that can be associated with vancomycin.

References

  1. Ena J, Dick RW, Jones RN, et al. The epidemiology of intravenous vancomycin usage in a university hospital: a 10-year study. JAMA. 1993 Feb 3;269(5):598-602. Comment in JAMA. 1993 Sep 22-29;270(12):1426.
  2. Crowther MA, Cook DJ, Meade M, et al. Thrombocytopenia in medical-surgical critically ill patients: prevalence, incidence, and risk factors. J Crit Care. 2005 Dec;20(4):248-253.

Table 1: The Modified Blatchford Risk Score (table available in the original publication)

Can the mBRS Stratify Pts Admitted for Nonvariceal Upper GI Bleeds?

Romagnuolo J, Barkun AN, Enns R, et al. Simple clinical predictors may obviate urgent endoscopy in selected patients with nonvariceal upper gastrointestinal tract bleeding. Arch Intern Med. 2007 Feb 12;167(3):265-270.

Nonvariceal upper gastrointestinal bleeding is one of the top 10 admission diagnoses based on reviews of diagnosis-related groups. Patients with low-risk lesions on endoscopy, such as ulcers with a clean base, esophagitis, gastritis, duodenitis, or Mallory-Weiss tears, are felt to have less than a 5% chance of recurrent bleeding. In some instances, these patients can be treated successfully and discharged to home.1

Unfortunately, endoscopy is not always available—especially late at night and on weekends. It would be helpful to have a clinical prediction rule to identify patients at low risk for bleeding who could be safely discharged to get endoscopy within a few days.

In the study, 1,869 patients who had undergone upper endoscopy for upper gastrointestinal bleeding were entered into the Canadian national Registry for Upper GI Bleeding and Endoscopy (RUGBE). A modified Blatchford risk score (mBRS) was calculated to see whether it could predict the presence of high-risk stigmata of bleeding, rebleeding rates, and mortality.

This mBRS was also compared with another scoring system, the Rockall score. The mBRS uses clinical and laboratory data to risk-stratify nonvariceal bleeding; its variables are hemoglobin, systolic blood pressure, heart rate, melena, liver disease, and heart failure. High-risk endoscopic stigmata were defined as an adherent clot after irrigation; a bleeding, oozing, or spurting vessel; or a nonbleeding visible vessel. Rebleeding was defined as hematemesis, melena, or a bloody nasogastric aspirate in the presence of shock, or a decrease in hemoglobin of 2 g/dL or more.

Patients who had a modified Blatchford risk score of <1 were found to have a lower likelihood of high-risk stigmata on endoscopy and were at low risk for rebleeding (5%). Patients who had high-risk stigmata on endoscopy but an mBRS of <1 were also found to have low rebleeding rates. The mBRS seemed to be a better predictor than the Rockall score of high-risk stigmata and rebleeding rates.

Patients with nonvariceal upper gastrointestinal tract bleeding may be identified as low risk for re-bleeding if they are normotensive, not tachycardic, not anemic, and do not have active melena, liver disease, or heart failure. It is conceivable that if endoscopy were not available, these patients could be sent home on high-dose proton pump inhibitor and asked to return for outpatient upper endoscopy within a few days.
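
A bedside screen built on those criteria might look like the following minimal sketch; the numeric cutoffs shown are the original Blatchford thresholds and are assumptions here, since the modified score's exact cutoffs appear in Table 1 of the study rather than in this summary:

```python
def low_risk_ugib(sbp: int, heart_rate: int, hemoglobin: float, male: bool,
                  melena: bool, liver_disease: bool, heart_failure: bool) -> bool:
    """Screen for low-risk nonvariceal upper GI bleeding per the criteria above.

    Thresholds are the original Blatchford cutoffs (SBP >= 110 mm Hg,
    HR < 100/min, Hb >= 13.0 g/dL for men or 12.0 for women); the modified
    score used in the study may differ -- see Table 1.
    """
    not_anemic = hemoglobin >= (13.0 if male else 12.0)
    return (sbp >= 110 and heart_rate < 100 and not_anemic
            and not (melena or liver_disease or heart_failure))

# Example: a normotensive, non-tachycardic, non-anemic man with no melena,
# liver disease, or heart failure screens as low risk (~5% rebleeding above).
print(low_risk_ugib(sbp=124, heart_rate=82, hemoglobin=14.1, male=True,
                    melena=False, liver_disease=False, heart_failure=False))  # True
```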

The study certainly raises interesting questions. Whether it is acceptable practice to discharge a “low-risk” patient with an upper gastrointestinal hemorrhage on a high-dose proton pump inhibitor, with good social support and close outpatient follow-up but without diagnostic endoscopy, remains unclear.

The study is limited by its retrospective design; however, it examines a large cohort of patients. The authors acknowledge this, and this work could lead to a prospective randomized trial that would help answer the question. In the meantime, the mBRS may be a helpful tool for risk-stratifying patients admitted for nonvariceal upper gastrointestinal bleeding.

References

  1. Cipolletta L, Bianco M, Rotondano G, et al. Outpatient management for low-risk nonvariceal upper GI bleeding: a randomized controlled trial. Gastrointest Endosc. 2002;55(1):1-5.

Lumbar Puncture to Reduce Adverse Events

Straus SE, Thorpe KE, Holroyd-Leduc J. How do I perform a lumbar puncture and analyze the results to diagnose bacterial meningitis? JAMA. 2006 Oct 25;296(16):2012-2022.

Lumbar punctures (LPs) remain a common diagnostic test performed by physicians to rule out meningitis. This procedure may be associated with adverse events, with headache and backache the most commonly reported. This systematic review and meta-analysis sought to review the evidence regarding diagnostic lumbar puncture techniques that might reduce the risk of adverse events, and to examine the accuracy of cerebrospinal fluid (CSF) analysis in the diagnosis of bacterial meningitis.

Studies were identified through searches of the Cochrane Library (www3.interscience.wiley.com/cgi-bin/mrwhome/106568753/AboutCochrane.html), MEDLINE from 1966 to January 2006, and EMBASE from 1980 to January 2006, without language restrictions. Bibliographies of retrieved articles were also used as data sources.

Randomized controlled trials that tested interventions to facilitate a successful diagnostic procedure or to reduce adverse events in patients 18 or older undergoing lumbar puncture were identified and selected. As a secondary outcome, trials that assessed the accuracy of CSF biochemical analysis for the diagnosis of bacterial meningitis were also included. Trials that studied spinal anesthesia or myelography were excluded.

Study appraisals for quality (randomization, blinding, and outcome assessment) and data extraction were performed by two investigators independently. Fifteen randomized trials of interventions to reduce adverse events met criteria for inclusion, and four studies of the diagnostic test characteristics of CSF analysis met criteria and were included.

Meta-analysis with a random-effects model of five studies (587 patients in total) comparing atraumatic with standard needles yielded a nonsignificant reduction in headache with atraumatic needles (absolute risk reduction [ARR], 12.3%; 95% confidence interval [CI], –1.72% to 26.2%). A single study of reinsertion of the stylet before needle removal (600 patients) showed a decreased risk of headache (ARR, 11.3%; 95% CI, 6.50%-16.2%). Meta-analysis of four studies (717 patients) revealed a nonsignificant decrease in headache in patients mobilized after LP (ARR, 2.9%; 95% CI, –3.4% to 9.3%).

Data from the diagnostic test studies yielded the following likelihood ratios for diagnosing bacterial meningitis: a CSF–blood glucose ratio of 0.4 or less, likelihood ratio of 18 (95% CI, 12-27); a CSF white blood cell count of 500/µL or higher, likelihood ratio of 15 (95% CI, 10-22); and a CSF lactate level of more than 31.53 mg/dL, likelihood ratio of 21 (95% CI, 14-32).

These data support reinserting the stylet before needle removal to reduce the risk of headache after lumbar puncture and indicate that patients do not require bed rest after diagnostic lumbar puncture. Biochemical analyses, including the CSF–blood glucose ratio, CSF leukocyte count, and lactate level, are useful in diagnosing bacterial meningitis.
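
As a worked example of how these likelihood ratios shift the probability of bacterial meningitis, consider the following minimal sketch; the 10% pre-test probability is an illustrative assumption, not a figure from the review:

```python
# CSF findings and their reported positive likelihood ratios (from the review above).
CSF_LRS = {
    "CSF-blood glucose ratio <= 0.4": 18,
    "CSF WBC >= 500/uL": 15,
    "CSF lactate > 31.53 mg/dL": 21,
}

pre_test = 0.10  # assumed pre-test probability, for illustration only
pre_odds = pre_test / (1 - pre_test)
for finding, lr in CSF_LRS.items():
    post_odds = pre_odds * lr  # apply the likelihood ratio on the odds scale
    print(f"{finding}: post-test probability ~ {post_odds / (1 + post_odds):.0%}")
# Each finding raises a 10% pre-test probability to roughly 63-70%.
```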

This Rational Clinical Examination systematic review and meta-analysis provides a nice review of the available data on optimizing diagnostic lumbar puncture technique to reduce adverse events. It is somewhat remarkable that so little has changed in our knowledge about this long-standing diagnostic procedure. Post-lumbar puncture headaches remain a challenge that may affect patient satisfaction as well as the hospital (or observation unit) course, particularly for patients who do not have evidence of bacterial meningitis once the analysis is complete.

This review seems to provide some useful answers for physicians performing lumbar puncture, who should consider selecting a small-gauge needle and reinserting the stylet prior to removal. Future studies of maneuvers to reduce post-procedure adverse events should revisit the question of atraumatic needles, which may be technically more difficult to use. The review confirms and helps quantify the utility of CSF biochemical analysis in the diagnosis of bacterial meningitis.

Who’s Performing Procedures?

Wigton RS, Alguire P. The declining number and variety of procedures done by general internists: a resurvey of members of the American College of Physicians. Ann Intern Med. 2007 Mar 6;146(5):355-360. Comment in Ann Intern Med. 2007 Mar 6; 146(5):392-393.

Prior surveys of physicians documented that general internists performed a significant number and variety of procedures in their practice. Much has changed since those assessments, including physician training, practice settings, availability of subspecialists, and regulatory requirements, all of which have altered physicians' practice with regard to procedures. This study sought to reassess the volume and variety of procedures performed by general internists compared with the prior survey of 1986. The final sample included 990 completed surveys from general internists, drawn from 1,389 returned questionnaires, for a successful completion rate of 39.6%.

The median number of different procedures performed in practice decreased from 16 in 1986 to seven in 2004. Internists who practiced in smaller hospitals or smaller towns reported performing almost twice as many procedures as physicians in the largest hospitals and cities. Hours spent in the care of hospitalized patients were also associated with a greater number of different procedures—in particular mechanical ventilation, central venous catheter placement, and thoracentesis. For all but one of the 34 procedures common to both surveys, fewer general internists performed the procedure in 2004 than in 1986. Remarkably, for 22 of the 34 procedures, the proportion of respondents who performed the procedure fell by more than 50%.

In the 1986 survey, the majority of internists performed all but one of the six procedures required by the American Board of Internal Medicine (ABIM) for certification (abdominal paracentesis, arterial puncture for blood gases, central venous catheter placement, joint aspiration, lumbar puncture, and thoracentesis). Except for joint aspiration, in 2004 these required procedures were performed by 25% or fewer of the respondents.

The 2004 survey demonstrated a striking reduction in the number of different procedures performed by general internists, and a decrease in the proportion of internists who do most procedures. These reductions may stem from a variety of changes in physician practices, including the emergence of hospitalists, availability of subspecialty physicians and proceduralists, and changes in technology and regulatory environments.

Regardless of the forces behind these changes, internal medicine residents’ training in procedures should be re-examined.

Many of those in academic hospital medicine have noted a decline in procedures performed by general internists at large academic centers. This study affirms this trend overall and in particular for physicians in large urban settings or in the largest hospitals. The emergence of hospital medicine may have played a role in reducing the procedures performed by primary care (outpatient) physicians who now spend less time caring for medically ill hospitalized patients.

Residency programs now must consider how to incorporate procedure skills and training to align with the needs of internists. The rising interest in careers in hospital medicine (as opposed to outpatient primary care) necessitates a new approach and individualized plans for gaining procedural skills to match career goals and practice settings. The new ABIM policy acknowledges this greater variability in the procedures performed by internists in practice, and takes steps to more closely align procedure requirements and core manual skills with physician practice.

These changes and new flexibility in requirements provide another opportunity for academic hospital medicine programs to provide leadership, and help shape the training of inpatient physicians. TH

Issue
The Hospitalist - 2007(06)
Publications
Sections
