Comfort Seldom Comes from Cannula

Hospitalists frequently care for dying patients. At times, the questions and dynamics of the family and loved ones surrounding the patient add to the challenges and complexities of an already difficult situation. It takes a different skill set – one often not taught during medical education – to do this well.

Over the last few decades, research has shown that approximately 50% of adults who die in the United States do so in the hospital. This reality has led the Society of Hospital Medicine (SHM) to advocate that hospitalists be effective at caring for the dying patient and to include palliative care within the Core Competencies of the specialty.

Dr. Stephen J. Bekanich

The mechanism of dyspnea is not completely understood, but its impact on patients and families is undeniably negative and profound. Just the description is discomfiting: It is "not a single sensation and there are at least three distinct sensations including air hunger, work/effort, and chest tightness" (Br. J. Anaesth. 2011;106:463-74). More than 50% of patients with cancer, cardiopulmonary disease, and neuromuscular disorders experience dyspnea, and more than 70% of all people experience it during the last few weeks of life (Palliat. Med. 2006;20:219-30).

Every attempt should be made to prevent or relieve dyspnea. Evidence-based strategies include both pharmacologic and nonpharmacologic approaches. Examples of the former include opioids, bronchodilators, and benzodiazepines, while the latter range from guided imagery and noninvasive positive pressure ventilation to fans, among others.

The benefit of oxygen therapy is at best controversial (Curr. Opin. Support. Palliat. Care 2008;2:89-94). To date, there is a lack of evidence that providing oxygen to an actively dying patient is beneficial. In the best-designed trial to date of patients with life-limiting illnesses and refractory dyspnea, palliative oxygen delivered via nasal cannula failed to demonstrate symptomatic improvement (Lancet 2010;376:784-93).

["Routine Oxygen at End of Life is Typically Unhelpful" -- Hospitalist News, 6/4/12]

Despite this, oxygen is frequently still being delivered to dying patients. From personal experience at multiple centers, I have found this to be the case even when an order has been written to discontinue any form of oxygen delivery. Why might avoiding oxygen in this situation be important?

• Hospitals spend money on obtaining, maintaining, and providing oxygen.

• Patients and third-party payers incur charges related to oxygen.

• Medical equipment can carry risks ranging from discomfort to serving as a source of infection for those in contact with it.

• Delivery systems produce noise, and nurses and aides must interrupt family time to administer oxygen and follow up on the intervention.

• It may send mixed messages to families about treatment goals.

• The cannula or mask can be seen as a barrier by loved ones wishing to express physical affection.

• It has not been shown to provide any benefit.

Anecdotally, the only time I favor leaving oxygen in place during the dying process is when a patient who is still aware of his or her surroundings has been on long-term oxygen and understandably feels anxious or naked without it. Otherwise, Dr. Mary L. Campbell's suggestions for discontinuing oxygen, with an emphasis on good communication with the family about the situation, are on point.

Dr. Bekanich is with the department of medicine and is medical director of palliative care, Seton Healthcare, Austin, Tex.

Bacterial Meningitis, Non-Specific Troponin Elevation, Antibiotics for ECOPD, VTE Update, and More

Treatment of Bacterial Meningitis with Vancomycin

Ricard JD, Wolff M, Lacherade JC, et al. Levels of vancomycin in cerebrospinal fluid of adult patients receiving adjunctive corticosteroids to treat pneumococcal meningitis: a prospective multicenter observational study. Clin Infect Dis. 2007 Jan 15;44(2):250-255. Epub 2006 Dec 15.

In 2002, van de Beek and de Gans published a study demonstrating that adjuvant dexamethasone decreased mortality and reduced neurological disability when given to patients with bacterial meningitis. Their results changed our treatment paradigm for this disease but left us with several questions. At what point in the treatment course does giving corticosteroids become ineffective? Do their results apply to all bacterial pathogens? Can the results be applied to the use of vancomycin in treating penicillin-resistant strains of Streptococcus pneumoniae? This final question arises from vancomycin's limited ability to penetrate the cerebrospinal fluid (CSF); previous data support the concern that bactericidal titers within the CSF may be inadequate. Because meningeal inflammation strongly influences how much vancomycin enters the CSF, administering steroids, which reduce that inflammation, may decrease penetration further. This study brings some clarity to the issue.

In this open, observational, multicenter trial from France, 14 adults were admitted to intensive care units with suspected pneumococcal meningitis. They were treated with intravenous cefotaxime, vancomycin, and dexamethasone. Vancomycin was given as a loading dose of 15 mg per kg of body weight, followed by a continuous infusion of 60 mg per kg of body weight per day. The diagnosis of pneumococcal meningitis was based on CSF pleocytosis plus one or more of the following: a positive culture from blood or CSF, a Gram stain showing Gram-positive diplococci, or pneumococcal antigens in the CSF demonstrated by latex agglutination. Patients had a second lumbar puncture on day two or three to measure vancomycin levels—among other markers of disease activity—in the CSF. Serum levels of vancomycin were drawn simultaneously.
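
For orientation, the arithmetic behind that regimen is simple; the short sketch below is our illustration for a hypothetical 70-kg patient, not dosing guidance and not part of the study protocol.

    # Illustrative arithmetic only, for a hypothetical patient weight; this is
    # not dosing guidance and the function name is ours, not the study's.
    def vancomycin_regimen(weight_kg: float) -> dict:
        loading_dose_mg = 15 * weight_kg         # one-time 15 mg/kg IV loading dose
        daily_infusion_mg = 60 * weight_kg       # 60 mg/kg infused over 24 hours
        hourly_rate_mg = daily_infusion_mg / 24  # continuous-infusion rate
        return {
            "loading_dose_mg": loading_dose_mg,
            "daily_infusion_mg": daily_infusion_mg,
            "hourly_rate_mg_per_h": round(hourly_rate_mg, 1),
        }

    print(vancomycin_regimen(70))
    # {'loading_dose_mg': 1050, 'daily_infusion_mg': 4200, 'hourly_rate_mg_per_h': 175.0}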

Thirteen of the 14 patients had pneumococcal meningitis; one patient was found to have meningitis from Neisseria meningitidis. Seven patients had pneumococcal strains resistant to penicillin. Ten of the 14 patients required mechanical ventilation. The second lumbar puncture demonstrated marked improvements in leukocyte counts, protein levels, and glucose levels. All subsequent cultures from the CSF were negative. Three patients died, two had neurological sequelae, and the remainder were discharged from the hospital without complications. Vancomycin concentrations in the serum ranged from 14.2 to 39.0 mg/L, with a mean of 25.2 mg/L; concentrations in the CSF ranged from 3.1 to 22.3 mg/L, with a mean of 7.9 mg/L. There was a significant correlation between vancomycin levels in the serum and those in the CSF (r = 0.68; P = 0.01), with CSF penetration increasing linearly with serum levels. The concentration of vancomycin in the CSF was between four and 10 times the minimum inhibitory concentration (MIC). No evidence of drug toxicity was observed.

The results demonstrate that a therapeutic concentration of vancomycin can be achieved in the CSF. The continuous infusion of vancomycin with a loading dose, which has not been standard practice, has previously been shown to achieve targeted serum levels more quickly than intermittent dosing. Serum vancomycin levels were likely higher in this study than when troughs of 15-20 mg/L are the goal. These data strongly suggest, however, that this treatment regimen can achieve adequate vancomycin levels in the CSF while pneumococcal meningitis is treated with adjunctive steroids.


Nonspecific Elevations in Troponins

Alcalai R, Planer D, Culhaoglu A, et al. Acute coronary syndrome vs nonspecific troponin elevation: clinical predictors and survival analysis. Arch Intern Med. 2007 Feb 12;167(3):276-281.

In 2000, the American College of Cardiology (ACC) and the European Society of Cardiology (ESC) jointly produced a recommendation for a new definition of myocardial infarction. This proposal based the diagnosis primarily on the elevation of biomarkers specific to cardiac tissue, troponin T and troponin I. Since that time, as use of these blood tests has escalated, it has become apparent that elevations in these biomarkers do not always reflect thrombotic coronary artery occlusion. Instead, they are positive in a variety of clinical settings, including sepsis, renal failure, pulmonary embolism, and atrial fibrillation. This investigation attempts to characterize the differences between patients presenting with acute coronary syndrome (ACS) and those with nonthrombotic troponin elevation (NTTE), to report outcomes for each, and to note the positive predictive value (PPV) of elevated troponins across clinical settings.

Two hospitals in Israel collected data on all adult patients who had an elevation in troponin T (defined as at least 0.1 ng/mL) at any time during their hospital stay. Six hundred and fifteen patients were evaluated by age, sex, cardiovascular risk factors, history of ischemic heart disease, left ventricular function (LVF) by echocardiogram, serum creatine phosphokinase (CPK) and creatinine levels, and the hospital service to which each had been admitted. The highest troponin T value was used in the analysis, along with the creatinine level taken on the same day. Two physicians, one a specialist in internal medicine and the other a specialist in cardiology, independently determined the principal diagnosis, applying the ACC/ESC criteria for thrombotic ACS and using other diagnostic studies to establish alternative diagnoses among conditions known to cause NTTE.

Patients were followed for causes of mortality for up to two-and-a-half years. The kappa (κ) statistic was calculated for physician agreement regarding the principal diagnosis. An unconditional multiple logistic regression analysis was used to estimate independent odds ratios and their 95% confidence intervals (CIs) for predictor variables for ACS. The PPV of troponin T for the diagnosis of ACS was calculated. In-hospital mortality rates were measured, and long-term risk of death was assessed using Cox proportional hazards models.

The diagnosis of ACS was made in only 53% (326) of the patients. Forty-one percent (254) had NTTE, and the diagnosis was not determined in 6% (35). The diagnoses comprising NTTE included—in order from most to least common—cardiac non-ischemic conditions, sepsis, pulmonary diseases, and neurologic diseases. Using the multivariate analysis, the diagnostic predictors for ACS were history of hypertension or ischemic heart disease, age between 40 and 70 years, higher troponin levels (greater than 1.0 ng/mL), and normal renal function. Extreme age and admission to a surgical team were negative predictors for ACS. Gender, presence of diabetes, and LVF did not appear to make a difference.

The PPV of an elevated troponin T for ACS among all patients was only 56% (95% CI, 52%-60%). It was lower (27%) in those older than 70 years with abnormal renal function and higher (90%) in those with a troponin T greater than 1.0 ng/mL and normal renal function. In-hospital mortality for all patients was 8%; it was 3% for those with ACS, while for those with NTTE it was 21%, almost eight times higher than in the ACS group (P<0.001). Patients were followed for mortality for a median of 22 months. Long-term mortality was also significantly lower (P<0.001) for those with a diagnosis of ACS than for those with NTTE.
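
As a rough cross-check of the overall figure (our arithmetic, on the assumption that the 35 patients with an undetermined diagnosis are excluded from the denominator, which the summary does not state explicitly):

    # Back-of-the-envelope check of the overall PPV from the counts above.
    # Assumes the 35 undetermined cases are excluded from the denominator
    # (our inference; the summary does not state this explicitly).
    acs_cases = 326    # troponin-positive patients adjudicated as ACS
    ntte_cases = 254   # troponin-positive patients adjudicated as NTTE
    ppv = acs_cases / (acs_cases + ntte_cases)
    print(f"PPV of troponin T >= 0.1 ng/mL for ACS: {ppv:.0%}")  # about 56%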

 

 

Since the ACC/ESC guidelines were adopted, the frequency of ACS diagnoses has increased substantially. It is critical to distinguish between ACS and NTTE when using these very sensitive biomarkers, because the underlying cause of NTTE usually requires a drastically different therapy than ACS; in addition, misdiagnosing a myocardial infarction may lead to potentially harmful diagnostic studies and therapies in the form of coronary angiography, antithrombotics, and antiplatelet agents. Hospitalists should look for ACS when troponin T levels exceed 1.0 ng/mL in the face of normal renal function. Based on their data, the authors present an algorithm for working up ACS and NTTE that takes into consideration the clinical presentation, age, renal function, electrocardiographic changes, and troponin T levels. Though this is a retrospective trial, it provides guidance for a very common clinical scenario. We should be concerned about a patient's prognosis when we encounter an elevated troponin in the setting of NTTE.


Guiding Antibiotic Therapy for COPD Exacerbations

Stolz D, Christ-Crain M, Bingisser R, et al. Antibiotic treatment of exacerbations of COPD: a randomized, controlled trial comparing procalcitonin-guidance with standard therapy. Chest. 2007 Jan;131(1):9-19.

Chronic obstructive pulmonary disease (COPD) is a leading cause of morbidity and mortality in the United States. Exacerbations of COPD (ECOPD) that require hospitalization are both common and costly. Though recent literature suggests that antibiotic therapy during exacerbations reduces morbidity, mortality, and treatment failure, controversy persists concerning whether these results are applicable to all patients with this condition. Procalcitonin is a protein not typically measurable in plasma. Levels rise with bacterial infections but appear to be unaffected by inflammation from other etiologies such as autoimmune processes or viral infections. Measuring procalcitonin levels has already been shown to safely decrease the use of antibiotics in lower respiratory infections.

This single-center trial from Switzerland evaluated consecutive patients admitted from the emergency department with ECOPD. For the 226 enrolled patients, symptoms were quantified, sputum was collected, spirometry was measured, and procalcitonin levels were evaluated. Attending physicians chose antibiotics, using current guidelines, for patients randomized to the standard-therapy group. In the group randomized to procalcitonin guidance, antibiotics were given according to serum levels: no antibiotics were administered for levels below 0.1 micrograms (mcg)/L, antibiotics were encouraged for levels greater than 0.25 mcg/L, and for levels between 0.1 and 0.25 mcg/L, antibiotics were encouraged or discouraged based on the clinical condition of the patient. The primary outcomes were total antibiotic use during hospitalization and up to six months afterward. Secondary endpoints included clinical and laboratory data and six-month follow-up for exacerbation rate and time to the next ECOPD.
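
For clarity, the decision rule reduces to a simple threshold function. The sketch below is our paraphrase of the protocol as summarized here, with the 0.1 and 0.25 mcg/L cutoffs taken from the trial and the function name and return strings invented for illustration.

    # Sketch of the procalcitonin-guided rule described above. The thresholds
    # (0.1 and 0.25 mcg/L) come from the trial summary; the function name and
    # return strings are illustrative only, not the trial's protocol code.
    def antibiotic_advice(procalcitonin_mcg_per_l: float) -> str:
        if procalcitonin_mcg_per_l < 0.1:
            return "no antibiotics"
        if procalcitonin_mcg_per_l > 0.25:
            return "antibiotics encouraged"
        # Intermediate band: defer to the patient's clinical condition
        return "use clinical judgment"

    for level in (0.05, 0.2, 0.6):
        print(level, "->", antibiotic_advice(level))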

Procalcitonin guidance significantly decreased antibiotic administration compared with the standard-therapy arm (40% versus 72%, respectively; P<0.0001) and reduced antibiotic exposure (RR, 0.56; 95% CI, 0.43 to 0.73; P<0.0001). The absolute risk reduction was 31.5% (95% CI, 18.7% to 44.3%; P<0.0001). No difference in the mean time to the next exacerbation was observed between the two groups, and clinical and laboratory measures at baseline and through the six-month follow-up showed no significant differences.

Using procalcitonin levels to guide antibiotic therapy for ECOPD is an exciting and promising practice. Not only could costs be cut by omitting antibiotics in select patients, but withholding unnecessary antibiotics would also relieve some of the pressure driving emerging bacterial resistance. Because procalcitonin levels have a laboratory turnaround time of approximately one hour, the test becomes even more attractive: treatment decisions can be made while patients are still in the emergency department. On a cautionary note, there is more than one method of testing for procalcitonin levels, and this trial was done at only one center. Before the test is widely adopted, these results should be validated in a multicenter trial, and one assay should be used consistently for measuring procalcitonin levels.

 

 


Community-Associated MRSA and MSSA: Clinical and Epidemiologic Characteristics

Miller LG, Perdreau-Remington F, Bayer AS, et al. Clinical and epidemiologic characteristics cannot distinguish community-associated methicillin-resistant Staphylococcus aureus infection from methicillin-susceptible S. aureus infection: a prospective investigation. Clin Infect Dis. 2007 Feb 15;44(4):471-482. Epub 2007 Jan 19.

Methicillin-susceptible Staphylococcus aureus (MSSA) was, until very recently, the predominant strain seen in community-associated (CA) S. aureus infections. Now methicillin-resistant S. aureus (MRSA) is a concern around the world. Deciding whether or not to treat empirically for MRSA in patients who do not have risk factors for healthcare-associated (HCA) infection is difficult.

Investigators at the University of California-Los Angeles Medical Center (Torrance) prospectively evaluated consecutive patients admitted to the county hospital with S. aureus infections. Daily cultures of wounds, urine, blood, and sputum were taken. An extensive questionnaire completed by 280 patients provided information on exposures and on demographic and clinical characteristics. CA infections were defined as those without a positive culture from a surgical site, occurring in patients who, in the past year, had not lived in an extended-care facility, had no indwelling devices, had not visited an infusion center, and had not received dialysis.
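
Restated as a simple predicate, the operational definition looks like the sketch below; the parameter names are hypothetical and the logic only mirrors the criteria as summarized here.

    # Restatement of the community-associated (CA) definition summarized above.
    # Parameter names are hypothetical; the logic only mirrors the stated criteria.
    def is_community_associated(surgical_site_culture_positive: bool,
                                extended_care_facility_past_year: bool,
                                indwelling_device_past_year: bool,
                                infusion_center_visit_past_year: bool,
                                dialysis_past_year: bool) -> bool:
        healthcare_exposure = (surgical_site_culture_positive
                               or extended_care_facility_past_year
                               or indwelling_device_past_year
                               or infusion_center_visit_past_year
                               or dialysis_past_year)
        return not healthcare_exposure

    # A patient with none of the listed exposures is classified as CA.
    print(is_community_associated(False, False, False, False, False))  # True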

Of those evaluated, 202 patients (72%) had CA S. aureus infections and 78 (28%) had HCA S. aureus infections. Of those with CA infections, 108 (60%) had MRSA and 72 (40%) had MSSA. Sensitivity, specificity, predictive values, and likelihood ratios for the risk factors evaluated were unable to distinguish CA-MRSA from CA-MSSA. For example, the sensitivities of most MRSA risk factors were less than 30%, and all of the positive likelihood ratios were lower than three.

This study has very important implications. Given the data presented, there is currently no way to consistently distinguish between CA-MRSA and CA-MSSA prior to culture results, so it would be very reasonable in this population to treat for MRSA empirically. One limitation is that the information comes from a single center in an area with a very diverse patient population. Also, because the study was done at a county hospital, patients who would be cared for as outpatients at other centers may have been admitted for lack of outpatient resources, which suggests these data may generalize to patients who could otherwise be treated as outpatients. Because the morbidity and mortality from delayed treatment of MRSA infections are significant, however, it appears sensible to treat CA S. aureus infections empirically for MRSA in areas where CA-MRSA is common, regardless of patients' risk factors.

Venous Thromboembolism Update

King CS, Holley AB, Jackson JL, et al. Twice vs three times daily heparin dosing for thromboembolism prophylaxis in the general medical population: a metaanalysis. Chest. 2007 Feb;131(2):507-516.

Nijkeuter M, Sohne M, Tick LW, et al. The natural course of hemodynamically stable pulmonary embolism: clinical outcome and risk factors in a large prospective cohort study. Chest. 2007 Feb;131(2):517-523.

Segal JB, Streiff MB, Hoffman LV, et al. Management of venous thromboembolism: a systematic review for a practice guideline. Ann Intern Med. 2007 Feb 6;146(3):211-222.

Snow V, Qaseem A, Barry P, et al. Management of venous thromboembolism: a clinical practice guideline from the American College of Physicians and the American Academy of Family Physicians. Ann Intern Med. 2007 Feb 6;146(3):204-210. Epub 2007 Jan 29.

The prevention and treatment of venous thromboembolism (VTE) is a skill set required of all hospitalists, given the prevalence of this condition in hospitalized patients and its significant associated morbidity and mortality. Several articles that help guide our decisions in managing VTE have been published recently.

 

 

We have no randomized controlled trials (RCTs) comparing twice-daily (bid) with three-times-daily (tid) dosing of unfractionated heparin (UFH) for the prevention of VTE in medically ill patient populations, and it is unlikely that such a study, involving an adequate number of patients, will ever be conducted. Though low-molecular-weight heparins (LMWH) are used more frequently for VTE prevention, many hospitalists still use UFH to prevent VTE in patients who are morbidly obese or who have profound renal insufficiency. King and colleagues performed a meta-analysis to determine whether tid dosing is superior to bid dosing for VTE prevention. Twelve studies, including almost 8,000 patients and spanning 1966 to 2004, were reviewed. All patients were hospitalized for medical rather than surgical conditions.

Tid heparin significantly decreased the incidence of the combined outcome of pulmonary embolism (PE) and proximal deep vein thrombosis (DVT). There was a trend toward significance in decreasing the incidence of PE alone. There was a significant increase in the number of major bleeds with tid dosing compared with bid dosing. There are many limitations to this study: It is retrospective, the population is extremely heterogeneous, and varying methods were employed to diagnose VTE across the many studies from which data were pooled. This is likely the best data we will have for UFH in VTE prevention, however. In summary, tid dosing is preferred for high-risk patients, but bid dosing should be considered for those at risk for bleeding complications.

Data are limited for the clinical course of PE. Outpatient treatment of PE with LMWH is not uncommon in select patients, but choosing who is safe to treat in this arena is uncertain. Nijkeuter and colleagues assessed the incidence of recurrent VTE, hemorrhagic complications from therapy, mortality, risk factors for recurrence, and the course of these events from the time of diagnosis through a three-month follow-up period.

Six hundred and seventy-three patients completed the three-month follow-up. Twenty of them (3%) had recurrent VTE; 14 of these recurrences were PEs. Recurrence predominantly occurred in the first three weeks of therapy. Eleven of the 14 recurrent PEs (79%) were fatal, and most of these occurred within the first week of diagnosis. Major bleeding occurred in 1.5% of the patients. Immobilization for more than three days was a significant risk factor for recurrence. Inpatient status, a diagnosis of COPD, and malignancy were independent risk factors for bleeding complications. Fifty-five patients (8.2%) died over the three-month period; 20% of these deaths were from recurrent PE, and 4% were from hemorrhage.
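
The reported percentages follow directly from these counts; the quick check below (our arithmetic, with denominators inferred from the text) makes the denominators explicit, in particular that the 20% figure refers to the 55 deaths rather than to all patients.

    # Cross-check of the reported percentages against the raw counts (our
    # arithmetic; the denominators are inferred from the text).
    patients = 673
    recurrent_vte = 20
    recurrent_pe = 14
    fatal_recurrent_pe = 11
    deaths = 55

    print(f"Recurrent VTE: {recurrent_vte / patients:.0%}")                          # ~3%
    print(f"Fatal share of recurrent PEs: {fatal_recurrent_pe / recurrent_pe:.0%}")  # ~79%
    print(f"Three-month mortality: {deaths / patients:.1%}")                         # 8.2%
    print(f"Deaths due to recurrent PE: {fatal_recurrent_pe / deaths:.0%}")          # 20%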

Multivariate analysis revealed four characteristics as independent risk factors for mortality in patients with PE. These include age, inpatient status, immobilization for more than three days, and malignancy. It appears that the majority of recurrent and fatal PE occurs during the first week of therapy. Physicians should not discharge patients to home with LMWH for PE without considering these risk factors for hemorrhage, recurrence, and mortality.

Annals of Internal Medicine has published a systematic review of management issues in VTE to provide the framework for the American College of Physicians practice guidelines. These guidelines pool data from more than 100 randomized controlled trials and comment on six areas in VTE management. The following are quotes from this document.

Recommendation #1: Use low molecular-weight heparin (LMWH) rather than unfractionated heparin whenever possible for the initial inpatient treatment of deep vein thrombosis (DVT). Either unfractionated heparin or LMWH is appropriate for the initial treatment of pulmonary embolism.

Recommendation #2: Outpatient treatment of DVT, and possibly pulmonary embolism, with LMWH is safe and cost-effective for carefully selected patients and should be considered if the required support services are in place.

 

 

Recommendation #3: Compression stockings should be used routinely to prevent post-thrombotic syndrome, beginning within one month of diagnosis of proximal DVT and continuing for a minimum of one year after diagnosis.

Recommendation #4: There is insufficient evidence to make specific recommendations for types of anticoagulation management of VTE in pregnant women.

Recommendation #5: Anticoagulation should be maintained for three to six months for VTE secondary to transient risk factors and for more than 12 months for recurrent VTE. While the appropriate duration of anticoagulation for idiopathic or recurrent VTE is not definitively known, there is evidence of substantial benefit for extended-duration therapy.

Recommendation #6: LMWH is safe and efficacious for the long-term treatment of VTE in selected patients (and may be preferable for patients with cancer).

All of these seem reasonable and appropriate, with a possible exception of the second recommendation. Using LMWH to treat patients diagnosed with PE in the outpatient setting is not well supported by data. The vast majority of trials of LMWH for VTE have enrolled patients with DVT; the number of patients with PE in these trials has been very small. The Food and Drug Administration has not approved LMWH for outpatient treatment of PE; LMWH is FDA approved in the outpatient setting only for the treatment of DVT. We know that the hemodynamic changes that can accompany PE may not occur for at least 24 hours. In addition, we now have data from the Nijkeuter study that point to dangers of treating PE outside the hospital setting. At this time, we should treat PE with LMWH in the outpatient setting only in patients whose risk factors, clinical characteristics, and outpatient resources have been carefully scrutinized. TH

Issue
The Hospitalist - 2007(04)
Publications
Topics
Sections

Treatment of Bacterial Meningitis with Vancomycin

Ricard JD, Wolff M, Lacherade JC, et al. Levels of vancomycin in cerebrospinal fluid of adult patients receiving adjunctive corticosteroids to treat pneumococcal meningitis: a prospective multicenter observational study. Clin Infect Dis. 2007 Jan 15;44(2):250-255. Epub 2006 Dec 15.

In 2002, van de Beek and de Gans published a study demonstrating that adjuvant dexamethasone decreased mortality and improved neurological disability when given to patients with bacterial meningitis. Their results changed our treatment paradigm for this disease but left us with several questions. At what point in the treatment course does giving corticosteroids become ineffective? Do their results apply to all bacterial pathogens? Can the results be applied to the use of vancomycin in treating penicillin-resistant strains of Streptococcus pneumoniae? This final question arises from the disturbing ability of vancomycin to penetrate the cerebrospinal fluid (CSF). Previous data support this concern; thus, bactericidal titers may be inadequate within the CSF. Because meningeal inflammation exerts a strong influence over whether or not vancomycin enters the CSF, administering steroids may decrease its ability to do so. This study brings some clarity to the issue.

In this observational open multicenter trial from France, 14 adults were admitted to intensive care units with suspected pneumococcal meningitis. They were treated with intravenous cefotaxime, vancomycin, and dexamethasone. The vancomycin was given as a loading dose of 15 mg per kg of body weight followed by administration of a continuous infusion of 60 mg per kg of body weight per day. The diagnosis of pneumococcal meningitis was made using a CSF pleocytosis as well as one or more of the following: a positive culture from either the blood or CSF, a Gram stain showing Gram-positive diplococci, or pneumococcal antigens in the CSF as demonstrated by latex agglutination. Patients had a second lumbar puncture on either day two or three to measure vancomycin levels—among other markers of disease activity—in the CSF. Serum levels of vancomycin were drawn simultaneously.

Thirteen of the 14 patients had pneumococcal meningitis; one patient was found to have meningitis from Neisseria meningitidis. Seven patients had pneumococcal strains resistant to penicillin. Ten of the 14 patients required mechanical intubation. The second lumbar puncture demonstrated marked improvements in leukocyte counts, protein levels, and glucose levels. All subsequent cultures from the CSF were negative. Three patients died, two had neurological sequelae, and the remainder were discharged from the hospital without complications. Vancomycin concentrations in the serum ranged from 14.2 to 39.0 mg/L, with a mean of 25.2 mg/L; concentrations in the CSF ranged from 3.1 to 22.3 mg/L, with a mean of 7.9 mg/L. There was a significant correlation between vancomycin levels in the serum and those in the CSF (r = 0.68; P = 0.01). The concentration of vancomycin in the CSF was between four and 10 times the mean inhibitory concentrations (MICs). A linear correlation exists between penetration of vancomycin into CSF and serum levels. No evidence of drug toxicities was observed.

The results demonstrate that a therapeutic concentration of vancomycin can be achieved in the CSF. The continuous infusion of vancomycin with a loading dose, which has not been standard practice, has previously been shown to achieve targeted serum levels more quickly than intermittent dosing. Levels of serum vancomycin were likely higher in this study than when troughs of 15-20 mg/L are the goal. This data strongly suggests, however, that this same treatment regimen can obtain adequate vancomycin levels in the CSF while treating pneumococcal meningitis with adjunctive steroids.

Though this is a retrospective trial, it provides guidance for a very common clinical scenario.
 

 

Nonspecific elevations in troponins

Alcalai R, Planer D, Culhaoqlu A, et al. Acute coronary syndrome vs nonspecific troponin elevation: clinical predictors and survival analysis. Arch Intern Med. 2007 Feb 12;167(3):276-281.

In 2000, the American College of Cardiology (ACC) and the European Society of Cardiology (ESC) jointly produced a recommendation for a new definition of myocardial infarction. This proposal based the diagnosis primarily on the elevation of biomarkers specific to cardiac tissue, troponin T and troponin I. Since that time, as use of these blood tests has escalated, it is apparent that elevations in these biomarkers do not always translate into thrombotic coronary artery occlusion. Instead, we have seen that they are positive in a variety of clinical settings. These include sepsis, renal failure, pulmonary embolism, and atrial fibrillation. This investigation attempts to characterize the differences among patients presenting with acute coronary syndrome (ACS) and nonthrombotic troponin elevation (NTTE), to report on outcomes for each, and to note the positive predictive values (PPV) for elevated troponins across clinical settings.

Two hospitals in Israel collected data on all adult patients who experienced an elevation in troponin T (defined as at least 0.1 ng/mL) at any time during their hospital stay. Six hundred and fifteen patients were evaluated by age, sex, cardiovascular risk factors, history of ischemic heart disease, left ventricular function (LVF) by echocardiogram, serum creatine phosphokinase (CPK), and creatinine levels, as well as by which hospital service each had been admitted under. The highest troponin T value was used in the analysis, along with the creatinine level taken on the same day. Two physicians, one a specialist in internal medicine and the other a specialist in cardiology, independently determined the principal diagnosis in accordance with the ACC/ESC guidelines for thrombotic ACS and used other diagnostic studies for alternative diagnosis for conditions known to cause NTTE.

Patients were followed up for causes of mortality for up to two-and-a-half years. Kappa (k) was calculated for physician agreement regarding the principal diagnosis. To assess independent odds ratios and their 95% confidence intervals (CIs) of predictor variables for ACS, an unconditional multiple logistic regression analysis was used. The PPV for troponin T in the diagnosis of ACS was calculated. In-house mortality rates were measured. Long-term risk of death was assessed using Cox proportional hazard models.

The diagnosis of ACS was made in only 53% (326) of the patients. Forty-one percent (254) had NTTE, and the diagnosis was not determined in 6% (35). The diagnoses comprising NTTE included—in order from most to least common—cardiac non-ischemic conditions, sepsis, pulmonary diseases, and neurologic diseases. Using the multivariate analysis, the diagnostic predictors for ACS were history of hypertension or ischemic heart disease, age between 40 and 70 years, higher troponin levels (greater than 1.0 ng/mL), and normal renal function. Extreme age and admission to a surgical team were negative predictors for ACS. Gender, presence of diabetes, and LVF did not appear to make a difference.

The PPV of an elevated troponin T for ACS among all patients was only 56% (95% CI, 52%-60%). It became lower (27%) in those older than 70 with abnormal renal function and higher (90%) in those with a troponin T greater than 1.0 ng/mL and normal renal function. In-house mortality for all patients was 8%; for those with ACS, it was 3%, while for those with NTTE, it was—at 21%—almost eight times higher than the ACS group (P<0.001). Patients were followed up for mortality for a median of 22 months. The long-term mortality was also significantly better (P<0.001) for those with a diagnosis of ACS than for those with NTTE.

 

 

Since the incorporation of the ACC/ESC guidelines, the diagnosis of ACS has substantially increased. It is critical to distinguish between ACS and NTTE when using these very sensitive biomarkers, because the underlying cause of NTTE usually requires a drastically different therapy than that of ACS; in addition, misdiagnosing a myocardial infarction may lead to potentially harmful diagnostic studies and therapies in the form of coronary angiography, antithrombotics, and antiplatelet agents. Hospitalists should look for ACS when troponin T levels exceed 1.0 ng/mL in the face of normal renal function. Based on their data, the authors present an algorithm for working up ACS and NTTE that takes into consideration the clinical presentation, age, renal function, electrocardiographic changes, and troponin T levels. Though this is a retrospective trial, it provides guidance for a very common clinical scenario. We should be concerned about a patient’s prognosis when we encounter an elevated troponin in a setting of NTTE.

Though recent literature suggests that antibiotic therapy during exacerbations reduces morbidity and mortality and reduces the lack of response to treatment, controversy remains as to whether or not this is applicable to all patients with this condition.

Guiding Antibiotic Therapy for COPD Exacerbations

Stolz D, Christ-Crain M, Bingisser R, et al. Antibiotic treatment of exacerbations of COPD: a randomized, controlled trial comparing procalcitonin-guidance with standard therapy. Chest. 2007 Jan;131(1):9-19.

Chronic obstructive pulmonary disease (COPD) is a leading cause of morbidity and mortality in the United States. Exacerbations of COPD (ECOPD) that require hospitalization are both common and costly. Though recent literature suggests that antibiotic therapy during exacerbations reduces morbidity and mortality and lowers the lack of response to treatment, controversy persists concerning whether or not these results are applicable to all patients with this condition. Procalcitonin is a protein not typically measurable in plasma. Levels of this protein rise with bacterial infections, but appear to be unaffected by inflammation from other etiologies such as autoimmune processes or viral infections. Measuring procalcitonin levels has already been shown to safely decrease the use of antibiotics in lower respiratory infections.

This single-center trial from Switzerland evaluated consecutive patients admitted from the emergency department with ECOPD. For 226 enrolled patients, symptoms were quantified, sputum was collected, spirometry was measured, and procalcitonin levels were evaluated. Attending physicians chose antibiotics, using current guidelines, for patients randomized to the standard therapy group. In the group randomized to procalcitonin guidance, antibiotics were given according to serum levels. No antibiotics were administered for levels below 0.1 micrograms (mcg)/L; antibiotics were encouraged for levels greater than 0.25 mcg/L. For levels between 0.1 and .25 mcg/L, antibiotics were encouraged or discouraged based on the clinical condition of the patient. The primary outcomes evaluated were total antibiotics used during hospitalization and up to six months following hospitalization. Secondary endpoints included clinical and laboratory data and six-month follow-up for exacerbation rate and time to the next ECOPD.

Procalcitonin guidance significantly decreased antibiotic administration compared with the standard-therapy arm (40% versus 72% respectively; P<0.0001) and antibiotic exposure (RR, 0.56; 95% CI, 0.43 to 0.73; P<0.0001). The absolute risk reduction was 31.5% (95% CI, 18.7 to 44.3%; p<0.0001). No difference in the mean time to the next exacerbation was noticed between the two groups. Clinical and laboratory measures at baseline and through the six-month follow-up demonstrated no significant differences.

Using procalcitonin levels to guide antibiotic therapy for ECOPD is a practice that is exciting and full of promise. Not only could costs be cut by omitting antibiotics for this treatment regimen in select patients, but some pressure will be relieved in terms of decreasing emerging bacterial resistance. Because procalcitonin levels have a lab turn-around time of approximately one hour, this test becomes even more attractive: decisions for treatment can be made while patients are still in the emergency department. On a cautionary note, there is more than one method of testing for procalcitonin levels, and this trial was done at only one center. Before widespread use of this test is applied, these results should be validated in a multicenter trial. In addition, one test should be used consistently for measuring procalcitonin levels.

 

 

Given the data presented, there is currently no way to consistently distinguish between CA-MRSA and CA-MSSA prior to culture results.

Community-Associated MRSA and MSSA: Clinical and Epidemiologic Characteristics

Miller LG, Perdreau-Remington F, Bayer AS, et al. Clinical and epidemiologic characteristics cannot distinguish community-associated methicillin-resistant Staphylococcus aureus infection from methicillin-susceptible S. aureus infection: a prospective investigation. Clin Infect Dis. 2007 Feb 15;44(4):471-482. Epub 2007 Jan 19.

Methicillin-susceptible Staphylococcus aureus (MSSA) was, until very recently, the predominant strain seen in community-associated (CA) S. aureus infections. Now methicillin-resistant S aureus (MRSA) is a concern around the world. Deciding whether or not to treat empirically for MRSA in those patients who do not have risk factors for healthcare-associated (HCA) infections is difficult.

Investigators at the University of California-Los Angeles Medical Center (Torrance) prospectively evaluated consecutive patients admitted to the county hospital with S. aureus infections. Daily cultures of wounds, urine, blood, and sputum were taken. An extensive questionnaire was completed by 280 patients who provided information on exposures, demographic characteristics, and clinical characteristics. CA infections were defined as those not having a positive culture from a surgical site in a patient who, in the past year, had not lived in an extended living facility, had any indwelling devices, visited an infusion center, or received dialysis.

Of those evaluated, 202 patients (78%) had CA S. aureus and 78 (28%) had HCA S. aureus. Of those with the CA infections, 108 (60%) had MRSA and 72 (40%) had MSSA. Sensitivity, specificity, predictive values, and likelihood ratios for the risk factors evaluated were unable to distinguish CA-MRSA from CA-MSSA. For example, the sensitivities for most MRSA risk factors were less than 30%, and all the positive likelihood ratios were lower than three.

This study has very important consequences. Given the data presented, there is currently no way to consistently distinguish between CA-MRSA and CA-MSSA prior to culture results. It would be very reasonable in this population to treat for MRSA empirically. One limitation is that the information comes from a single center in an area that has a very diverse patient population. Also, because this was done at a county hospital, the resources for treating patients who would be cared for in the outpatient arena at other centers might not otherwise be available, thus generalizing this data to potential outpatients. Because the morbidity and mortality from a delay in treatment of MRSA infections is significant, however, it appears sensible to treat CA S. aureus empirically in areas where CA-MRSA is common, regardless of patients’ risk factors.

Venous Thromboembolism Update

King CS, Holley AB, Jackson JL, et al. Twice vs three times daily heparin dosing for thromboembolism prophylaxis in the general medical population: a metaanalysis. Chest. 2007 Feb;131(2):507-516.

Nijkeuter M, Sohne M, Tick LW, et al. The natural course of hemodynamically stable pulmonary embolism: clinical outcome and risk factors in a large prospective cohort study. Chest. 2007 Feb;131(2):517-523.

Segal JB, Streiff MB, Hoffman LV, et al. Management of venous thromboembolism: a systematic review for a practice guideline. Ann Intern Med. 2007 Feb 6;146(3):211-222.

Snow V, Qaseem A, Barry P, et al. Management of venous thromboembolism: a clinical practice guideline from the American College of Physicians and the American Academy of Family Physicians. Ann Intern Med. 2007 Feb 6;146 (3):204-210. Epub 2007 Jan 29.

The prevention and treatment of venous thromboembolism (VTE) is a skill set required for all hospitalists given the prevalence of this condition in hospitalized patients as well as the significant morbidity and mortality associated with the condition. Several articles that help to guide our decisions in managing VTE have been published recently.

 

 

We have no randomized controlled trials (RCT) comparing twice-daily (bid) with three-times-daily (tid) dosing of unfractionated heparin (UFH) for the prevention of VTE in medically ill patient populations. It is unlikely that such a study, involving an adequate number of patients, will ever be conducted. Though low molecular weight heparins (LMWH) are used more frequently for VTE prevention, many hospitalists still use UFH to prevent VTE in patients who are morbidly obese or who have profound renal insufficiency. King and colleagues have done a meta-analysis to find out whether or not tid dosing is superior to bid dosing for VTE prevention. Twelve studies, including almost 8,000 patients from 1966 to 2004, were reviewed. All patients were hospitalized for medical rather than surgical conditions.

Tid heparin significantly decreased the incidence of the combined outcome of pulmonary embolism (PE) and proximal deep vein thrombosis (DVT). There was a trend toward significance in decreasing the incidence of PE. There was a significant increase in the number of major bleeds with tid dosing compared with bid dosing. There are many limitations to this study: It is retrospective, the population is extremely heterogeneous, and varying methods have been employed to diagnosis VTE across the many studies from which data were pooled. This is likely the best data we will have for UFH in VTE prevention, however. In summary, tid dosing is preferred for high-risk patients, but bid dosing should be considered for those at risk for bleeding complications.

Data are limited for the clinical course of PE. Outpatient treatment of PE with LMWH is not uncommon in select patients, but choosing who is safe to treat in this arena is uncertain. Nijkeuter and colleagues assessed the incidence of recurrent VTE, hemorrhagic complications from therapy, mortality, risk factors for recurrence, and the course of these events from the time of diagnosis through a three-month follow-up period.

Six hundred and seventy-three patients completed the three-month follow-up. Twenty of them (3%) had recurrent VTE; 14 of these had recurrent PE. Recurrence predominantly transpired in the first three weeks of therapy. Of those with recurrent PE, 11 (79%) were fatal, and most of these occurred within the first week of diagnosis. Major bleeding occurred in 1.5% of the patients. Immobilization for more than three days was a significant risk factor for recurrence. Inpatient status, a diagnosis of COPD, and malignancy were independent risk factors for bleeding complications. Fifty-five patients (8.2%) died over the three-month period. Twenty percent died of fatal recurrent PE, while 4% suffered fatal hemorrhage.

Multivariate analysis revealed four characteristics as independent risk factors for mortality in patients with PE. These include age, inpatient status, immobilization for more than three days, and malignancy. It appears that the majority of recurrent and fatal PE occurs during the first week of therapy. Physicians should not discharge patients to home with LMWH for PE without considering these risk factors for hemorrhage, recurrence, and mortality.

Annals of Internal Medicine has published a systematic review of management issues in VTE to provide the framework for the American College of Physicians practice guidelines. These guidelines pool data from more than 100 randomized controlled trials and comment on six areas in VTE management. The following are quotes from this document.

Recommendation #1: Use low molecular-weight heparin (LMWH) rather than unfractionated heparin whenever possible for the initial inpatient treatment of deep vein thrombosis (DVT). Either unfractionated heparin or LMWH is appropriate for the initial treatment of pulmonary embolism.

Recommendation #2: Outpatient treatment of DVT, and possibly pulmonary embolism, with LMWH is safe and cost-effective for carefully selected patients and should be considered if the required support services are in place.

 

 

Recommendation #3: Compression stockings should be used routinely to prevent post-thrombotic syndrome, beginning within one month of diagnosis of proximal DVT and continuing for a minimum of one year after diagnosis.

Recommendation #4: There is insufficient evidence to make specific recommendations for types of anticoagulation management of VTE in pregnant women.

Recommendation #5: Anticoagulation should be maintained for three to six months for VTE secondary to transient risk factors and for more than 12 months for recurrent VTE. While the appropriate duration of anticoagulation for idiopathic or recurrent VTE is not definitively known, there is evidence of substantial benefit for extended-duration therapy.

Recommendation #6: LMWH is safe and efficacious for the long-term treatment of VTE in selected patients (and may be preferable for patients with cancer).

All of these seem reasonable and appropriate with a possible exception in the second recommendation. Using LMWH to treat patients diagnosed with PE in the outpatient setting is not well supported by data. The vast majority of trials involving the treatment of VTE with LMWH have been conducted on those with DVT; the number of patients in the trials with PE has been very small. The Food and Drug Administration has not approved LMWH for outpatient treatment of PE; LMWH is FDA approved in the outpatient setting only for the treatment of DVT. We know that the hemodynamic changes that can accompany PE may not occur for at least 24 hours. In addition, we now have data from the Nijkeuter study that point to dangers that may result from treating PE outside the hospital setting. At this time, we should treat PE with LMWH in the outpatient setting only with patients whose risk factors, clinical characteristics, and outpatient resources have been carefully scrutinized. TH

Treatment of Bacterial Meningitis with Vancomycin

Ricard JD, Wolff M, Lacherade JC, et al. Levels of vancomycin in cerebrospinal fluid of adult patients receiving adjunctive corticosteroids to treat pneumococcal meningitis: a prospective multicenter observational study. Clin Infect Dis. 2007 Jan 15;44(2):250-255. Epub 2006 Dec 15.

In 2002, van de Beek and de Gans published a study demonstrating that adjuvant dexamethasone decreased mortality and reduced neurological disability when given to patients with bacterial meningitis. Their results changed our treatment paradigm for this disease but left us with several questions. At what point in the treatment course does giving corticosteroids become ineffective? Do their results apply to all bacterial pathogens? Can the results be applied to the use of vancomycin in treating penicillin-resistant strains of Streptococcus pneumoniae? This final question arises from the poor ability of vancomycin to penetrate the cerebrospinal fluid (CSF). Previous data support this concern: bactericidal titers within the CSF may be inadequate. Because meningeal inflammation exerts a strong influence over whether vancomycin enters the CSF, administering steroids, which reduce that inflammation, may decrease its penetration further. This study brings some clarity to the issue.

In this observational open multicenter trial from France, 14 adults were admitted to intensive care units with suspected pneumococcal meningitis. They were treated with intravenous cefotaxime, vancomycin, and dexamethasone. The vancomycin was given as a loading dose of 15 mg per kg of body weight followed by administration of a continuous infusion of 60 mg per kg of body weight per day. The diagnosis of pneumococcal meningitis was made using a CSF pleocytosis as well as one or more of the following: a positive culture from either the blood or CSF, a Gram stain showing Gram-positive diplococci, or pneumococcal antigens in the CSF as demonstrated by latex agglutination. Patients had a second lumbar puncture on either day two or three to measure vancomycin levels—among other markers of disease activity—in the CSF. Serum levels of vancomycin were drawn simultaneously.
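To make the infusion arithmetic concrete, here is a minimal sketch of the dosing described above; the 70-kg body weight is an illustrative assumption, not a value from the study.

```python
# Dose arithmetic for the regimen described above: 15 mg/kg loading dose,
# then 60 mg/kg/day by continuous infusion. The 70-kg weight is an
# illustrative assumption, not a study value.
weight_kg = 70
loading_dose_mg = 15 * weight_kg           # 1,050 mg bolus
daily_infusion_mg = 60 * weight_kg         # 4,200 mg over 24 hours
hourly_rate_mg = daily_infusion_mg / 24    # 175 mg/hour

print(f"Loading dose: {loading_dose_mg} mg")
print(f"Continuous infusion: {daily_infusion_mg} mg/day ({hourly_rate_mg:.0f} mg/hour)")
```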

Thirteen of the 14 patients had pneumococcal meningitis; one patient was found to have meningitis from Neisseria meningitidis. Seven patients had pneumococcal strains resistant to penicillin. Ten of the 14 patients required intubation and mechanical ventilation. The second lumbar puncture demonstrated marked improvements in leukocyte counts, protein levels, and glucose levels. All subsequent cultures from the CSF were negative. Three patients died, two had neurological sequelae, and the remainder were discharged from the hospital without complications. Vancomycin concentrations in the serum ranged from 14.2 to 39.0 mg/L, with a mean of 25.2 mg/L; concentrations in the CSF ranged from 3.1 to 22.3 mg/L, with a mean of 7.9 mg/L. There was a significant correlation between vancomycin levels in the serum and those in the CSF (r = 0.68; P = 0.01). The concentration of vancomycin in the CSF was between four and 10 times the minimum inhibitory concentration (MIC). A linear correlation exists between the penetration of vancomycin into the CSF and serum levels. No evidence of drug toxicity was observed.

The results demonstrate that a therapeutic concentration of vancomycin can be achieved in the CSF. The continuous infusion of vancomycin with a loading dose, which has not been standard practice, has previously been shown to achieve targeted serum levels more quickly than intermittent dosing. Serum vancomycin levels were likely higher in this study than when troughs of 15-20 mg/L are the goal. These data strongly suggest, however, that this treatment regimen can achieve adequate vancomycin levels in the CSF while pneumococcal meningitis is treated with adjunctive steroids.

Though this was a small observational study, it provides guidance for a very common clinical scenario.
 

 

Nonspecific elevations in troponins

Alcalai R, Planer D, Culhaoqlu A, et al. Acute coronary syndrome vs nonspecific troponin elevation: clinical predictors and survival analysis. Arch Intern Med. 2007 Feb 12;167(3):276-281.

In 2000, the American College of Cardiology (ACC) and the European Society of Cardiology (ESC) jointly produced a recommendation for a new definition of myocardial infarction. This proposal based the diagnosis primarily on the elevation of biomarkers specific to cardiac tissue, troponin T and troponin I. Since that time, as use of these blood tests has escalated, it has become apparent that elevations in these biomarkers do not always reflect thrombotic coronary artery occlusion. Instead, they are positive in a variety of clinical settings, including sepsis, renal failure, pulmonary embolism, and atrial fibrillation. This investigation attempts to characterize the differences between patients presenting with acute coronary syndrome (ACS) and those with nonthrombotic troponin elevation (NTTE), to report on outcomes for each, and to note the positive predictive value (PPV) of elevated troponins across clinical settings.

Two hospitals in Israel collected data on all adult patients who had an elevated troponin T (defined as at least 0.1 ng/mL) at any time during their hospital stay. Six hundred and fifteen patients were evaluated by age, sex, cardiovascular risk factors, history of ischemic heart disease, left ventricular function (LVF) by echocardiogram, serum creatine phosphokinase (CPK) and creatinine levels, and the hospital service to which each had been admitted. The highest troponin T value was used in the analysis, along with the creatinine level drawn on the same day. Two physicians, one a specialist in internal medicine and the other a specialist in cardiology, independently determined the principal diagnosis in accordance with the ACC/ESC guidelines for thrombotic ACS and used other diagnostic studies to establish alternative diagnoses among conditions known to cause NTTE.

Patients were followed up for causes of mortality for up to two-and-a-half years. Kappa (k) was calculated for physician agreement regarding the principal diagnosis. To assess independent odds ratios and their 95% confidence intervals (CIs) of predictor variables for ACS, an unconditional multiple logistic regression analysis was used. The PPV for troponin T in the diagnosis of ACS was calculated. In-house mortality rates were measured. Long-term risk of death was assessed using Cox proportional hazard models.

The diagnosis of ACS was made in only 53% (326) of the patients. Forty-one percent (254) had NTTE, and the diagnosis was not determined in 6% (35). The diagnoses comprising NTTE included—in order from most to least common—cardiac non-ischemic conditions, sepsis, pulmonary diseases, and neurologic diseases. Using the multivariate analysis, the diagnostic predictors for ACS were history of hypertension or ischemic heart disease, age between 40 and 70 years, higher troponin levels (greater than 1.0 ng/mL), and normal renal function. Extreme age and admission to a surgical team were negative predictors for ACS. Gender, presence of diabetes, and LVF did not appear to make a difference.

The PPV of an elevated troponin T for ACS among all patients was only 56% (95% CI, 52%-60%). It became lower (27%) in those older than 70 with abnormal renal function and higher (90%) in those with a troponin T greater than 1.0 ng/mL and normal renal function. In-house mortality for all patients was 8%; for those with ACS, it was 3%, while for those with NTTE, it was—at 21%—almost eight times higher than the ACS group (P<0.001). Patients were followed up for mortality for a median of 22 months. The long-term mortality was also significantly better (P<0.001) for those with a diagnosis of ACS than for those with NTTE.
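As a back-of-the-envelope check on the overall figure, the 56% PPV follows from the counts above if the 35 patients without a determined diagnosis are excluded from the denominator (a reader's calculation, not one reported by the authors):

```python
# Reader's check of the reported PPV, assuming the 35 undetermined cases
# are excluded from the denominator.
acs = 326    # elevated troponin T with a final diagnosis of ACS
ntte = 254   # elevated troponin T with nonthrombotic elevation
ppv = acs / (acs + ntte)
print(f"PPV of an elevated troponin T for ACS: {ppv:.0%}")  # ~56%
```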

 

 

Since the incorporation of the ACC/ESC guidelines, the diagnosis of ACS has substantially increased. It is critical to distinguish between ACS and NTTE when using these very sensitive biomarkers, because the underlying cause of NTTE usually requires a drastically different therapy than that of ACS; in addition, misdiagnosing a myocardial infarction may lead to potentially harmful diagnostic studies and therapies in the form of coronary angiography, antithrombotics, and antiplatelet agents. Hospitalists should look for ACS when troponin T levels exceed 1.0 ng/mL in the face of normal renal function. Based on their data, the authors present an algorithm for working up ACS and NTTE that takes into consideration the clinical presentation, age, renal function, electrocardiographic changes, and troponin T levels. Though this is a retrospective trial, it provides guidance for a very common clinical scenario. We should be concerned about a patient’s prognosis when we encounter an elevated troponin in a setting of NTTE.


Guiding Antibiotic Therapy for COPD Exacerbations

Stolz D, Christ-Crain M, Bingisser R, et al. Antibiotic treatment of exacerbations of COPD: a randomized, controlled trial comparing procalcitonin-guidance with standard therapy. Chest. 2007 Jan;131(1):9-19.

Chronic obstructive pulmonary disease (COPD) is a leading cause of morbidity and mortality in the United States. Exacerbations of COPD (ECOPD) that require hospitalization are both common and costly. Though recent literature suggests that antibiotic therapy during exacerbations reduces morbidity, mortality, and treatment failure, controversy persists as to whether these results apply to all patients with this condition. Procalcitonin is a protein not typically measurable in plasma. Levels of this protein rise with bacterial infections but appear to be unaffected by inflammation from other etiologies, such as autoimmune processes or viral infections. Measuring procalcitonin levels has already been shown to safely decrease the use of antibiotics in lower respiratory infections.

This single-center trial from Switzerland evaluated consecutive patients admitted from the emergency department with ECOPD. For the 226 enrolled patients, symptoms were quantified, sputum was collected, spirometry was measured, and procalcitonin levels were evaluated. Attending physicians chose antibiotics, using current guidelines, for patients randomized to the standard therapy group. In the group randomized to procalcitonin guidance, antibiotics were given according to serum levels: no antibiotics were administered for levels below 0.1 micrograms (mcg)/L, antibiotics were encouraged for levels greater than 0.25 mcg/L, and for levels between 0.1 and 0.25 mcg/L antibiotics were encouraged or discouraged based on the clinical condition of the patient. The primary outcome was total antibiotic use during hospitalization and for up to six months following hospitalization. Secondary endpoints included clinical and laboratory data and six-month follow-up for exacerbation rate and time to the next ECOPD.
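The threshold logic can be summarized roughly as follows; this is a simplified sketch for illustration only, and the function name and the handling of the intermediate range are assumptions rather than the trial's exact protocol.

```python
def procalcitonin_guidance(level_mcg_per_l: float, clinical_concern: bool = False) -> str:
    """Sketch of the procalcitonin thresholds described above.

    In the trial, the intermediate range (0.1-0.25 mcg/L) was left to
    clinical judgment; 'clinical_concern' is an illustrative stand-in.
    """
    if level_mcg_per_l < 0.1:
        return "no antibiotics"
    if level_mcg_per_l > 0.25:
        return "antibiotics encouraged"
    return "antibiotics encouraged" if clinical_concern else "antibiotics discouraged"

# Example: a level of 0.3 mcg/L favors antibiotic treatment.
print(procalcitonin_guidance(0.3))
```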

Procalcitonin guidance significantly decreased antibiotic administration compared with the standard-therapy arm (40% versus 72%, respectively; P<0.0001) and reduced antibiotic exposure (RR, 0.56; 95% CI, 0.43 to 0.73; P<0.0001). The absolute risk reduction was 31.5% (95% CI, 18.7% to 44.3%; P<0.0001). No difference in the mean time to the next exacerbation was observed between the two groups. Clinical and laboratory measures at baseline and through the six-month follow-up showed no significant differences.
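One derived figure, not reported by the authors, helps put the effect size in perspective: the 31.5% absolute risk reduction corresponds to roughly one antibiotic course avoided for every three patients managed with procalcitonin guidance.

```python
# Number needed to treat derived from the reported absolute risk reduction
# (a reader's calculation, not a figure given by the authors).
absolute_risk_reduction = 0.315
nnt = 1 / absolute_risk_reduction
print(f"Patients managed with procalcitonin guidance per antibiotic course avoided: {nnt:.1f}")  # ~3.2
```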

Using procalcitonin levels to guide antibiotic therapy for ECOPD is a promising practice. Not only could costs be cut by withholding antibiotics in select patients, but reduced antibiotic use may also ease the pressure driving emerging bacterial resistance. Because procalcitonin has a laboratory turnaround time of approximately one hour, the test becomes even more attractive: treatment decisions can be made while patients are still in the emergency department. On a cautionary note, there is more than one method of testing for procalcitonin, and this trial was done at only one center. Before the test enters widespread use, these results should be validated in a multicenter trial, and a single assay should be used consistently for measuring procalcitonin levels.

 

 


Community-Associated MRSA and MSSA: Clinical and Epidemiologic Characteristics

Miller LG, Perdreau-Remington F, Bayer AS, et al. Clinical and epidemiologic characteristics cannot distinguish community-associated methicillin-resistant Staphylococcus aureus infection from methicillin-susceptible S. aureus infection: a prospective investigation. Clin Infect Dis. 2007 Feb 15;44(4):471-482. Epub 2007 Jan 19.

Methicillin-susceptible Staphylococcus aureus (MSSA) was, until very recently, the predominant strain seen in community-associated (CA) S. aureus infections. Now methicillin-resistant S. aureus (MRSA) is a concern around the world. Deciding whether to treat empirically for MRSA in patients who do not have risk factors for healthcare-associated (HCA) infections is difficult.

Investigators at the University of California-Los Angeles Medical Center (Torrance) prospectively evaluated consecutive patients admitted to the county hospital with S. aureus infections. Daily cultures of wounds, urine, blood, and sputum were taken. An extensive questionnaire completed by 280 patients provided information on exposures as well as demographic and clinical characteristics. CA infections were defined as those without a positive culture from a surgical site in a patient who, in the past year, had not lived in an extended-care facility, had no indwelling devices, had not visited an infusion center, and had not received dialysis.

Of those evaluated, 202 patients (72%) had CA S. aureus and 78 (28%) had HCA S. aureus. Of those with CA infections, 108 (60%) had MRSA and 72 (40%) had MSSA. Sensitivity, specificity, predictive values, and likelihood ratios for the risk factors evaluated were unable to distinguish CA-MRSA from CA-MSSA. For example, the sensitivities for most MRSA risk factors were less than 30%, and all of the positive likelihood ratios were lower than three.
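To see why numbers like these cannot rule CA-MRSA in or out, consider a rough post-test probability calculation. The 60% pretest probability is the CA-MRSA proportion reported above; the 85% specificity is purely an assumption for illustration.

```python
# Illustration of why a risk factor with sensitivity below 30% cannot rule
# out CA-MRSA. The 0.60 pretest probability is the CA-MRSA proportion
# reported above; the 0.85 specificity is an assumed value for illustration.
def post_test_probability(pretest: float, likelihood_ratio: float) -> float:
    odds = pretest / (1 - pretest)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

sensitivity, specificity = 0.30, 0.85
negative_lr = (1 - sensitivity) / specificity   # ~0.82
probability = post_test_probability(0.60, negative_lr)
print(f"Probability of MRSA despite an absent risk factor: {probability:.0%}")  # ~55%
```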

This study has very important consequences. Given the data presented, there is currently no way to consistently distinguish between CA-MRSA and CA-MSSA prior to culture results. It would be very reasonable in this population to treat for MRSA empirically. One limitation is that the information comes from a single center in an area with a very diverse patient population. Also, because the study was done at a county hospital with limited outpatient resources, it likely included patients who would be managed as outpatients at other centers, which may extend these findings to potential outpatients. Because the morbidity and mortality from delayed treatment of MRSA infections are significant, however, it appears sensible to treat CA S. aureus empirically in areas where CA-MRSA is common, regardless of patients’ risk factors.

Venous Thromboembolism Update

King CS, Holley AB, Jackson JL, et al. Twice vs three times daily heparin dosing for thromboembolism prophylaxis in the general medical population: a metaanalysis. Chest. 2007 Feb;131(2):507-516.

Nijkeuter M, Sohne M, Tick LW, et al. The natural course of hemodynamically stable pulmonary embolism: clinical outcome and risk factors in a large prospective cohort study. Chest. 2007 Feb;131(2):517-523.

Segal JB, Streiff MB, Hoffman LV, et al. Management of venous thromboembolism: a systematic review for a practice guideline. Ann Intern Med. 2007 Feb 6;146(3):211-222.

Snow V, Qaseem A, Barry P, et al. Management of venous thromboembolism: a clinical practice guideline from the American College of Physicians and the American Academy of Family Physicians. Ann Intern Med. 2007 Feb 6;146 (3):204-210. Epub 2007 Jan 29.

The prevention and treatment of venous thromboembolism (VTE) is a skill set required of all hospitalists, given the prevalence of this condition in hospitalized patients as well as the significant morbidity and mortality associated with it. Several articles that help guide our decisions in managing VTE have been published recently.

 

 

We have no randomized controlled trials (RCTs) comparing twice-daily (bid) with three-times-daily (tid) dosing of unfractionated heparin (UFH) for the prevention of VTE in medically ill patient populations. It is unlikely that such a study, involving an adequate number of patients, will ever be conducted. Though low molecular weight heparins (LMWH) are used more frequently for VTE prevention, many hospitalists still use UFH to prevent VTE in patients who are morbidly obese or who have profound renal insufficiency. King and colleagues performed a meta-analysis to determine whether tid dosing is superior to bid dosing for VTE prevention. Twelve studies published from 1966 to 2004, including almost 8,000 patients, were reviewed. All patients were hospitalized for medical rather than surgical conditions.

Tid heparin significantly decreased the incidence of the combined outcome of pulmonary embolism (PE) and proximal deep vein thrombosis (DVT). There was a trend toward significance in decreasing the incidence of PE. There was a significant increase in the number of major bleeds with tid dosing compared with bid dosing. This analysis has many limitations: it is retrospective, the population is extremely heterogeneous, and varying methods were used to diagnose VTE across the many studies from which data were pooled. These are likely the best data we will have for UFH in VTE prevention, however. In summary, tid dosing is preferred for high-risk patients, but bid dosing should be considered for those at risk for bleeding complications.

Data are limited on the clinical course of PE. Outpatient treatment of PE with LMWH is not uncommon in select patients, but which patients can safely be treated in this setting remains uncertain. Nijkeuter and colleagues assessed the incidence of recurrent VTE, hemorrhagic complications from therapy, mortality, risk factors for recurrence, and the course of these events from the time of diagnosis through a three-month follow-up period.

Six hundred and seventy-three patients completed the three-month follow-up. Twenty of them (3%) had recurrent VTE; 14 of these had recurrent PE. Recurrence occurred predominantly during the first three weeks of therapy. Of those with recurrent PE, 11 (79%) were fatal, and most of these occurred within the first week of diagnosis. Major bleeding occurred in 1.5% of the patients. Immobilization for more than three days was a significant risk factor for recurrence. Inpatient status, a diagnosis of COPD, and malignancy were independent risk factors for bleeding complications. Fifty-five patients (8.2%) died over the three-month period. Twenty percent of these deaths were from recurrent PE, while 4% were from fatal hemorrhage.
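The denominators behind these percentages are easy to misread; the following quick consistency check (reader's arithmetic, assuming the 20% and 4% refer to the 55 deaths) ties the figures together.

```python
# Reader's arithmetic for the figures above, assuming the 20% and 4% refer
# to the 55 deaths rather than to the whole cohort.
completed_followup = 673
deaths = 55
recurrent_pe = 14
fatal_recurrent_pe = 11
print(f"Overall three-month mortality: {deaths / completed_followup:.1%}")              # 8.2%
print(f"Fatal PE as a share of recurrent PE: {fatal_recurrent_pe / recurrent_pe:.0%}")  # 79%
print(f"Fatal PE as a share of all deaths: {fatal_recurrent_pe / deaths:.0%}")          # 20%
```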

Multivariate analysis revealed four characteristics as independent risk factors for mortality in patients with PE: age, inpatient status, immobilization for more than three days, and malignancy. The majority of recurrent and fatal PE occurs during the first week of therapy. Physicians should not discharge patients home on LMWH for PE without considering these risk factors for hemorrhage, recurrence, and mortality.

Annals of Internal Medicine has published a systematic review of management issues in VTE to provide the framework for the American College of Physicians practice guidelines. These guidelines pool data from more than 100 randomized controlled trials and comment on six areas in VTE management. The following are quotes from this document.

Recommendation #1: Use low molecular-weight heparin (LMWH) rather than unfractionated heparin whenever possible for the initial inpatient treatment of deep vein thrombosis (DVT). Either unfractionated heparin or LMWH is appropriate for the initial treatment of pulmonary embolism.

Recommendation #2: Outpatient treatment of DVT, and possibly pulmonary embolism, with LMWH is safe and cost-effective for carefully selected patients and should be considered if the required support services are in place.

 

 

Recommendation #3: Compression stockings should be used routinely to prevent post-thrombotic syndrome, beginning within one month of diagnosis of proximal DVT and continuing for a minimum of one year after diagnosis.

Recommendation #4: There is insufficient evidence to make specific recommendations for types of anticoagulation management of VTE in pregnant women.

Recommendation #5: Anticoagulation should be maintained for three to six months for VTE secondary to transient risk factors and for more than 12 months for recurrent VTE. While the appropriate duration of anticoagulation for idiopathic or recurrent VTE is not definitively known, there is evidence of substantial benefit for extended-duration therapy.

Recommendation #6: LMWH is safe and efficacious for the long-term treatment of VTE in selected patients (and may be preferable for patients with cancer).

All of these seem reasonable and appropriate, with a possible exception in the second recommendation. Using LMWH to treat patients diagnosed with PE in the outpatient setting is not well supported by data. The vast majority of trials of LMWH for VTE have enrolled patients with DVT; the number of patients with PE in these trials has been very small. The Food and Drug Administration has not approved LMWH for outpatient treatment of PE; in the outpatient setting, LMWH is FDA approved only for the treatment of DVT. We also know that the hemodynamic deterioration that can accompany PE may not develop until 24 hours or more after presentation. In addition, we now have data from the Nijkeuter study pointing to the dangers of treating PE outside the hospital. At this time, we should treat PE with LMWH in the outpatient setting only in patients whose risk factors, clinical characteristics, and outpatient resources have been carefully scrutinized. TH

Issue
The Hospitalist - 2007(04)
Display Headline
Bacterial Meningitis, Non-Specific Troponin Elevation, Antibiotics for ECOPD, VTE Update, and More

The Hepatoadrenal Syndrome, HSS to Treat CHF, Treatment for Atrial Fib, and More

Article Type
Changed
Fri, 09/14/2018 - 12:41
Display Headline
The Hepatoadrenal Syndrome, HSS to Treat CHF, Treatment for Atrial Fib, and More

WORSENING OUTCOMES AND INCREASED RECURRENCE OF CLOSTRIDIUM DIFFICILE AFTER INITIAL TREATMENT WITH METRONIDAZOLE?

Pepin J, Alary ME, Valiquette L, et al. Increasing risk of relapse after treatment of Clostridium difficile colitis in Quebec, Canada. Clin Infect Dis. 2005;40:1591-1597; and Musher DM, Aslam S, Logan N, et al. Relatively poor outcome after treatment of Clostridium difficile colitis with metronidazole. Clin Infect Dis. 2005;40:1586-1590.

Information on the treatment of colitis caused by Clostridium difficile began to appear in the late 1970s and early 1980s. Since that time there has been a paucity of novel therapies. It is well established that both metronidazole and vancomycin can effectively treat this entity. Traditionally, metronidazole has been the first-line agent for C. difficile-associated diarrhea (CDAD), for three reasons:

  1. Randomized controlled trials have shown vancomycin and metronidazole to be equally efficacious;
  2. The cost of oral vancomycin is substantially more than oral metronidazole; and
  3. Many experts have cautioned that using vancomycin may contribute to the growing number of bacteria that are resistant to vancomycin.

Indeed, recommendations from the Centers for Disease Control and Prevention’s Healthcare Infection Control Practices Advisory Committee as well as the American Society of Health-System Pharmacists have supported using metronidazole as the initial agent of choice for CDAD (oral vancomycin is actually the only agent approved by the Food and Drug Administration for CDAD). Most earlier data report initial response rates of 88% or better and relapse rates between 5% and 12% when metronidazole is used.

Two new studies have been published that raise a red flag about our current standard of practice. Musher, et al., designed a prospective, observational study in which they followed more than 200 patients with CDAD who were initially treated with metronidazole. The patient pool came from a Veterans Affairs Medical Center. All patients had a positive fecal ELISA for C. difficile toxin and were treated for seven or more days with at least 1.5 grams per day of metronidazole.

Records were reviewed for the six weeks prior to diagnosis, and patients were then followed for three months after cessation of therapy. Patients were assigned to four outcome groups:

  1. Complete responders who did not have recurrence over four months;
  2. Refractory-to-treatment where signs and symptoms of CDAD were present for 10 or more days;
  3. Recurrence after initial clinical response with signs and symptoms of CDAD and a positive toxin; and
  4. Clinical recurrence where there was an initial response but a recurrence of signs and symptoms of CDAD without a positive toxin (either the toxin was not present when tested or the test was not done).

Fifty percent were completely cured. Twenty-two percent were refractory to initial therapy. Twenty-eight percent had a recurrence of CDAD within the 90-day period. The mortality was 27%. This was higher among people who had failed to respond to initial therapy (31% versus 21%; p<.05).

Pepin, et al., retrospectively looked at more than 2,000 CDAD cases from one hospital between 1991 and 2004. To be included the patients needed either a positive toxin, endoscopic evidence of pseudomembranous colitis, or histopathologic evidence of pseudomembranous colitis on a biopsy specimen. Patients received at least 1 gram per day of metronidazole for 10 to 14 days. They were considered to have a recurrence if they had diarrhea within two months of the completion of therapy and either a positive toxin at that time or if the attending physician ordered a second course of antibiotics for C. difficile.

 

 

Between 1991 and 2002, the frequency with which therapy was either changed to vancomycin or vancomycin was added to metronidazole remained unchanged (9.6%). During 2003-2004 this more than doubled (25.7%). The increase in the proportion of patients experiencing recurrence within two months, comparing 1991-2002 with 2003-2004, was staggering (20.8% versus 47.2%; p<.001). The authors noted that as patients aged the probabilities of recurrence increased.

They also found that a subgroup of patients with a white blood cell count over 20,000 cells/mm3 and an elevated creatinine had a high short-term mortality rate.

Why might we be seeing these results? Several theories exist. Patients are both older and sicker than they have been in the past. Our antibiotic choices are changing, with increased use of broader-spectrum agents. Immune responses vary, and fewer antitoxin antibodies are found in patients with symptoms and/or recurrence. Metronidazole levels in stool decrease as inflammation and diarrhea resolve; this is not the case with vancomycin, whose fecal concentrations remain high throughout treatment.


A survey of infectious disease physicians found that they believe antibiotic failure is on the rise in this setting. Before we take this as true, consider the following:

  1. We have no universally accepted clinical definition of what constitutes diarrhea for CDAD;
  2. Previous studies did not look for recurrence as far out from initial treatment as these two did; and
  3. These studies do not have the design strength to support changing our treatment paradigm just yet.

The accompanying editorial acknowledged the finding by Pepin, et al., that patients with a high white blood cell count and worsening renal function are the ones about whom we should be particularly concerned. The editorialists write that if a patient’s white blood cell count is increasing while on therapy, they change the antibiotic to vancomycin; if the patient has either ileus or fulminant CDAD, they use multiple antibiotics and consult the surgeons. At this time other agents, such as tinidazole, are being studied for CDAD. We now need a larger randomized prospective trial to better explore treatment outcomes in CDAD.

HYPERTONIC SALINE SOLUTION TO TREAT REFRACTORY CONGESTIVE HEART FAILURE

Paterna S, Di Pasquale P, Parrinello G, et al. Changes in brain natriuretic peptide levels and bioelectrical impedance measurements after treatment with high-dose furosemide and hypertonic saline solution versus high-dose furosemide alone in refractory congestive heart failure. J Am Coll Cardiol. 2005;45:1997–2003.

CHF continues to increase in prevalence and incidence despite our advances with ACE inhibitors, beta-blockers, and aldosterone antagonists. Refractory CHF accounts for a considerable portion of admissions to hospitalists’ services. Loop diuretics are part of the standard arsenal we employ in these patients. Unfortunately, many patients fail to respond to initial diuretic doses. In this situation we might begin a continuous infusion of diuretic or add diuretics from other classes in hope of synergism. Another typical approach in treating advanced CHF is restriction of sodium intake.

Paterna, et al., previously published four studies using small-volume hypertonic saline solution and high-dose furosemide in refractory CHF, in which they demonstrated the safety and tolerability of these measures. They now present the first randomized, double-blinded trial of this intervention. Ninety-four patients with NYHA functional class IV CHF who had been on standard medical therapy and high-dose diuretics for at least two weeks were included. They had to have a left ventricular ejection fraction of <35%, serum creatinine <2 mg/dL, reduced urinary volume (<500 mL/24 h), and low natriuresis (<60 mEq/24 h). They could not be taking NSAIDs.

 

 


Patients received either intravenous furosemide (500 to 1000 mg) plus hypertonic saline solution bid or the IV furosemide bid alone. Treatment lasted four to six days. Body weights were followed. Brain natriuretic peptide plasma levels were measured on hospital days one and six, as well as 30 days after discharge.

The group receiving hypertonic saline solution had brow-raising results. They had a significant increase in daily diuresis and natriuresis (p<0.05), a difference in brain natriuretic peptide levels on days six and 30, a reduction in their length of stay, and a decrease in their hospital readmission rate.

This is a provocative study. At this time the mechanism responsible for the results is unclear. Paterna, et al., offer multiple explanations. One possibility is the osmotic action of hypertonic saline solution: it may hasten the mobilization of extravascular fluid into the intravascular space, where this volume is then quickly excreted. Hypertonic saline solution may also increase renal blood flow and perfusion, altering sodium handling and natriuresis, while allowing the concentration of furosemide in the loop of Henle to reach a more desirable level.

Should these results hold true in other investigations, and should the inclusion criteria loosen (measuring patients’ urine volume and sodium concentration for the 24 hours prior to admission may not be easy or practical), we might have a very inexpensive new method for treating refractory CHF.

PERIOPERATIVE BETA-BLOCKERS: HELPFUL OR HARMFUL FOR MAJOR NONCARDIAC SURGERY?

Lindenauer P, Pekow P, Wang K, et al. Perioperative beta-blocker therapy and mortality after major noncardiac surgery. N Engl J Med. 2005;353:349–361.

Among the most common reasons hospitalists are consulted is the “perioperative evaluation,” and with good reason: 50,000 patients each year have a perioperative myocardial infarction. A statement by the Agency for Healthcare Research and Quality proclaims that we have “clear opportunities for safety improvement” in the use of beta-blockers for patients at intermediate and high risk for perioperative cardiovascular complications. The American Heart Association and the American College of Cardiology recommend these medications for patients with either risk factors for or known coronary artery disease who are undergoing high-risk surgeries. Despite all of this, the efficacy of the class has not been proven in large randomized clinical trials.


Using a large national registry of more than 300 U.S. hospitals, Lindenauer, et al., conducted a large observational study of beta-blockade in the perioperative period in patients undergoing major noncardiac surgery. Among more than 700,000 patients, 85% had no recorded contraindication to beta-blockers, yet only 18% of eligible patients received them (n=122,338).

Patients were considered to have received a beta-blocker for prophylaxis if it was given within the first 48 hours of hospitalization, though this may or may not have been the intended use (this information was not provided by the registry database). Only in-hospital mortality was evaluated, as postdischarge information was not available. A revised cardiac risk index was calculated for all patients. This index estimates the risk of perioperative cardiac events based on the nature of the surgery as well as the presence of congestive heart failure, ischemic heart disease, perioperative treatment with insulin, an elevated preoperative creatinine, and cerebrovascular disease. An increasing score means that major perioperative complications become more likely (scores range from 0–5).
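As a rough sketch of the scoring logic described above (one point per factor, with the factor list taken from the text; this is not the registry's exact implementation, and the study grouped scores of four or higher together):

```python
# Sketch of the revised cardiac risk index described above: one point per
# factor present. The factor list comes from the text; this is not the
# exact implementation used in the registry analysis.
def revised_cardiac_risk_index(high_risk_surgery: bool,
                               ischemic_heart_disease: bool,
                               congestive_heart_failure: bool,
                               cerebrovascular_disease: bool,
                               insulin_treatment: bool,
                               elevated_creatinine: bool) -> int:
    return sum([high_risk_surgery, ischemic_heart_disease,
                congestive_heart_failure, cerebrovascular_disease,
                insulin_treatment, elevated_creatinine])

# Example: ischemic heart disease plus an elevated preoperative creatinine
# scores 2, a group in which the study reported a mortality benefit.
print(revised_cardiac_risk_index(False, True, False, False, False, True))  # 2
```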

 

 

Considering all patients, there was no risk reduction of in-hospital death for those receiving beta-blockers. If the revised cardiac risk index score was 0 or 1, the patients had an increase in the risk of death (43% and 13%, respectively). However, those patients whose scores were 2, 3, or 4 or higher had a reduction in the risk of death (from 10% to 43% as their score increased).

How are we to account for these results? In high-risk patients we see a benefit from beta-blockers; we suspect this drug class improves coronary filling during diastole and/or prevents dangerous arrhythmias. In patients at low and intermediate risk, the results may be surprising. The study group did not have patient charts available, so it is possible that these patients were given beta-blockers not for prophylaxis but in response to a postoperative ischemic event or infarction. If this misclassification took place, then the effectiveness of beta-blockers is underestimated, and the suggestion that these drugs are harmful in this situation would be erroneous.

Given the data gleaned from this study and previous publications, we are justified, even obligated, in using beta-blockers in high-risk patients without contraindications who undergo major noncardiac surgery. Before using these drugs in patients at low or intermediate risk, we need more information. Two large ongoing randomized trials (POISE and DECREASE-IV) should bring clarity to this issue; we expect results within the next four years.

A NEW CLINICAL ENTITY: THE HEPATOADRENAL SYNDROME

Marik PE, Gayowski T, Starzl TE, et al. The hepatoadrenal syndrome: a common yet unrecognized clinical condition. Crit Care Med. 2005;33:1254-1259.

It is not uncommon to see temporary dysfunction of the hypothalamic-pituitary-adrenal axis during critical illness. Many physicians who suspect this condition attempt to make the diagnosis with either a random total cortisol level or a cosyntropin stimulation test. End-stage liver disease and sepsis share some elements of their pathophysiology, such as endotoxemia and increased levels of inflammatory mediators.

A liver transplant intensive care unit has produced data on what its investigators have coined the “hepatoadrenal syndrome.” Because of emerging evidence that severe liver disease is associated with adrenal insufficiency, the unit began routinely testing all admitted patients for this condition and presented findings for 340 patients. This review will focus only on those with chronic liver failure and fulminant hepatic failure, because transplant patients are often cared for by a multidisciplinary team. Patients were labeled as having adrenal insufficiency if the random total cortisol level was <20 micrograms (mcg)/dL in patients who were “highly stressed” (i.e., hypotension, respiratory failure). In all other patients, a random total cortisol level <15 mcg/dL or a 30-minute level <20 mcg/dL after low-dose (1 mcg) cosyntropin established the diagnosis. Lipid profiles were also obtained from each patient. Those receiving glucocorticoids were excluded. Whether to treat patients with steroids was left to the discretion of the treating physician.
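The diagnostic thresholds can be restated compactly as follows; this is a simplified sketch of the authors' criteria, and the function and argument names are illustrative.

```python
from typing import Optional

def meets_adrenal_insufficiency_criteria(highly_stressed: bool,
                                         random_cortisol_mcg_dl: float,
                                         stimulated_cortisol_mcg_dl: Optional[float] = None) -> bool:
    """Sketch of the study's criteria for adrenal insufficiency.

    Highly stressed patients (e.g., hypotension, respiratory failure):
    random total cortisol < 20 mcg/dL. All others: random total cortisol
    < 15 mcg/dL, or a 30-minute level < 20 mcg/dL after low-dose (1 mcg)
    cosyntropin.
    """
    if highly_stressed:
        return random_cortisol_mcg_dl < 20
    if random_cortisol_mcg_dl < 15:
        return True
    return stimulated_cortisol_mcg_dl is not None and stimulated_cortisol_mcg_dl < 20

# Example: a hypotensive patient with a random cortisol of 17 mcg/dL meets criteria.
print(meets_adrenal_insufficiency_criteria(True, 17.0))  # True
```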


Eight patients (33%) with fulminant hepatic failure and 97 patients (66%) with chronic liver disease met their criteria for adrenal insufficiency. Of the patients with adrenal insufficiency the mortality rate was 46% for those not treated with glucocorticoids compared with 26% for those receiving glucocorticoid therapy. The HDL level was the only variable predictive of adrenal insufficiency (p<.0001).

The association between HDL levels and cortisol is as follows: the adrenal glands do not store cortisol. Cholesterol is a precursor for the synthesis of steroids; 80% of cortisol arises from it. The preferred lipoprotein substrate for steroid production is HDL. Because a major protein component of HDL is synthesized by the liver, those with liver disease have low levels of serum HDL.

 

 

Recently, our current method of diagnosing adrenal insufficiency during acute illness has been challenged in the literature. Measuring free cortisol rather than total cortisol has been suggested, because the proteins that bind cortisol decrease in this setting while free cortisol levels actually rise. Similar to the picture we see in sepsis, levels of these same binding proteins are low in liver disease.

At this time, testing for free cortisol is not widely available, nor do we have good information on what an “appropriate” free cortisol level should be during acute illness. Therefore, given the frequency with which Marik, et al., report encountering this condition and the effect that treatment had on mortality, this seems to be a diagnosis worth considering.

TREATMENT OPTIONS FOR ATRIAL FIBRILLATION

Wazni OM, Marrouche NF, Martin DO, et al. Radiofrequency ablation vs antiarrhythmic drugs as first-line treatment of symptomatic atrial fibrillation. JAMA. 2005;293(21):2634-2640.

Atrial fibrillation affects millions of people. The diagnosis carries significant mortality, causes strokes, and affects quality of life. Therapy has been less than satisfying: both rate control and rhythm control have multiple potential adverse consequences. Pulmonary vein isolation is performed in the electrophysiology laboratory using an ablation catheter. The goal of the procedure is to completely disconnect the electrical activity between the pulmonary vein antrum and the left atrium; it is a potentially curative procedure for atrial fibrillation.


In a multicenter, prospective, randomized pilot study, Wazni, et al., studied 70 patients with highly symptomatic atrial fibrillation. Patients were between 18 and 75 years old. They could not have undergone ablation in the past, had a history of open-heart surgery, been previously treated with antiarrhythmic drugs, or had a contraindication to long-term anticoagulation. Patients were randomized to antiarrhythmic therapy or pulmonary vein isolation. Those receiving medical treatment were given flecainide, propafenone, or sotalol; amiodarone was used for patients who had failed two or more of these medications. Drugs were titrated to the maximum tolerable doses. The other arm underwent pulmonary vein isolation. This group also received anticoagulation with warfarin beginning the day of the procedure, continued for at least three months and extended if atrial fibrillation recurred or the pulmonary vein was narrowed by 50% or more on a three-month post-procedure CT scan. Follow-up was at least one year. A loop event recorder was worn for one month by all patients, and event recorders were used for patients who remained symptomatic beyond the first three months of therapy.

After one year, symptomatic atrial fibrillation recurred in 63% of the antiarrhythmic group versus 13% in the pulmonary vein isolation group (p<.001). Fifty-four percent of those medically treated were hospitalized versus 9% of pulmonary vein isolation patients (p<.001). There were no thromboembolic events in either group. Bleeding rates were similar in both groups. For those who underwent pulmonary vein isolation 3% had mild pulmonary vein stenosis and 3% had moderate stenosis (all of which were asymptomatic). Five of the eight measures of quality of life were significantly improved in the pulmonary vein isolation arm versus those receiving antiarrhythmic drugs.

Data from trials such as AFFIRM and RACE have recently confirmed that rhythm control does not confer significant benefits over rate control for atrial fibrillation. In fact, rate control seems a more attractive approach for many patients, given the side-effect profile of the antiarrhythmic medications. This study was initiated prior to the release of the RACE and AFFIRM results, so no rate-control arm was included. The trial also differed from previous studies by enrolling a younger, highly symptomatic population, in contrast to recent studies of older patients with recurrent persistent atrial fibrillation.

 

 

The biggest concerns about pulmonary vein isolation are the complication rates (death in 0.05% and stroke in 0.28%). We also do not know whether this procedure will translate into long-term cures. Until we have larger studies, it should not be a first-line modality for treating all patients. Quite often we find patients for whom neither rate nor rhythm control is a particularly attractive option, especially in regard to long-term anticoagulation. Pulmonary vein isolation provides a viable new option for these people, as well as something to consider for carefully selected, highly symptomatic patients. TH

Classic Literature

The GOLDMAN Criteria


In 1930, Butler, et al., first described a potential association between ischemic heart disease and postoperative morbidity and mortality. The Goldman, et al., article was a landmark in describing a formalized approach to the perioperative cardiac evaluation of patients undergoing noncardiac surgery (Multifactorial index of cardiac risk in noncardiac surgical procedures. N Engl J Med. 1977;297:845-850).

Goldman, et al., evaluated 1,001 patients who were operated on by the general, orthopedic, and urologic surgical teams at Massachusetts General Hospital (Boston). They excluded patients who had a transurethral resection of the prostate, an endoscopic procedure, or a minor surgery requiring only local anesthesia. Goldman and his colleagues saw each patient prior to the operation unless the surgery was emergent, in which case they saw the patient in the immediate postoperative period.

They performed histories and physicals tailored to detect risk factors for cardiac disease or physical findings suggestive of it. They also reviewed each patient’s electrocardiogram along with a chest radiograph. Particular attention was paid to the central venous pressure as well as to evidence of aortic stenosis and premature ventricular contractions.

All patients were seen at least once postoperatively; those with cardiac complications were seen more frequently, and medical consultants were involved in their management. All patients’ charts were reviewed daily and again after discharge.

In the study, 19 patients died of postoperative cardiac causes, and 40 additional patients died of noncardiac causes. Thirty-nine patients suffered one or more life-threatening cardiac complications but did not die from them. Using a multivariate analysis, the authors found the following nine factors to be related to the development of cardiac complications:

  1. An S3 gallop or a jugular venous distension;
  2. Recent myocardial infarction;
  3. Rhythm other than sinus;
  4. Five or more premature ventricular contractions prior to surgery;
  5. Intraperitoneal, intrathoracic, or aortic operations;
  6. Age over 70 years;
  7. Important aortic stenosis;
  8. Emergency surgery; and
  9. A poor general medical condition.

These data birthed the famous Cardiac Risk Index. The nine factors were assigned “points” summing to a possible maximum of 53. Patients were then placed into one of four classes of cardiac risk; the higher the class, the greater the patient’s risk of developing cardiac complications in the perioperative period. This became the standard for almost 20 years.
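For readers who want the arithmetic, here is a sketch of how the index sums; the point weights shown are the commonly cited values from the original 1977 paper rather than figures stated in the text above, though they do sum to the 53-point maximum mentioned.

```python
# Sketch of the original Cardiac Risk Index arithmetic. The point weights
# are the commonly cited 1977 values, not stated in the text above; they
# sum to the 53-point maximum the text mentions.
GOLDMAN_POINTS = {
    "s3_gallop_or_jugular_venous_distension": 11,
    "recent_myocardial_infarction": 10,
    "rhythm_other_than_sinus": 7,
    "five_or_more_pvcs": 7,
    "age_over_70": 5,
    "emergency_surgery": 4,
    "intraperitoneal_intrathoracic_or_aortic_operation": 3,
    "important_aortic_stenosis": 3,
    "poor_general_medical_condition": 3,
}
assert sum(GOLDMAN_POINTS.values()) == 53

def goldman_score(findings: set) -> int:
    """Sum the points for the risk factors present."""
    return sum(points for factor, points in GOLDMAN_POINTS.items() if factor in findings)

# Example: a patient older than 70 undergoing an emergency aortic operation.
print(goldman_score({"age_over_70", "emergency_surgery",
                     "intraperitoneal_intrathoracic_or_aortic_operation"}))  # 12
```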

By the mid-1990s there were multiple cardiac risk indices based on Goldman’s original article. In 1996 the American College of Cardiology and the American Heart Association (ACC/AHA) put together a 12-person task force that created guidelines for the evaluation of cardiac risk in the perioperative period for those patients undergoing noncardiac surgery. In 2002 these guidelines were updated. The ACC/AHA guidelines present an eight-step algorithm to assess risk.

While these guidelines have supplanted the recommendations from Goldman’s group, there are still potential pitfalls with them. Though evidence exists in support of the ACC/AHA positions, the guidelines have not been studied in a prospective fashion. The ACC/AHA paper does not provide us with a method for considering those patients with multiple intermediate or minor risk factors. Further, as in the Goldman article, the list of risk factors remains incomplete.

More than 25 years have passed since Goldman’s findings, and we still have unanswered questions. The use of perioperative beta-blockers is addressed in this issue of The Hospitalist (see p. 65). The Coronary Artery Surgery Study found that patients who underwent cardiac revascularization prior to major-risk surgery had their perioperative mortality cut in half compared with those managed medically (1.7% versus 3.3%, p<.05). The ACC/AHA guidelines state that “perioperative intervention is rarely necessary simply to lower the risk of surgery, unless such intervention is indicated irrespective of the perioperative context.”

The Coronary Artery Revascularization Prophylaxis trial, published in 2004, found that patients with clinically significant though stable coronary artery disease undergoing elective vascular surgery did no better after revascularization than with medical management (those with significant stenosis of the left main coronary artery, a left ventricular ejection fraction of less than 20%, or severe aortic stenosis were excluded). We also have emerging data on statins. Given their pleiotropic effects and the observational data we have now, it is not surprising that well-designed trials using statins in the perioperative period to reduce cardiac complications are underway.

Goldman, et al., made a major contribution to this area of consultative medicine. Their paper has had a significant effect on the data that have emerged during the last few decades. For now it remains a challenge for the hospitalist to apply our current knowledge, with its several unanswered questions, to maximize the benefit to the patient during this important chapter in their care.

Issue
The Hospitalist - 2005(10)
Publications
Sections

WORSENING OUTCOMES AND INCREASED RECURRENCE OF CLOSTRIDIUM DIFFICILE AFTER INITIAL TREATMENT WITH METRONIDAZOLE?

Pepin J, Alary ME, Valiquette L, et al. Increasing risk of relapse after treatment of Clostridium difficile colitis in Quebec, Canada. Clin Infect Dis. 2005;40:1591-1597; and Musher DM, Aslam S, Logan N, et al. Relatively poor outcome after treatment of Clostridium difficile colitis with metronidazole. Clin Infect Dis. 2005;40:1586-1590.

Information on treatment of colitis caused by Clostridium difficile began to appear in the late 1970s and early 1980s. Since that time there have been a paucity of novel therapies. It has been well-established that both metronidazole and vancomycin can effectively treat this entity. Traditionally metronidazole has been the first-line agent for C. difficile-associated diarrhea (CDAD). The reasons for this are three:

  1. Randomized controlled trials have shown vancomycin and metronidazole to be equally efficacious;
  2. The cost of oral vancomycin is substantially more than oral metronidazole; and
  3. Many experts have cautioned that using vancomycin may contribute to the blooming number of bacteria that are resistant to vancomycin.

Indeed recommendations from the Centers for Disease Control and Prevention’s Healthcare Infection Control Practices Advisory Committee as well as the American Society for Health-System Pharmacists have supported using metronidazole as our initial agent of choice for CDAD (oral vancomycin is actually the only agent that is approved by the Food and Drug Administration for CDAD). Most of our earlier data claim initial response rates to be 88% or better and relapse rates to be somewhere between 5% and 12% when metronidazole is used.

Two new studies have been published raising a red flag on our current standard of practice. Musher, et al., designed a prospective, observational study in which they followed more 200 patients with CDAD that were initially treated with metronidazole. The patient pool came from a Veterans Affairs Medical Center. They all had a positive fecal ELISA for C. difficile toxin and were treated for seven or more days using at least 1.5 grams per day of metronidazole.

Records were reviewed six weeks prior to the diagnosis and then patients were followed for three months after cessation of therapy. Patients were assigned to four outcome groups:

  1. Complete responders who did not have recurrence over four months;
  2. Refractory-to-treatment where signs and symptoms of CDAD were present for 10 or more days;
  3. Recurrence after initial clinical response with signs and symptoms of CDAD and a positive toxin; and
  4. Clinical recurrence where there was an initial response but a recurrence of signs and symptoms of CDAD without a positive toxin (either the toxin was not present when tested or the test was not done).

Fifty percent were completely cured. Twenty-two percent were refractory to initial therapy. Twenty-eight percent had a recurrence of CDAD within the 90-day period. The mortality was 27%. This was higher among people who had failed to respond to initial therapy (31% versus 21%; p<.05).

Pepin, et al., retrospectively looked at more than 2,000 CDAD cases from one hospital between 1991 and 2004. To be included the patients needed either a positive toxin, endoscopic evidence of pseudomembranous colitis, or histopathologic evidence of pseudomembranous colitis on a biopsy specimen. Patients received at least 1 gram per day of metronidazole for 10 to 14 days. They were considered to have a recurrence if they had diarrhea within two months of the completion of therapy and either a positive toxin at that time or if the attending physician ordered a second course of antibiotics for C. difficile.

 

 

Between 1991 and 2002 the frequency of times that either therapy was changed to vancomycin or vancomycin was added to metronidazole was unchanged (9.6%). During 2003-2004 this more than doubled (25.7%). The number of patients experiencing recurrence over a two-month period comparing data from 1991-2002 to 2003- 2004 was staggering (20.8% versus 47.2%; p<.001). The authors noted that as patients aged the probabilities of recurrence increased.

They also found that a subgroup of patients with a white blood cell count over 20,000 cells/mm3 and an elevated creatinine had a high short-term mortality rate.

Why might we be seeing these results? Several theories exist. Patients are both older and sicker than they have been in the past. Our antibiotic choice is changing with an increase in using agents that provide a more broad-spectrum coverage. Immune responses vary with fewer antitoxin antibodies found in those patients with symptoms and/or recurrence. Metronidazole levels in stool decrease as inflammation and diarrhea resolve; this is not the case with vancomycin where fecal concentrations remain high throughout treatment.

The authors noted that, as patients aged, the probabilities of recurrence increased. They also found that a subgroup of patients with a white blood cell count over 20,000 cells/mm3 and an elevated creatinine had a high short-term mortality rate.

A survey of infectious disease physicians found that they believe antibiotic failure is on the rise in this setting. Before we take this as true, consider the following:

  1. We have no universally accepted clinical definition of what constitutes diarrhea for CDAD;
  2. Previous studies did not look for recurrence as far out from initial treatment as these two did; and
  3. These studies do not have the design to support arguments powerful enough to change our paradigm just yet.

The editorial comment acknowledged the Pepin, et al., report that patients with a high white blood cell count and worsening renal function are those that we should be particularly concerned about. The authors write that if the patient’s white blood cell count is increasing while on therapy that he changes his antibiotic choice to vancomycin. In addition, if someone has either ileus or fulminant CDAD he will use multiple antibiotics and consult the surgeons. At this time we have other agents being studied for CDAD, such as tinidazole. We now need a larger randomized prospective trial to better explore treatment outcomes in CDAD.

HYPERTONIC SALINE SOLUTION TO TREAT REFRACTORY CONGESTIVE HEART FAILURE

Paterna S, Di Pasquale P, Parrinello G, et al. Changes in brain natriuretic peptide levels and bioelectrical impedance measurements after treatment with high-dose furosemide and hypertonic saline solution versus high-dose furosemide alone in refractory congestive heart failure. J Am Coll Cardiol. 2005;45:1997–2003.

CHF continues to increase in prevalence and incidence, despite our advances with therapies using ACE inhibitors, beta-blockers, and aldosterone antagonists. Refractory CHF accounts for a considerable portion of admissions to hospitalists’ services. Loop diuretics are part of the standard of arsenal we employ in these patients. Unfortunately, many patients fail to respond to initial diuretic doses. In this situation we might begin a constant infusion of diuretic or recruit diuretics from other classes in hope of synergism. Another typical approach in treating advanced CHF is restriction of sodium intake.

Paterna, et al., previously published four studies using small-volume hypertonic saline solution and high-dose furosemide in refractory CHF, in which they demonstrated the safety and tolerability of these measures. They now present the first randomized, double-blind trial of this intervention. Ninety-four patients with NYHA functional class IV CHF who had been on standard medical therapy and high doses of diuretics for at least two weeks were included. They had to have a left ventricular ejection fraction <35%, serum creatinine <2 mg/dL, reduced urinary volume (<500 mL/24 h), and low natriuresis (<60 mEq/24 h). They could not be taking NSAIDs.

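Read as a checklist, the inclusion criteria reduce to a handful of thresholds. The Python sketch below restates them; the function and parameter names are assumptions, and the requirement of at least two weeks of standard therapy and high-dose diuretics is omitted for brevity.

```python
def eligible_for_trial(nyha_class: int, lvef_percent: float, creatinine_mg_dl: float,
                       urine_output_ml_24h: float, natriuresis_meq_24h: float,
                       taking_nsaids: bool) -> bool:
    """Inclusion thresholds as summarized above: NYHA class IV CHF, LVEF <35%,
    serum creatinine <2 mg/dL, urine output <500 mL/24 h, natriuresis <60 mEq/24 h,
    and no NSAID use."""
    return (nyha_class == 4
            and lvef_percent < 35
            and creatinine_mg_dl < 2
            and urine_output_ml_24h < 500
            and natriuresis_meq_24h < 60
            and not taking_nsaids)

# A class IV patient with an EF of 25%, creatinine of 1.4 mg/dL, 400 mL of urine, and
# 40 mEq of sodium excretion over 24 hours who is not taking NSAIDs would qualify.
print(eligible_for_trial(4, 25, 1.4, 400, 40, False))  # True
```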
Patients received either intravenous furosemide (500 to 1000 mg) plus hypertonic saline solution bid or the IV furosemide bid alone. Treatment lasted four to six days. Body weights were followed. Brain natriuretic peptide plasma levels were measured on hospital days one and six, as well as 30 days after discharge.

The group receiving hypertonic saline solution had brow-raising results. They had a significant increase in daily diuresis and natriuresis (p<0.05), a difference in brain natriuretic peptide levels on days six and 30, a reduction in their length of stay, and a decrease in their hospital readmission rate.

This is a provocative study. At this time the mechanism responsible for the results is unclear. Paterna, et al., offer multiple explanations. One possibility is the osmotic action of hypertonic saline solution: it may hasten the mobilization of extravascular fluid into the intravascular space, and this volume is then quickly excreted. Hypertonic saline solution may also increase renal blood flow and perfusion, altering sodium handling and natriuresis while allowing the concentration of furosemide in the loop of Henle to reach a more desirable level.

Should these results hold true in other investigations and the inclusion criteria loosen (measuring patients’ urine volume and sodium excretion for 24 hours prior to admission may not be easy or practical), we might have a very inexpensive new method for treating refractory CHF.

PERIOPERATIVE BETA-BLOCKERS: HELPFUL OR HARMFUL FOR MAJOR NONCARDIAC SURGERY?

Lindenauer P, Pekow P, Wang K, et al. Perioperative beta-blocker therapy and mortality after major noncardiac surgery. N Engl J Med. 2005;353:349–361.

Among the most common reasons that hospitalists are consulted is the “perioperative evaluation.” This is with good reason: approximately 50,000 patients each year have a perioperative myocardial infarction. A statement by the Agency for Healthcare Research and Quality proclaims that we have “clear opportunities for safety improvement” in using beta-blockers for patients at intermediate and high risk for perioperative cardiovascular complications. The American Heart Association and the American College of Cardiology recommend these medications in patients with either risk factors for or known coronary artery disease who are undergoing high-risk surgeries. Despite all of this, the efficacy of the class has not been proven by large randomized clinical studies.

Using a national registry of more than 300 U.S. hospitals, Lindenauer, et al., conducted a large observational study evaluating perioperative beta-blockade in patients undergoing major noncardiac surgery. Looking at more than 700,000 patients, they found that 85% had no recorded contraindication to beta-blockers. Only 18% of eligible patients received beta-blockers (n=122,338).

Patients were considered to have received a beta-blocker for prophylaxis if it was given within the first 48 hours of hospitalization, though this may or may not have been the intended use (that information was not provided by the registry database). Only in-hospital mortality was evaluated, as postdischarge information was not available. A revised cardiac risk index was calculated for all patients. This index estimates the risk of perioperative cardiac events based on the nature of the surgery and on whether a history of congestive heart failure, ischemic heart disease, perioperative treatment with insulin, an elevated preoperative creatinine, or cerebrovascular disease is present. An increasing score means that major perioperative complications become more likely (scores range from 0–5).

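Because the index is simply a tally of the factors listed above, it can be written in a few lines. The Python sketch below is a paraphrase for illustration; the dictionary keys and function name are my own, not the registry's scoring code, and the exact scoring conventions should be checked against the index's original description.

```python
RCRI_FACTORS = [
    "high_risk_surgery",          # nature of the surgery
    "ischemic_heart_disease",
    "congestive_heart_failure",
    "cerebrovascular_disease",
    "insulin_treatment",          # perioperative treatment with insulin
    "elevated_preop_creatinine",
]

def revised_cardiac_risk_index(patient: dict) -> int:
    """Count one point per factor present; in this study's analysis, totals of
    4 or higher were grouped together."""
    return sum(1 for factor in RCRI_FACTORS if patient.get(factor, False))

# A patient with heart failure and an elevated creatinine scores 2.
print(revised_cardiac_risk_index({"congestive_heart_failure": True,
                                  "elevated_preop_creatinine": True}))  # 2
```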
Considering all patients, there was no reduction in the risk of in-hospital death for those receiving beta-blockers. Patients with a revised cardiac risk index score of 0 or 1 had an increase in the risk of death (43% and 13%, respectively), whereas those with scores of 2, 3, or 4 or higher had a reduction in the risk of death (from 10% to 43% as the score increased).

How are we to account for these results? In high-risk patients we see benefit from treatment with beta-blockers; we suspect this drug class improves coronary filling time during diastole and/or prevents dangerous arrhythmias. In patients at low and intermediate risk, the results may be surprising. The investigators did not have patient charts available, so it is possible that these patients were given beta-blockers not for prophylaxis but in response to a postoperative ischemic event or infarction. If such misclassification took place, the effectiveness of beta-blockers is underestimated and the suggestion that these drugs are harmful in this situation would be erroneous.

Given the data gleaned from this study and considering previous publications, we are justified, even obligated, in using beta-blockers in high-risk patients without contraindications who undergo major noncardiac surgery. Before using these drugs in patients at low or intermediate risk we need more information. Two large ongoing randomized trials (POISE and DECREASE-IV) should bring clarity to this issue. We expect results from these in the next four years.

A NEW CLINICAL ENTITY: THE HEPATOADRENAL SYNDROME

Marik PE, Gayowski T, Starzl TE, et al. The hepatoadrenal syndrome: a common yet unrecognized clinical condition. Crit Care Med. 2005;33:1254-1259.

It is not uncommon to see temporary dysfunction of the hypothalamic-pituitary-adrenal axis while someone is critically ill. Many physicians who suspect this condition attempt to make the diagnosis using either a random total cortisol level or a cosyntropin stimulation test. End-stage liver disease and sepsis share elements of their pathophysiology, such as endotoxemia and increased levels of inflammatory mediators.

A liver transplant intensive care unit has produced data on what its investigators have coined the “hepatoadrenal syndrome.” Prompted by emerging evidence that severe liver disease is associated with adrenal insufficiency, the unit began routinely testing all admitted patients for this condition and presented findings for 340 patients. This review focuses only on those with chronic liver failure and fulminant hepatic failure, because transplant patients are often cared for by a multidisciplinary team. Patients were labeled as having adrenal insufficiency if the random total cortisol level was <20 micrograms (mcg)/dL in those who were “highly stressed” (e.g., hypotension, respiratory failure). In all other patients, a random total cortisol level <15 mcg/dL or a 30-minute level <20 mcg/dL after low-dose (1 mcg) cosyntropin established the diagnosis. Lipid profiles were obtained from each patient. Those receiving glucocorticoids were excluded. Whether to treat patients with steroids was left to the discretion of the treating physician.

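The diagnostic thresholds above amount to a short decision rule. The Python sketch below restates it; the names are my own and the cutoffs are simply those reported by Marik, et al., as summarized in the preceding paragraph.

```python
from typing import Optional

def meets_adrenal_insufficiency_criteria(highly_stressed: bool,
                                         random_cortisol_mcg_dl: float,
                                         cortisol_30min_post_cosyntropin: Optional[float] = None) -> bool:
    """In 'highly stressed' patients (hypotension, respiratory failure), a random
    total cortisol <20 mcg/dL is diagnostic. In all other patients, a random level
    <15 mcg/dL, or a 30-minute level <20 mcg/dL after 1 mcg of cosyntropin,
    establishes the diagnosis."""
    if highly_stressed:
        return random_cortisol_mcg_dl < 20
    if random_cortisol_mcg_dl < 15:
        return True
    return (cortisol_30min_post_cosyntropin is not None
            and cortisol_30min_post_cosyntropin < 20)

# A normotensive patient with a random cortisol of 16 mcg/dL but a 30-minute stimulated
# level of 18 mcg/dL meets the study's criteria.
print(meets_adrenal_insufficiency_criteria(False, 16, 18))  # True
```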

Eight patients (33%) with fulminant hepatic failure and 97 patients (66%) with chronic liver disease met their criteria for adrenal insufficiency. Of the patients with adrenal insufficiency the mortality rate was 46% for those not treated with glucocorticoids compared with 26% for those receiving glucocorticoid therapy. The HDL level was the only variable predictive of adrenal insufficiency (p<.0001).

The association between HDL levels and cortisol is as follows: the adrenal glands do not store cortisol. Cholesterol is a precursor for steroid synthesis, and 80% of cortisol arises from it. The preferred lipoprotein substrate for steroid production is HDL. Because a major protein component of HDL is synthesized by the liver, patients with liver disease have low serum HDL levels.

Recently, our method of diagnosing adrenal insufficiency during acute illness has been challenged in the literature. Measuring free cortisol rather than total cortisol has been suggested, because cortisol-binding proteins decrease in this setting while free cortisol levels actually rise. As in sepsis, levels of these same binding proteins are low in liver disease.

At this time, testing for free cortisol is not widely available, nor do we have good information on what an “appropriate” free cortisol level should be during acute illness. Therefore, given the frequency with which Marik, et al., report encountering this condition and the effect that treatment had on mortality, this seems to be a diagnosis worth considering.

TREATMENT OPTIONS FOR ATRIAL FIBRILLATION

Wazni OM, Marrouche NF, Martin DO, et al. Radiofrequency ablation vs antiarrhythmic drugs as first-line treatment of symptomatic atrial fibrillation. JAMA. 2005;293(21):2634-2640.

Atrial fibrillation affects millions of people. This diagnosis has significant mortality associated with it, causes strokes, and influences quality of life. Therapy has been less than satisfying: both rate control and rhythm control have multiple potential adverse consequences. Pulmonary vein isolation is performed in the electrophysiology laboratory using an ablation catheter. The goal of this procedure is to completely disconnect the electrical activity between the pulmonary vein antrum and the left atrium. It is a potentially curative procedure for atrial fibrillation.

In a multicenter, prospective, randomized pilot study, Wazni, et al., studied 70 patients with highly symptomatic atrial fibrillation. Patients were between 18 and 75 years old. They could not have undergone ablation in the past, had a history of open-heart surgery, been previously treated with antiarrhythmic drugs, or had a contraindication to long-term anticoagulation. Patients were randomized to antiarrhythmic therapy or pulmonary vein isolation. Those receiving medical treatment were given flecainide, propafenone, or sotalol; amiodarone was used for patients who had failed at least two of these medications. Drugs were titrated to the maximum tolerable doses. The other arm underwent pulmonary vein isolation and received anticoagulation with warfarin beginning the day of the procedure, continued for at least three months. Anticoagulation was extended beyond this time if atrial fibrillation recurred or a pulmonary vein was narrowed by 50% or more on a three-month postprocedure CT scan. Follow-up was at least one year. A loop event recorder was worn for one month by all patients, and event recorders were used for patients who remained symptomatic beyond the first three months of therapy.

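The protocol's anticoagulation rule can be restated explicitly. The Python sketch below paraphrases it; the names are assumptions and three months is approximated as 90 days.

```python
def continue_warfarin(days_since_ablation: int, af_recurred: bool,
                      pulmonary_vein_narrowing_pct: float) -> bool:
    """Per the protocol as summarized above: continue warfarin for at least three
    months after pulmonary vein isolation, and extend it if atrial fibrillation
    recurs or the three-month CT shows pulmonary vein narrowing of 50% or more."""
    if days_since_ablation < 90:  # minimum three-month course, approximated as 90 days
        return True
    return af_recurred or pulmonary_vein_narrowing_pct >= 50

# Four months out, with no recurrence and only 20% narrowing, anticoagulation may stop.
print(continue_warfarin(120, False, 20))  # False
```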
After one year, symptomatic atrial fibrillation recurred in 63% of the antiarrhythmic group versus 13% in the pulmonary vein isolation group (p<.001). Fifty-four percent of those medically treated were hospitalized versus 9% of pulmonary vein isolation patients (p<.001). There were no thromboembolic events in either group. Bleeding rates were similar in both groups. For those who underwent pulmonary vein isolation 3% had mild pulmonary vein stenosis and 3% had moderate stenosis (all of which were asymptomatic). Five of the eight measures of quality of life were significantly improved in the pulmonary vein isolation arm versus those receiving antiarrhythmic drugs.

Recent data from trials such as AFFIRM and RACE confirm that rhythm control does not confer significant benefit over rate control in atrial fibrillation. In fact, rate control seems the more attractive approach to many patients given the side-effect profile of antiarrhythmic medications. This study was initiated before the results of RACE and AFFIRM were released, so no rate-control arm was included. It also differed from recent studies of older patients with recurrent persistent atrial fibrillation by enrolling a younger, highly symptomatic population.

The biggest concerns about pulmonary vein isolation are the complication rates (death in 0.05% and stroke in 0.28%). We also don’t know whether this procedure will translate into long-term cures. Until we have larger studies, it should not be a first-line modality for all patients. Quite often we find patients for whom neither rate nor rhythm control is a particularly attractive option, especially in regard to long-term anticoagulation. Pulmonary vein isolation provides a viable new option for these people, as well as something to consider for carefully selected, highly symptomatic patients. TH

Classic Literature

The GOLDMAN Criteria

In 1930, Butler, et al., first described a potential association between ischemic heart disease and postoperative morbidity and mortality. The Goldman, et al., article was a landmark in describing a formalized approach to the perioperative cardiac evaluation of patients undergoing noncardiac surgery (Multifactorial index of cardiac risk in noncardiac surgical procedures. N Engl J Med. 1977;297:845-850).

Goldman, et al., evaluated 1,001 patients who were operated on by the general, orthopedic, and urologic surgical teams at Massachusetts General Hospital (Boston). They excluded patients who had a transurethral resection of the prostate, an endoscopic procedure, or a minor surgery requiring only local anesthesia. Goldman and his colleagues saw each patient prior to the operation unless the surgery was emergent, in which case they saw the patient in the immediate postoperative period.

They performed histories and physicals tailored to detect risk factors for cardiac disease or physical findings suggestive of it. They also reviewed each patient’s electrocardiogram along with a chest radiograph. Particular attention was paid to the central venous pressure as well as to evidence of aortic stenosis and premature ventricular contractions.

All patients were seen at least once postoperatively. Those with cardiac complications were seen more frequently, and medical consultants were involved in their management. All patients’ charts were reviewed daily and again after discharge.

In the study, 19 patients died of postoperative cardiac causes, and 40 additional patients died of noncardiac causes. Thirty-nine patients suffered one or more cardiac complications considered life-threatening but did not die from them. Using a multivariate analysis, the authors found the following nine factors to be related to the development of cardiac complications:

  1. An S3 gallop or a jugular venous distension;
  2. Recent myocardial infarction;
  3. Rhythm other than sinus;
  4. Five or more premature ventricular contractions prior to surgery;
  5. Intraperitoneal, intrathoracic, or aortic operations;
  6. Age over 70 years;
  7. Important aortic stenosis;
  8. Emergency surgery; and
  9. A poor general medical condition.

These data gave rise to the famous Cardiac Risk Index. The nine factors were assigned “points” that could sum to a maximum of 53. Patients were then placed into one of four cardiac risk classes; the higher the class, the greater the patient’s risk of developing cardiac complications in the perioperative period. This became the standard for almost 20 years.

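To make the arithmetic concrete, here is a small Python sketch of how such a weighted index is applied. The point values and class cutoffs are the ones commonly reproduced from Goldman's 1977 paper rather than figures taken from the summary above, so they should be verified against the original publication.

```python
# Commonly cited weights from the 1977 Cardiac Risk Index (verify against the original paper).
GOLDMAN_POINTS = {
    "s3_gallop_or_jvd": 11,
    "recent_myocardial_infarction": 10,
    "rhythm_other_than_sinus": 7,
    "over_5_pvcs_per_minute": 7,
    "age_over_70": 5,
    "emergency_surgery": 4,
    "intraperitoneal_intrathoracic_or_aortic_operation": 3,
    "important_aortic_stenosis": 3,
    "poor_general_medical_condition": 3,
}  # maximum possible total: 53 points

def goldman_risk_class(findings: set) -> str:
    """Sum the points for the findings present and map the total to one of four
    classes (commonly cited cutoffs: I = 0-5, II = 6-12, III = 13-25, IV = 26+)."""
    total = sum(GOLDMAN_POINTS[finding] for finding in findings)
    if total <= 5:
        return "Class I"
    if total <= 12:
        return "Class II"
    if total <= 25:
        return "Class III"
    return "Class IV"

# A 75-year-old undergoing an emergency intraperitoneal operation: 5 + 4 + 3 = 12 points, Class II.
print(goldman_risk_class({"age_over_70", "emergency_surgery",
                          "intraperitoneal_intrathoracic_or_aortic_operation"}))  # Class II
```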
By the mid-1990s there were multiple cardiac risk indices based on Goldman’s original article. In 1996 the American College of Cardiology and the American Heart Association (ACC/AHA) put together a 12-person task force that created guidelines for the evaluation of cardiac risk in the perioperative period for those patients undergoing noncardiac surgery. In 2002 these guidelines were updated. The ACC/AHA guidelines present an eight-step algorithm to assess risk.

While these guidelines have supplanted the recommendations from Goldman’s group, there are still potential pitfalls with them. Though evidence exists in support of the ACC/AHA positions, the guidelines have not been studied in a prospective fashion. The ACC/AHA paper does not provide us with a method for considering those patients with multiple intermediate or minor risk factors. Further, as in the Goldman article, the list of risk factors remains incomplete.

More than 25 years have passed since Goldman’s findings, and we still have unanswered questions. The use of perioperative beta-blockers is addressed in this issue of The Hospitalist. (See p. 65.) The Coronary Artery Surgery Study found that patients who underwent cardiac revascularization prior to major-risk surgery had their perioperative mortality cut in half compared with those managed medically (3.3% versus 1.7%, p<.05). The ACC/AHA guidelines state that “perioperative intervention is rarely necessary simply to lower the risk of surgery, unless such intervention is indicated irrespective of the perioperative context.”

The Coronary Artery Revascularization Prophylaxis trial, published in 2004, found that those with clinically significant though stable coronary artery disease did no better after revascularization than those medically managed for elective vascular surgeries (those with significant stenosis of the left main coronary artery, a left ventricular ejection fraction of less than 20%, and severe aortic stenosis were excluded). We also have emerging data on statins. Given their pleiotropic effects and the observational data we have now it is not surprising that well-designed trials using statins in the perioperative period to reduce cardiac complications are underway.

Goldman, et al., made a major contribution to this area of consultative medicine. Their paper has had a significant effect on the data that have emerged during the last few decades. For now it remains a challenge for the hospitalist to apply our current knowledge, with its several unanswered questions, to maximize the benefit to the patient during this important chapter in their care.
