Other Literature of Interest

1. Chimowitz MI, Lynn MJ, Howlett-Smith H, et al. Comparison of warfarin and aspirin for symptomatic intracranial arterial stenosis. N Engl J Med. 2005;352:1305-16.

This is the first prospective study comparing antithrombotic therapies for patients with atherosclerotic stenosis of major intracranial arteries. This multicenter, NINDS-sponsored, placebo-controlled, blinded study randomized 569 patients to aspirin (650 mg twice daily) or warfarin (initially 5 mg daily, titrated to achieve an INR of 2.0–3.0) and followed them for nearly 2 years. The study was terminated early over safety concerns about patients in the warfarin group. Baseline characteristics of the 2 groups were not significantly different. Warfarin was not more effective than aspirin in its effect on the primary endpoints of ischemic stroke, brain hemorrhage, or vascular death other than from stroke (as defined in the study protocol). However, major cardiac events (myocardial infarction or sudden death) were significantly more frequent in the warfarin group, as was major hemorrhage (defined as any intracranial or systemic hemorrhage requiring hospitalization, transfusion, or surgical intervention). The authors note the difficulty of maintaining the INR in the target range (achieved only 63.1% of the time during the maintenance period, an observation in line with other anticoagulation studies). In an accompanying editorial, Dr. Koroshetz of the stroke service at the Massachusetts General Hospital also observed that difficulties in achieving the therapeutic goal with warfarin could have affected the results. The authors also note that the dose of aspirin employed in this study is somewhat higher than in previous trials. Nevertheless, until other data emerge, this investigation’s results favor aspirin over warfarin for this high-risk condition.

2. Cornish PL, Knowles SR, Marchesano R, et al. Unintended medication discrepancies at the time of hospital admission. Arch Intern Med. 2005;165:424-9.

Of the various types of medical errors, medication errors are believed to be the most common. At the time of hospital admission, medication discrepancies may lead to unintended drug interactions, toxicity, or interruption of appropriate drug therapies. These investigators performed a prospective study to identify unintended medication discrepancies between the patient’s home medications and those ordered at the time of the patient’s admission and to evaluate the potential clinical significance of these discrepancies.

This study was conducted on the general medicine teaching service of a 1,000-bed tertiary care hospital in Canada. A member of the study team reviewed each medical record to ascertain the physician-recorded medication history, the nurse-recorded medication history, the admission medication orders, and demographic information. A comprehensive list of all of the patient’s prescription and nonprescription drugs was compiled by interviewing patients, families, and pharmacists, and by inspecting the medication bottles. A discrepancy was defined as any difference between this comprehensive list and the admission medication orders. Discrepancies were categorized as omission or addition of a medication, substitution of an agent within the same drug class, or change in the dose, route, or frequency of administration of an agent. The medical team caring for the patient was then asked whether each discrepancy was intended, and the team reconciled any unintended discrepancies. Unintended discrepancies were further classified by 3 medical hospitalists into Classes 1, 2, and 3, in increasing order of potential harm. One hundred fifty-one patients were included in the analysis. A total of 140 errors occurred in 81 patients (54%), for an overall error rate of 0.93 per patient. Of the errors, 46% were omissions of a regularly prescribed medication, 25% involved discrepant doses, 17.1% involved discrepant frequency, and 11.4% were incorrect drugs. By severity, 61% were designated Class 1, 33% Class 2, and 5.7% Class 3; interrater agreement was modest (kappa of 0.26). The discrepancies were not associated with night or weekend admissions, high patient volume, or high numbers of medications.
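The reported kappa of 0.26 quantifies agreement beyond chance. As a minimal sketch, the following computes Cohen's kappa for two raters over hypothetical Class 1–3 severity ratings; the study used 3 hospitalist raters and does not specify the exact kappa variant, so this is illustrative only, not the study's computation.

    def cohens_kappa(ratings_a, ratings_b, categories=(1, 2, 3)):
        # Observed agreement minus chance agreement, scaled by (1 - chance).
        n = len(ratings_a)
        observed = sum(x == y for x, y in zip(ratings_a, ratings_b)) / n
        expected = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
                       for c in categories)
        return (observed - expected) / (1 - expected)

    # Hypothetical severity ratings for 10 discrepancies (not study data):
    a = [1, 1, 1, 2, 2, 3, 1, 2, 1, 3]
    b = [1, 2, 1, 1, 2, 2, 1, 3, 2, 3]
    print(round(cohens_kappa(a, b), 2))  # 0.22, i.e., only fair agreement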

Real-time clinical correlation with the responsible physicians allowed distinction of intended from unintended discrepancies, presumably improving the accuracy of the error rate measurement. This study confirms the relatively high error rates previously reported. Further study can focus on possible interventions to minimize these errors.

3. Liperoti R, Gambassi G, Lapane KL, et al. Conventional and atypical antipsychotics and the risk of hospitalization for ventricular arrhythmias or cardiac arrest. Arch Intern Med. 2005;165:696-701.

As the number of hospitalized elderly and demented patients increases, use of both typical and atypical antipsychotics has become prevalent. QT prolongation, ventricular arrhythmia, and cardiac arrest are more commonly associated with the older conventional antipsychotics than with newer atypical agents. This case-control study was conducted to estimate the effect of both conventional and atypical antipsychotic use on the risk of hospital admission for ventricular arrhythmia or cardiac arrest.

The patient population consisted of elderly nursing home residents in 6 US states. The investigators utilized the Systematic Assessment of Geriatric Drug Use via Epidemiology database, which contains data from the Minimum Data Set (MDS), a standardized data set required of all certified nursing homes in the United States. Case patients were selected by ICD-9 codes for cardiac arrest or ventricular arrhythmia; control patients were selected via ICD-9 codes for 6 other common inpatient diagnoses. Antipsychotic exposure was determined from the most recent nursing home assessment prior to admission. Exposed patients were those who received atypical antipsychotics (risperidone, olanzapine, quetiapine, or clozapine) or conventional agents (haloperidol and others). After control for potential confounders, users of conventional antipsychotics showed an 86% increase in the risk of hospitalization for ventricular arrhythmias or cardiac arrest compared with nonusers (OR: 1.86). No increased risk was seen for users of atypical antipsychotics (OR: 0.87). Compared with atypical antipsychotic use, conventional antipsychotic use carried an OR of 2.13 for these cardiac outcomes. Among users of conventional antipsychotics, those with and without cardiac disease were 3.27 times and 2.05 times, respectively, more likely to be hospitalized for ventricular arrhythmias or cardiac arrest than nonusers without cardiac disease.
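For readers who want the arithmetic behind the reported odds ratios, the sketch below computes an unadjusted OR from a 2x2 case-control table. The counts are hypothetical, and the study's ORs were additionally adjusted for potential confounders, so this illustrates only the basic measure.

    def odds_ratio(a, b, c, d):
        # a = exposed cases, b = unexposed cases,
        # c = exposed controls, d = unexposed controls
        return (a * d) / (b * c)

    # Hypothetical counts: 93 of 500 cases vs. 52 of 500 controls exposed
    # to a conventional antipsychotic (not study data).
    print(round(odds_ratio(93, 407, 52, 448), 2))  # 1.97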

These results suggest that atypical antipsychotics may carry less cardiac risk than conventional agents. In an inpatient population with advancing age and increasing prevalence of dementia and cardiac disease, use of atypical antipsychotic agents may be safer than older, typical agents.

4. Mayer SA, Brun NC, Begtrup K, et al. Recombinant activated factor VII for acute intracerebral hemorrhage. N Engl J Med. 2005;352:777-85.

This placebo-controlled, double-blind, multicenter, industry-sponsored trial of early treatment of hemorrhagic stroke with rFVIIa at 3 escalating doses evaluated hematoma growth, mortality, and functional outcomes up to 90 days. The authors note the substantial mortality and high morbidity of this condition, which currently lacks definitive treatment. Patients presenting within 3 hours of symptom onset with intracerebral hemorrhage on CT who met study criteria were randomized to receive placebo or a single intravenous dose of 40, 80, or 160 mcg/kg of rFVIIa within 1 hour of the baseline CT and no more than 4 hours after symptom onset. Follow-up CTs were obtained at 24 and 72 hours, and functional assessments were performed serially throughout the study period. Three hundred ninety-nine patients were analyzed and were similar in their baseline characteristics. Lesion volume was significantly smaller with treatment, in a dose-dependent fashion. Mortality at 3 months was significantly lower with treatment (18% vs. 29% with placebo), and all 4 of the global functional outcome scales utilized favored treatment, 3 of them (the modified Rankin Scale for all doses, the NIH Stroke Scale for all doses, and the Barthel Index at the 80 and 160 mcg/kg doses) in a statistically significant fashion. However, the authors noted an increase in serious thromboembolic events in the treatment groups, with a statistically significant increase in arterial thromboembolic events. These included myocardial ischemic events and cerebral infarction, and most occurred within 3 days of rFVIIa treatment. Of note, the majority of patients who suffered these events recovered from their complications, and the overall rates of fatal or disabling thromboembolic events were similar between the treatment and placebo groups. This study offers new and exciting insights into potential therapy for this serious form of stroke, although the safety concerns merit further study.

5. Siguret V, Gouin I, Debray M, et al. Initiation of warfarin therapy in elderly medical inpatients: a safe and accurate regimen. Am J Med. 2005;118:137-42.

Table 1. Loading Dose Schedule for Warfarin Initiation

Warfarin therapy is widely used in geriatric populations. Over-anticoagulation sometimes occurs when warfarin is initiated in the hospital using standard loading and maintenance doses, mainly because of decreased hepatic clearance and polypharmacy in this population. A recent study in France demonstrated a simple, useful low-dose regimen for starting warfarin therapy (target INR: 2.0–3.0) in the elderly without over-anticoagulation. The patients enrolled were typical geriatric patients with multiple comorbid conditions, and many received concomitant medications known to potentiate the effect of warfarin. One hundred six consecutive inpatients (age ≥70 years, mean age 85 years) were given a 4-mg induction dose of warfarin for 3 days, and the INR was measured on the 4th day. From this point, the daily warfarin dose was adjusted according to an algorithm (see Table 1), and INR values were obtained every 2–3 days until the actual maintenance dose was determined. The maintenance dose was defined as the amount of warfarin required to yield an INR in the 2.0–3.0 range on 2 consecutive samples obtained 48–72 hours apart in the absence of any dosage change for at least 4 days. Based on this algorithm, the predicted daily warfarin dose (3.1 ± 1.6 mg/day) correlated closely with the actual maintenance dose (3.2 ± 1.7 mg/day). The average time needed to achieve a therapeutic INR was 6.7 ± 3.3 days. None of the patients had an INR >4.0 during the induction period, and the regimen required fewer INR measurements.
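Because Table 1 is not reproduced here, the following is only a minimal sketch of the induction framework: a fixed 4-mg dose for 3 days, a day-4 INR, and a lookup of the predicted maintenance dose. The INR cut points and doses below are hypothetical placeholders, not the published Table 1.

    def predicted_maintenance_dose(inr_day4, table):
        # table: (lower INR bound, predicted dose in mg/day) pairs, ascending;
        # returns the dose for the highest bound not exceeding the day-4 INR.
        dose = table[0][1]
        for bound, d in table:
            if inr_day4 >= bound:
                dose = d
        return dose

    # HYPOTHETICAL adjustment table: a higher day-4 INR maps to a lower dose.
    EXAMPLE_TABLE = [(0.0, 5.0), (1.3, 4.0), (1.5, 3.0), (1.7, 2.0), (1.9, 1.0)]

    # Days 1-3: 4 mg daily; measure the INR on day 4, then look up the dose.
    print(predicted_maintenance_dose(1.6, EXAMPLE_TABLE))  # 3.0 (mg/day)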

Intracranial hemorrhage and gastrointestinal bleeding are serious complications of over-anticoagulation. The majority of gastrointestinal bleeding episodes respond to withholding warfarin and reversing anticoagulation; intracranial hemorrhage, however, frequently leads to devastating outcomes. A recent report suggested that age over 85 years and an INR of 3.5 or greater were associated with increased risk of intracranial hemorrhage. The warfarin algorithm proposed in this study provides a simple, safe, and effective tool for predicting warfarin dosing in elderly hospitalized patients without over-anticoagulation. Although the regimen still needs to be validated in a larger patient population, it could be incorporated into computer-based order entry programs in the hospital setting to guide physicians in initiating warfarin therapy.

6. Wisnivesky JP, Henschke C, Balentine J, Willner C, Deloire AM, McGinn TG. Prospective validation of a prediction model for isolating inpatients with suspected pulmonary tuberculosis. Arch Intern Med. 2005;165:453-7.

Table 2. Prediction Model for Isolating Suspected TB patients

Whether to isolate a patient for suspected pulmonary tuberculosis (TB) is often a balancing act between clinical risk assessment and optimal hospital resource utilization. Practitioners need a relatively simple but sophisticated tool that they can use at the bedside to assess the likelihood of TB more precisely, permitting more efficient and effective triage.

These authors previously developed such a tool, with a sensitivity of 98% and a specificity of 46% (see Table 2 for details). This study was designed to validate the decision rule in a new set of patients. Patients were enrolled at 2 tertiary-care hospitals in the New York City area over a 21-month period. All were admitted and isolated because of clinical suspicion of pulmonary TB, without use of the decision rule under study. Study team members collected demographic information, clinical risk factors, presenting symptoms and signs, and laboratory and radiographic findings. Chest x-ray findings were reviewed by investigators blinded to the other clinical and demographic information. The gold standard of diagnosis was at least 1 sputum culture positive for Mycobacterium tuberculosis.

A total of 516 patients were enrolled. Of the 516, 19 (3.7%) were found to have culture-proven pulmonary TB. Univariate analyses showed that a history of a positive PPD, higher (98% vs. 95%) oxygen saturation, upper-lobe consolidation (but not upper-lobe cavity), and lymphadenopathy (hilar, mediastinal, or paratracheal) were all associated with the presence of pulmonary TB. Shortness of breath was associated with the absence of TB. A total score of 1 or higher in the prediction rule had a sensitivity of 95% for pulmonary TB, and a score of less than 1 had a specificity of 35%. At the estimated prevalence of 3.7%, this yielded a positive predictive value of 9.6% and a negative predictive value of 99.7%. The investigators estimated that 35% of the isolated patients would not have been isolated had this prediction rule been applied.
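The predictive values follow from Bayes' rule applied to sensitivity, specificity, and prevalence. The sketch below plugs in the rounded summary statistics quoted above, so its output is only approximate; the investigators' published figures (PPV 9.6%, NPV 99.7%) were derived from their cohort data and model and differ somewhat. The second prevalence illustrates the point, made below, about less endemic settings.

    def predictive_values(sens, spec, prev):
        # Bayes' rule for a binary test: returns (PPV, NPV).
        tp = sens * prev              # true positives
        fp = (1 - spec) * (1 - prev)  # false positives
        tn = spec * (1 - prev)        # true negatives
        fn = (1 - sens) * prev        # false negatives
        return tp / (tp + fp), tn / (tn + fn)

    for prev in (0.037, 0.01):  # observed prevalence, then a less endemic setting
        ppv, npv = predictive_values(sens=0.95, spec=0.35, prev=prev)
        print(f"prevalence {prev:.1%}: PPV {ppv:.1%}, NPV {npv:.1%}")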

Though validated scientifically, this tool still has a false-negative rate of 5%. In a less endemic area, the false-negative rate would be correspondingly lower and thus more acceptable from a public health perspective. This is one step closer to a balance of optimal bed utilization and reasoned clinical assessment.

In the Literature

Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors

Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005;293:1197-1203.

Computerized physician order entry (CPOE) has been touted as an effective means to reduce medical errors, especially medication errors. Preliminary studies showed both potential and actual error reductions with CPOE, but more recent data suggest that such systems may facilitate errors as well.

Koppel et al. aimed to study CPOE system-related factors that may actually increase the risk of medication errors. The authors conducted structured interviews with end users (housestaff, pharmacists, nurses, nurse managers, and attending physicians); real-time observations of end users interfacing with the system, entering orders, charting medications, and reviewing orders; and focus groups with housestaff. These qualitative data were used to generate a 71-question structured survey subsequently given to the housestaff. The questions pertained to working conditions, sources of stress, and errors. There were 261 responses, representing an 88% response rate.

Twenty-two previously unexplored potential sources of medication error abstracted from the survey were grouped into 2 categories: 1) information errors, and 2) human-machine interface flaws. The first category refers to fragmented data and the disparate information systems within hospitals; the second includes rigid machine programming that does not correspond to or facilitate workflow. Only 10 survey elements with sufficiently robust results were reported. About 40% of respondents used CPOE to determine the dosage of infrequently prescribed medications at least once a week; incorrect doses may be ordered if users follow dosage information in the system that is based on drug inventory rather than clinical recommendations. Twenty-two percent of respondents noted that, more than once a week, duplicate or conflicting medications were ordered and not detected for several hours; disorganized display of patient medications was believed to be partly responsible. More than 80% of respondents noted an unintended delay in renewing antibiotics at least once, a gap made possible in part because the renewal reminder resided in the paper chart while order entry was done on the computer. With respect to the human-machine interface, 55% reported difficulty identifying the correct patient because of poor or fragmented displays, and 23% reported this occurring more than a few times per week. System downtime leading to delays in order entry was reported by 47% to occur more than once a week. System inflexibility also led to difficulties in specifying medications and ordering nonformulary medications; 31% reported this occurring at least several times a week, and 24% daily or more frequently.

This was a survey of end users of a CPOE system at a single institution, and the survey elements were mainly estimates of error risks. Nevertheless, it appropriately draws attention to the importance of the unique culture of each institution, efficient workflow, and a coherent human-machine interface. The anticipated error reductions may not materialize if these issues are neglected. Hospitalists can serve a critical role in the implementation and customization of CPOE systems so that clinicians can do the right thing in a more timely and efficient manner.

Risk Stratification for In-hospital Mortality in Acutely Decompensated Heart Failure: Classification and Regression Tree Analysis

Fonarow GC, Adams KF, Abraham WT, Yancy CW, Boscardin WJ; ADHERE Scientific Advisory Committee, Study Group, and Investigators. Risk stratification for in-hospital mortality in acutely decompensated heart failure: classification and regression tree analysis. JAMA. 2005;293:572-80.

Heart failure is an important and growing cause of hospitalization in this country, and it is one of the most common clinical entities encountered by hospitalists. While some risk assessment tools are available for outpatients with heart failure, no risk stratification tool had been published for inpatients. In this study by Fonarow et al. in JAMA, the authors describe a simple risk-stratification formula for in-hospital mortality in patients with acutely decompensated heart failure. Data from the ADHERE registry (Acute Decompensated Heart Failure National Registry, which is industry sponsored, as was this study) were used to model the risk of in-hospital death using classification and regression tree (CART) analysis. This was done in a 2-stage process. First, the investigators established a derivation cohort of approximately 33,000 patients (sequential hospital admissions from October 2001 to February 2003) from the ADHERE registry and used the CART method to analyze 39 clinical variables to determine which were the best predictors of in-hospital mortality. This analysis was used to derive a risk tree partitioning patients into low-, intermediate-, and high-risk groups. Second, the validity of this method was tested by applying the prediction tool to a cohort of the subsequent 32,229 patients hospitalized in the ADHERE registry, from March 2003 to July 2003.

The results were striking. Baseline characteristics and clinical outcomes of the derivation and validation cohorts were similar across the wide range of parameters examined. Mortality was 23.6% in the highest-risk category and 1.8% in the low-risk category; the intermediate group was further stratified into 3 levels, with mortality of 20.0%, 5.0%, and 5.1% in intermediate levels 1, 2, and 3, respectively. Aside from the more than 10-fold range in mortality risk across the groups, the outstanding feature of the authors’ findings was that 3 simple parameters were the most significant predictors of in-hospital mortality risk: blood urea nitrogen (BUN), systolic blood pressure (SBP), and serum creatinine. Specifically, combinations of a BUN of 43 mg/dL or greater, a serum creatinine of 2.75 mg/dL or greater, and an SBP of less than 115 mm Hg were associated with higher mortality; adding other predictors did not meaningfully increase the model’s accuracy. The authors comment that, unlike other predictive models based on multivariate analyses (which are often complex and therefore difficult to employ at the bedside), this simple tool is easy to use. An additional advantage is that the data needed are typically available at the time of admission and can therefore inform timely triage to an appropriate level of care. Similar risk assessment tools exist for patients with the acute coronary syndrome, and given the frequency with which patients are admitted with acutely decompensated heart failure, this new tool should prove a welcome addition to the clinical decision-making abilities of hospitalists.
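Since the tree itself is not reproduced in this summary, the following is a minimal sketch of how a 3-variable risk tree of this kind can be applied at the bedside. The thresholds are those quoted above; the split order and the intermediate-level subdivisions are assumptions for illustration, and the published CART tree defines the actual structure.

    def adhere_risk_sketch(bun_mg_dl, sbp_mm_hg, creat_mg_dl):
        # Classify in-hospital mortality risk from three admission values.
        if bun_mg_dl < 43:
            return "low" if sbp_mm_hg >= 115 else "intermediate"
        if sbp_mm_hg >= 115:  # BUN >= 43 mg/dL
            return "intermediate"
        # BUN >= 43 mg/dL and SBP < 115 mm Hg
        return "high" if creat_mg_dl >= 2.75 else "intermediate"

    print(adhere_risk_sketch(bun_mg_dl=50, sbp_mm_hg=100, creat_mg_dl=3.1))  # high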

Risk of Endocarditis among Patients with Prosthetic Valves and Staphylococcus aureus Bacteremia

El-Ahdab F, Benjamin DK, Wang A, et al. Risk of endocarditis among patients with prosthetic valves and Staphylococcus aureus bacteremia. Am J Med. 2005;118:225-9.

With more than 600,000 prosthetic valves implanted annually in the United States, the risk of endocarditis developing in patients with prosthetic valves and Staphylococcus aureus bacteremia is a growing concern. A prospective study at Duke University identified 51 patients with prosthetic valves or a mitral ring who developed S. aureus bacteremia. The modified Duke criteria were used for the diagnosis of endocarditis, and the onset and sources of bacteremia, the locations where bacteremia was acquired, and clinical outcomes were analyzed. The overall incidence of definite prosthetic valve endocarditis was as high as 51%, with the remaining 49% of patients meeting Duke criteria for possible endocarditis. Endocarditis occurred frequently with prostheses in the mitral (62%) and aortic (48%) positions, and somewhat less often with a mitral ring (33%). Mechanical and bioprosthetic valves had endocarditis rates of 62% and 44%, respectively. About 63% of patients had early-onset bacteremia (<1 year after valve placement) and 37% had late-onset bacteremia (>1 year after valve placement). Overall, the most common source of bacteremia was an infected surgical wound site (33%). Early bacteremia was more likely to result from an infected surgical wound site (59%), while late bacteremia was more likely to have an unidentified source (48%). The largest proportion of bacteremia episodes (47%) was hospital-acquired (i.e., the positive blood culture occurred >72 hours after admission); healthcare-associated and community-acquired bacteremia each accounted for approximately 26%–27%.

In terms of mortality, there was no difference between early and late S. aureus bacteremia, between bioprosthetic and mechanical valves, or between infection due to methicillin-resistant and methicillin-susceptible S. aureus. However, mortality was higher among patients with definite endocarditis (62%) than among those with possible endocarditis (28%). Patients with endocarditis who underwent valve surgery had lower mortality than those who could not undergo valve surgery because of inoperable comorbid conditions, such as stroke, multiorgan system failure, and mediastinitis. Persistent fever (≥38°C after 72 hours of adequate parenteral antibiotics) and persistent bacteremia (a positive blood culture within 2–4 days of the initial positive blood culture) were independently associated with definite endocarditis, with odds ratios of 4.4 and 11.7, respectively. Overall, 96% of patients underwent echocardiography (55% with both transesophageal and transthoracic studies, 14% with transesophageal only, and 27% with transthoracic only). However, 10% of patients with definite endocarditis had no diagnostic finding on either transthoracic or transesophageal echocardiography.

S. aureus bacteremia is common in inpatient settings. This study demonstrated an approximately 50% rate of definite prosthetic valve endocarditis in patients with S. aureus bacteremia, a risk independent of valve type, location, and duration of implantation. It highlights the need for aggressive evaluation and treatment of S. aureus bacteremia in patients with prosthetic valves. Clinically, persistent fever and persistent bacteremia were independently associated with definite endocarditis in this population, and clinicians should not rely on transesophageal echocardiography alone to exclude occult endocarditis in high-risk patients.

Optimizing the Prediction of Perioperative Mortality in Vascular Surgery by Using a Customized Probability Model

Kertai MD, Boersma E, Klein J, van Urk H, Poldermans D. Optimizing the prediction of perioperative mortality in vascular surgery by using a customized probability model. Arch Intern Med. 2005;165:898-904.

Traditional perioperative risk-assessment models and indexes have focused primarily on cardiac outcomes and involved mainly clinical risk factors. The model proposed in this paper focused instead on overall mortality and incorporated not only clinical risk factors but also more precise surgery-specific risks and the use of beta-blocker and statin agents.

Figure. Customized Probability Model for Perioperative All-cause Mortality

Investigators in the Netherlands studied vascular surgery patients identified from a computerized hospital information system, selecting 2,310 patients who underwent 2,758 noncardiac vascular surgeries over a 10-year period in the 1990s. Clinical risk factors, data on noninvasive stress testing, and long-term medication use, including statins, beta-blockers, calcium channel blockers, diuretics, insulin, nitrates, and aspirin, were abstracted. The outcome measure was all-cause mortality before discharge or within 30 days after surgery. The proposed model (see Figure) was based on modifications of the original Goldman Index, with the addition of more precise surgical risk stratification and of statin and beta-blocker use. The specific types of vascular surgery (carotid endarterectomy, infrainguinal bypass, abdominal aortic surgery, thoracoabdominal surgery, and repair of ruptured abdominal aortic aneurysm) carried systematically increasing risk, in the expected fashion. On univariate analysis in the derivation cohort (n = 1,537), most of the clinical predictors from the Goldman Index were associated with increased perioperative mortality, and similar conclusions held in multivariate logistic regression analysis. Surgical procedure risk, cardiovascular morbidity (ischemic heart disease, congestive heart failure, history of cerebrovascular events, and hypertension), renal dysfunction, and chronic pulmonary disease were independent predictors of increased all-cause perioperative mortality. In contrast, use of beta-blockers and statins was associated with a reduced incidence of perioperative mortality. The final model is a scoring system with points assigned according to the risk estimates of the individual predictors; beta-blocker and statin use are assigned negative scores because their use lowers risk. For example, a patient with ischemic heart disease and hypertension undergoing abdominal aortic surgery would have a score of 46, corresponding to a 14% probability of mortality; that risk would be reduced to about 4% (score of 31) by use of beta-blockers (−15). In a validation cohort of 773 patients from the same database, the prediction model performed nearly as well as in the derivation cohort, although hypertension was not an independent predictor in the validation cohort.
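A minimal sketch of the additive scoring idea follows. Only the beta-blocker adjustment (-15) and the worked example's total (46) come from the review above; the per-predictor point values below are hypothetical placeholders engineered to reproduce that example, and the published model supplies the actual points and the score-to-probability mapping.

    POINTS = {
        "abdominal_aortic_surgery": 28,   # hypothetical
        "ischemic_heart_disease": 13,     # hypothetical
        "hypertension": 5,                # hypothetical
        "beta_blocker": -15,              # protective, per the worked example
        "statin": -10,                    # hypothetical; protective
    }

    def risk_score(factors):
        # Sum signed points over a patient's predictors and medications.
        return sum(POINTS[f] for f in factors)

    base = risk_score(["abdominal_aortic_surgery", "ischemic_heart_disease",
                       "hypertension"])
    print(base)                           # 46 -> ~14% mortality in the example
    print(base + POINTS["beta_blocker"])  # 31 -> ~4% with beta-blockade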

This tool appears to provide robust risk assessment for vascular surgery patients. The inclusion of the estimated benefit of statin and beta-blocker use may allow a more accurate “net” risk assessment. Patients who are already on these 2 agents but are still deemed high risk can be so informed and may benefit from closer monitoring. Additional preoperative interventions, such as revascularization, may be considered if these high-risk patients have decompensated cardiac status.


Role of Computerized Physician Order Entry Systems in Facilitating Medication Errors

Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA. 2005;293:1197-1203.

Computerized Physician Order Entry (CPOE) has been touted as an effective means of reducing medical errors, especially medication errors. Preliminary studies showed both potential and actual error reductions with CPOE, but more recent data suggest that CPOE may facilitate errors as well.

Koppel et al. aimed to study CPOE system-related factors that may actually increase the risk of medication errors. The authors conducted structured interviews with end users (housestaff, pharmacists, nurses, nurse managers, and attending physicians); made real-time observations of end users as they interfaced with the system, entered orders, charted medications, and reviewed orders; and held focus groups with housestaff. These qualitative data helped generate a 71-question structured survey subsequently given to the housestaff. The questions pertained to working conditions, sources of stress, and errors. There were 261 responses, representing an 88% response rate.

Twenty-two previously unexplored potential medication error sources abstracted from the survey were grouped into 2 categories: 1) information errors, and 2) human-machine interface flaws. The first category refers to fragmented data and the disparate information systems within hospitals. The latter category includes rigid machine programming that does not correspond to or facilitate workflow. Only the 10 survey elements with sufficiently robust results were reported. About 40% of respondents used CPOE to determine the dosage of infrequently prescribed medications at least once a week. Incorrect doses may be ordered if users follow dosage information in the system, which is based on drug inventory rather than on clinical recommendations. Twenty-two percent of respondents noted that, more than once a week, duplicate or conflicting medications were ordered and went undetected for several hours; disorganized display of patient medications was believed to be partly responsible. More than 80% of respondents noted an unintended delay in renewing antibiotics at least once. Such gaps arose partly because renewal reminders lived in the paper chart while order entry took place on the computer. With respect to the human-machine interface, 55% reported difficulty identifying the correct patient because of poor or fragmented displays, and 23% reported this occurring more than a few times per week. System downtime leading to delayed order entry was reported by 47% to occur more than once a week. System inflexibility also led to difficulties in specifying medications and ordering nonformulary medications; this was reported by 31% to occur at least several times a week, and by 24% to occur daily or more frequently.

This was a survey of end users of a CPOE system at a single institution, and the survey elements were mainly estimates of error risks. Nevertheless, it appropriately draws attention to the importance of each institution’s unique culture, efficient workflow, and a coherent human-machine interface. The anticipated error reductions may not materialize if these issues are neglected. Hospitalists can play a critical role in implementing and customizing CPOE systems so that clinicians can do the right thing in a more timely and efficient manner.

Risk Stratification for In-hospital Mortality in Acutely Decompensated Heart Failure: Classification and Regression Tree Analysis

Fonarow GC, Adams KF, Abraham WT, Yancy CW, Boscardin WJ; ADHERE Scientific Advisory Committee, Study Group, and Investigators. Risk stratification for in-hospital mortality in acutely decompensated heart failure: classification and regression tree analysis. JAMA. 2005;293:572-80.

Heart failure is an important and growing cause of hospitalization in the United States, and it is one of the most common clinical entities encountered by hospitalists. While some risk-assessment tools are available for outpatients with heart failure, no risk-stratification tool had been published for inpatients. In this study, Fonarow et al. describe a simple risk-stratification formula for in-hospital mortality in patients with acutely decompensated heart failure. Data from the ADHERE registry (Acute Decompensated Heart Failure National Registry, which is industry sponsored, as was this study) were used to model the risk of in-hospital death using classification and regression tree (CART) analysis. This was done in a 2-stage process. First, the investigators established a derivation cohort of approximately 33,000 patients (sequential hospital admissions from October 2001 to February 2003) from the ADHERE registry and used the CART method to analyze 39 clinical variables to determine which were the best predictors of in-hospital mortality. This analysis was used to derive a risk tree partitioning patients into low-, intermediate-, and high-risk groups. Second, the validity of this method was tested by applying the prediction tool to a cohort of the subsequent 32,229 patients hospitalized in the ADHERE registry from March 2003 to July 2003. The results were striking. Baseline characteristics and clinical outcomes in the derivation and validation cohorts were similar across the wide range of parameters examined. Mortality ranged from 23.6% in the highest-risk group to 1.8% in the low-risk group; the intermediate group was stratified into 3 levels, with mortality of 20.0%, 5.0%, and 5.1% in levels 1, 2, and 3, respectively. Aside from this more than 10-fold range in mortality risk, the outstanding feature of the authors’ findings was that 3 simple parameters were the most significant predictors of in-hospital mortality: blood urea nitrogen (BUN), systolic blood pressure (SBP), and serum creatinine. Specifically, combinations of a BUN of 43 mg/dL or greater, a serum creatinine of 2.75 mg/dL or greater, and a systolic blood pressure of less than 115 mm Hg were associated with higher mortality, and adding other predictors did not meaningfully increase the model’s accuracy. The authors comment that, unlike other predictive models based on multivariate analyses (which are often complex and therefore difficult to employ at the bedside), this simple tool is easy to use. An additional advantage is that the needed data are typically available at the time of admission and can therefore inform a timely decision about triage to an appropriate level of care. Similar tools exist for risk stratification of patients with acute coronary syndromes, and given the frequency with which patients are admitted with acutely decompensated heart failure, this new tool should prove a welcome addition to hospitalists’ clinical decision-making.
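Because the final tree uses only three values available at admission, it is easy to encode. The sketch below is a minimal illustration assuming the branching order BUN → SBP → creatinine; the cut points and group mortality figures come from the summary above, but the exact published tree, including how the three intermediate levels are assigned, may branch differently.

```python
def adhere_risk_group(bun_mg_dl: float, sbp_mm_hg: float,
                      creatinine_mg_dl: float) -> str:
    """Classify an acutely decompensated heart failure admission by
    in-hospital mortality risk using the three ADHERE predictors.

    Cut points are from the summary above; the branching order and the
    collapsing of the intermediate levels are illustrative assumptions.
    """
    if bun_mg_dl < 43:
        return "low risk (~1.8% in-hospital mortality)"
    if sbp_mm_hg >= 115:
        return "intermediate risk (~5% mortality)"
    if creatinine_mg_dl < 2.75:
        return "intermediate risk, higher level (~20% mortality)"
    return "high risk (~23.6% mortality)"

# Example: elevated BUN and creatinine with low systolic blood pressure
print(adhere_risk_group(bun_mg_dl=58, sbp_mm_hg=102, creatinine_mg_dl=3.1))
```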


Risk of Endocarditis among Patients with Prosthetic Valves and Staphylococcus aureus Bacteremia

El-Ahdab F, Benjamin DK, Wang A, et al. Risk of endocarditis among patients with prosthetic valves and Staphylococcus aureus bacteremia. Am J Med. 2005;118:225-9.

With more than 600,000 prosthetic valves implanted annually in the United States, endocarditis among patients with prosthetic valves and Staphylococcus aureus bacteremia is a growing concern. A prospective study at Duke University identified 51 patients with prosthetic valves or mitral rings who developed S. aureus bacteremia. The modified Duke criteria were used for the diagnosis of endocarditis, and the onset and sources of bacteremia, the setting in which bacteremia was acquired, and clinical outcomes were analyzed. The overall incidence of definite prosthetic valve endocarditis was 51%, with the remaining 49% of patients meeting Duke criteria for possible endocarditis. Endocarditis occurred in 62% of mitral and 48% of aortic prostheses, with a somewhat lower rate (33%) for mitral rings. Among prostheses, mechanical and bioprosthetic valves had endocarditis rates of 62% and 44%, respectively. About 63% of patients had early-onset bacteremia (<1 year after valve placement), and 37% had late-onset bacteremia (>1 year after valve placement). Overall, the most common source of bacteremia was an infected surgical wound site (33%). Early bacteremia was more likely to result from an infected surgical wound site (59%), while late bacteremia was more likely to have an unidentified source (48%). A plurality of bacteremia episodes (47%) were hospital-acquired (i.e., the positive blood culture occurred >72 hours after admission); healthcare-associated and community-acquired bacteremia each accounted for about 26%–27%.

Mortality did not differ between patients with early- versus late-onset S. aureus bacteremia, between bioprosthetic and mechanical valves, or between infection due to methicillin-resistant versus methicillin-susceptible S. aureus. However, mortality was higher among patients with definite endocarditis (62%) than among those with possible endocarditis (28%). Patients with endocarditis who underwent valve surgery had lower mortality than those who could not undergo valve surgery because of inoperable comorbid conditions, such as stroke, multiorgan system failure, and mediastinitis. Persistent fever (≥38°C after 72 hours of adequate parenteral antibiotics) and persistent bacteremia (a positive blood culture 2–4 days after the initial positive blood culture) were independently associated with definite endocarditis, with odds ratios of 4.4 and 11.7, respectively. Overall, 96% of patients underwent echocardiography (55% with both transesophageal and transthoracic echocardiography, 14% transesophageal only, and 27% transthoracic only). However, 10% of patients with definite endocarditis had no diagnostic finding on either modality.
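A minimal sketch of how these two bedside predictors might be encoded as red flags during follow-up of a prosthetic valve patient with S. aureus bacteremia: the thresholds and odds ratios come from the study definitions quoted above, while the function and parameter names are purely illustrative.

```python
def endocarditis_red_flags(max_temp_c_after_72h_abx: float,
                           repeat_culture_positive_day_2_to_4: bool) -> list:
    """Flag the two findings independently associated with definite
    prosthetic valve endocarditis in this cohort.

    Thresholds follow the study definitions; names are illustrative.
    """
    flags = []
    # Persistent fever: >= 38 deg C after 72 h of adequate parenteral antibiotics
    if max_temp_c_after_72h_abx >= 38.0:
        flags.append("persistent fever (odds ratio 4.4)")
    # Persistent bacteremia: positive culture 2-4 days after the initial positive
    if repeat_culture_positive_day_2_to_4:
        flags.append("persistent bacteremia (odds ratio 11.7)")
    return flags

print(endocarditis_red_flags(38.4, True))
```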

S. aureus bacteremia is common in inpatient settings. This study demonstrated an approximately 50% rate of definite prosthetic valve endocarditis in patients with S. aureus bacteremia, a risk that was independent of valve type, location, and duration of implantation. These findings highlight the need for aggressive evaluation and treatment of S. aureus bacteremia in patients with prosthetic valves. Clinically, persistent fever and persistent bacteremia were independently associated with definite endocarditis in this population, and clinicians should not rely solely on transesophageal echocardiography to exclude occult endocarditis in high-risk patients.

Issue
The Hospitalist - 2005(07)