Epidemiology, Consequences of Non-Leg VTE in Critically Ill Patients
Clinical question: Which risk factors are key in the development of nonleg deep vein thromboses (NLDVTs) and what are the expected clinical sequelae from these events?
Background: Critically ill patients are at increased risk of venous thrombosis. Despite adherence to recommended daily thromboprophylaxis, many patients will develop a venous thrombosis in a vein other than the lower extremity. The association between NLDVT and pulmonary embolism (PE) or death is less clearly identified.
Study design: A prospective cohort study nested within the PROphylaxis for ThromboEmbolism in Critical Care Trial (PROTECT), a multicenter, randomized, blinded, and concealed trial conducted between May 2006 and June 2010.
Setting: Sixty-seven international secondary and tertiary care ICUs in both academic and community settings.
Synopsis: Researchers enrolled 3,746 ICU patients in a randomized controlled trial of dalteparin vs. standard heparin for thromboprophylaxis. Of these patients, 84 (2.2%) developed an NLDVT. These thromboses were more likely to be deep and located proximally.
Risk factors were assessed using five selected variables: APACHE (Acute Physiology and Chronic Health Evaluation) score, BMI, malignancy, vasopressor use, and statin use. Outside of indwelling upper extremity central venous catheters, cancer was the only independent predictor of NLDVT.
Compared to patients without any VTE, those with NLDVT were more likely to develop PE (14.9% vs. 1.9%) and had longer ICU stays (19 vs. nine days). On average, one in seven patients with NLDVT developed PE during their hospital stay. Despite the association with PE, NLDVT was not associated with increased ICU mortality in an adjusted model.
However, the PROTECT trial may have been underpowered to detect a difference. Additional limitations of the study included a relatively small total number of NLDVTs and a lack of standardized screening protocols for both NLDVT and PE.
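For readers who want the comparison as a single number, the reported percentages can be restated as a crude risk ratio. The short sketch below does only that arithmetic; it uses the group-level percentages quoted above and assumes no patient counts beyond them.

```python
# Crude comparison of PE risk in patients with NLDVT vs. patients with no VTE,
# using only the percentages reported in the synopsis (no raw counts assumed).
pe_risk_nldvt = 0.149   # 14.9% of patients with NLDVT developed PE
pe_risk_no_vte = 0.019  # 1.9% of patients without any VTE developed PE

risk_ratio = pe_risk_nldvt / pe_risk_no_vte
print(f"Crude risk ratio for PE: {risk_ratio:.1f}")                         # ~7.8
print(f"Roughly 1 in {1 / pe_risk_nldvt:.0f} NLDVT patients developed PE")  # ~1 in 7
```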
Bottom line: Despite universal heparin thromboprophylaxis, many medical-surgical critically ill patients may develop NLDVT, placing them at higher risk for longer ICU stays and PE.
Citation: Lamontagne F, McIntyre L, Dodek P, et al. Nonleg venous thrombosis in critically ill adults: a nested prospective cohort study. JAMA Intern Med. 2014;174(5):689-696.
Model for End-Stage Liver Disease (MELD) May Help Determine Mortality Risk
Clinical question: How can a Model for End-Stage Liver Disease (MELD)-based model be updated and used to predict inpatient mortality rates in hospitalized cirrhotic patients with acute variceal bleeding (AVB)?
Background: AVB in cirrhosis continues to carry mortality rates as high as 20%. Risk prediction for individual patients is important to determine when a step-up in acuity of care is needed and to identify patients who would most benefit from preemptive treatments such as a transjugular intrahepatic portosystemic shunt. Many predictive models are available but are currently difficult to apply in the clinical setting.
Study design: Initial comparison data were collected prospectively from clinical records; the updated MELD model was then confirmed in validation cohort studies.
Setting: Prospective data were collected at the Hospital Clinic in Barcelona, Spain; validation of the recalibrated MELD model was completed in hospital settings in Canada and Spain.
Synopsis: Data were collected from 178 patients with cirrhosis and esophageal AVB who received standard therapy from 2007 to 2010. Esophageal bleeding was confirmed endoscopically. The primary endpoint was six-week bleeding-related mortality. Among all subjects studied, the six-week mortality rate was 16%. Models evaluated for validity included the Child-Pugh, the D’Amico and Augustin models, and the MELD score.
Each model was assessed via discrimination, calibration, and overall performance in mortality prediction. The MELD was identified as the best model in terms of discrimination and overall performance but was miscalibrated. The original validation cohort from the Hospital Clinic in Spain was utilized to update the MELD calibration via logistic regression. External validation was completed via cohort studies in Canada (N=240) and at Vall D’Hebron Hospital in Spain (N=221).
Using the updated model, the MELD score adds a predictive component in the setting of AVB that was not previously available. MELD values of 19 and higher predict mortality >20%, whereas MELD values lower than 11 predict mortality of 5%.
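To make the recalibration step concrete, the sketch below shows how a logistic model maps a MELD score to a predicted six-week mortality. The intercept and slope are hypothetical placeholders chosen only so the curve passes near the anchor points quoted above (roughly 5% at MELD 11 and 20% at MELD 19); they are not the coefficients reported in the paper.

```python
import math

# Hypothetical recalibrated mapping from MELD score to six-week mortality.
# INTERCEPT and SLOPE are NOT the paper's coefficients; they were picked so
# the curve passes near ~5% mortality at MELD 11 and ~20% at MELD 19.
INTERCEPT = -5.09
SLOPE = 0.195

def predicted_mortality(meld_score: float) -> float:
    """Logistic model: predicted probability of six-week bleeding-related death."""
    log_odds = INTERCEPT + SLOPE * meld_score
    return 1.0 / (1.0 + math.exp(-log_odds))

for meld in (8, 11, 15, 19, 25):
    print(f"MELD {meld:>2}: predicted six-week mortality {predicted_mortality(meld):.1%}")
```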
Bottom line: The updated MELD model may provide a more accurate method, based on prognostic predictions of mortality, to identify patients in whom more aggressive preemptive therapies are indicated.
Citation: Reverter E, Tandon P, Augustin S, et al. A MELD-based model to determine risk of mortality among patients with acute variceal bleeding. Gastroenterology. 2014;146(2):412-419.
Emergency Department Visits, Hospitalizations Due to Insulin
Clinical question: What is the national burden of ED visits and hospitalizations for insulin-related hypoglycemia?
Background: As the prevalence of diabetes mellitus continues to rise, the use of insulin and the burden of insulin-related hypoglycemia on our healthcare system will increase. By identifying high-risk populations and analyzing the circumstances of insulin-related hypoglycemia, we might be able to identify and employ strategies to decrease the risk of insulin use.
Study design: Observational study using national adverse drug surveillance database and national household survey.
Setting: U.S. hospitals, excluding psychiatric and penal institutions.
Synopsis: Using data from the National Electronic Injury Surveillance System–Cooperative Adverse Drug Event Surveillance (NEISS-CADES) Project and the National Health Interview Survey (NHIS), the authors estimated the rates and characteristics of ED visits and hospitalizations for insulin-related hypoglycemia. The authors estimated that about 100,000 ED visits for insulin-related hypoglycemia occur nationally each year and that almost one-third of those visits result in hospitalization. Compared to younger patients treated with insulin, patients 80 years or older were more likely to present to the ED (rate ratio, 2.5; 95% CI, 1.5-4.3) and much more likely to be subsequently hospitalized (rate ratio, 4.9; 95% CI, 2.6-9.1) for insulin-related hypoglycemia.
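The age comparison above is expressed as rate ratios with 95% confidence intervals. As a rough illustration of how such an estimate is formed, the sketch below computes a rate ratio and a log-normal confidence interval from hypothetical event counts and denominators; the study itself used weighted national estimates from NEISS-CADES and NHIS, not raw counts like these.

```python
import math

# Rate ratio of ED visits for insulin-related hypoglycemia: patients aged >=80
# vs. younger insulin-treated patients. All counts and denominators below are
# hypothetical round numbers, not values from the study.
events_old, insulin_users_old = 500, 1_000_000
events_young, insulin_users_young = 2_000, 10_000_000

rate_old = events_old / insulin_users_old
rate_young = events_young / insulin_users_young
rate_ratio = rate_old / rate_young

# Standard log-normal approximation for the 95% confidence interval.
se_log_rr = math.sqrt(1 / events_old + 1 / events_young)
lower = math.exp(math.log(rate_ratio) - 1.96 * se_log_rr)
upper = math.exp(math.log(rate_ratio) + 1.96 * se_log_rr)

print(f"Rate ratio {rate_ratio:.1f} (95% CI, {lower:.1f}-{upper:.1f})")
```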
The most common causes of insulin-induced hypoglycemia were failure to reduce insulin during periods of reduced food intake and confusion between short-acting and long-acting insulin. The authors suggest that looser glycemic control be sought in elderly patients to decrease the risk of insulin-related hypoglycemia and subsequent sequelae. Patient education addressing common insulin errors might also decrease the burden of ED visits and hospitalizations related to insulin.
Bottom line: The risk of hypoglycemia in patients older than 80 should be considered before starting an insulin regimen or increasing the insulin dose.
Citation: Geller AI, Shehab N, Lovegrove MC, et al. National estimates of insulin-related hypoglycemia and errors leading to emergency department visits and hospitalizations. JAMA Intern Med. 2014;174(5):678-686.
Healthcare Worker Attire Recommendations
Clinical question: What are the perceptions of patients and healthcare personnel (HCP) regarding attire, and what evidence exists for contamination and transmission of pathogenic microorganisms by HCP attire?
Background: HCP attire is an important aspect of the healthcare profession. There is increasing concern for microorganism transmission in the hospital by fomites, including HCP apparel, and studies demonstrate contamination of HCP apparel; however, there is a lack of evidence demonstrating the role of HCP apparel in transmission of microorganisms to patients.
Study design: Literature and policy review, survey of Society for Healthcare Epidemiology of America (SHEA) members.
Setting: Literature search from January 2013 to March 2013 for articles related to bacterial contamination and laundering of HCP attire and patient and provider perceptions of HCP attire and/or footwear. Review of policies related to HCP attire from seven large teaching hospitals.
Synopsis: The search identified 26 articles that studied patients’ perceptions of HCP attire and only four studies that reviewed HCP preferences relating to attire. There were 11 small prospective studies related to pathogen contamination of HCP apparel but no clinical studies demonstrating transmission of pathogens from HCP attire to patients. There was one report of a pathogen outbreak potentially related to HCP apparel.
Hospital policies primarily related to general appearance and dress for all employees without significant specifications for HCP outside of sterile or procedure-based areas. One institution recommended bare below the elbows (BBE) attire for physicians during patient care activities.
There were 337 responses (21.7% response rate) to the survey, which showed poor enforcement of HCP attire policies, but a majority of respondents felt that the role of HCP attire in the transmission of pathogens in the healthcare setting was very important or somewhat important.
Patients preferred formal attire, including a white coat, but this preference had limited impact on patient satisfaction or confidence in practitioners. Patients did not perceive HCP attire as an infection risk but were willing to change their preference for formal attire when informed of this potential risk.
BBE policies are in effect at some U.S. hospitals and in the United Kingdom, but the effect on healthcare-associated infection rates and transmission of pathogens to patients is unknown.
Bottom line: Contamination of HCP attire with healthcare-associated pathogens occurs, but no clinical data currently exist on transmission of these pathogens to patients or the resulting impact on the healthcare system. Patient satisfaction and confidence are not affected by less formal attire, particularly when patients are informed of potential infection risks.
Citation: Bearman G, Bryant K, Leekha S, et al. Healthcare personnel attire in non-operating-room settings. Infect Control Hosp Epidemiol. 2014;35(2):107-121.
Prediction Tool for Readmissions Due to End-of-Life Care
Clinical question: What are the risk factors associated with potentially avoidable readmissions (PARs) for end-of-life care issues?
Background: The 6% of Medicare beneficiaries who die each year account for 30% of yearly Medicare expenditures on medical treatments, with repeated hospitalizations a frequent occurrence at the end of life. There are many opportunities to improve the care of patients at the end of life.
Study design: Nested case-control.
Setting: Academic, tertiary-care medical center.
Synopsis: There were 10,275 eligible admissions to Brigham and Women’s Hospital in Boston from July 1, 2009 to June 30, 2010, with a length of stay less than one day. There were 2,301 readmissions within 30 days of the index hospitalization, of which 826 were considered potentially avoidable. From a random sample of 594 of these patients, 80 patients had PAR related to end-of-life care issues. There were 7,974 patients who were not admitted within 30 days of index admission (controls). The primary study outcome was any 30-day PAR due to end-of-life care issues. A readmission was considered a PAR if it related to previously known conditions from the index hospitalization or was due to a complication of treatment.
The four factors that were significantly associated with 30-day PAR for end-of-life care issues were: neoplasm (OR 5.6, 95% CI: 2.85-11.0), opiate medication at discharge (OR 2.29, 95% CI: 1.29-4.07), Elixhauser comorbidity index, per five-unit increase (OR 1.16, 95% CI: 1.10-1.22), and number of admissions in previous 12 months (OR 1.10, 95% CI: 1.02-1.20). The model that included all four variables had excellent discrimination power, with a C-statistic of 0.85.
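Because the model is a logistic regression, each odds ratio corresponds to a coefficient on the log-odds scale, and a patient's predicted risk follows from summing those terms. The sketch below illustrates that arithmetic using the published odds ratios; the intercept is a made-up placeholder, so the printed probability is illustrative only and not a value from the paper.

```python
import math

# Published odds ratios from the four-variable model.
OR_NEOPLASM = 5.6
OR_OPIATE_AT_DISCHARGE = 2.29
OR_ELIXHAUSER_PER_5_UNITS = 1.16
OR_PER_PRIOR_ADMISSION = 1.10

# Hypothetical intercept (baseline log-odds); NOT a value from the paper.
INTERCEPT = -5.0

def predicted_par_risk(neoplasm: bool, opiate_at_discharge: bool,
                       elixhauser_index: float, prior_admissions: int) -> float:
    """Predicted probability of a 30-day PAR for end-of-life care issues."""
    log_odds = (
        INTERCEPT
        + math.log(OR_NEOPLASM) * int(neoplasm)
        + math.log(OR_OPIATE_AT_DISCHARGE) * int(opiate_at_discharge)
        + math.log(OR_ELIXHAUSER_PER_5_UNITS) * (elixhauser_index / 5)
        + math.log(OR_PER_PRIOR_ADMISSION) * prior_admissions
    )
    return 1.0 / (1.0 + math.exp(-log_odds))

# Example: cancer, opiates at discharge, Elixhauser index 15, two prior admissions.
print(f"Predicted 30-day PAR risk: {predicted_par_risk(True, True, 15, 2):.1%}")
```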
Bottom line: The factors from this prediction model can be used, formally or informally, to identify those patients at higher risk for readmission for end-of-life care issues and prioritize resources to help minimize this risk.
Citation: Donzé J, Lipsitz S, Schnipper JL. Risk factors for potentially avoidable readmissions due to end-of-life care issues. J Hosp Med. 2014;9(5):310-314.
Colonic Malignancy Risk Appears Low After Uncomplicated Diverticulitis
Clinical question: What is the benefit of routine colonic evaluation after an episode of acute diverticulitis?
Background: Currently accepted guidelines recommend routine colonic evaluation (colonoscopy or computed tomography [CT] colonography) after an episode of acute diverticulitis to confirm the diagnosis and exclude malignancy. Increased use of CT to confirm the diagnosis of acute diverticulitis and exclude associated complications has brought into question the recommendation for routine colonic evaluation after an episode of acute diverticulitis.
Study design: Meta-analysis.
Setting: Search of online databases and the Cochrane Library.
Synopsis: Eleven studies from seven countries included 1,970 patients who had a colonic evaluation after an episode of acute diverticulitis. The risk of finding a malignancy was 1.6%. Within this population, 1,497 patients were identified as having uncomplicated diverticulitis. Cancer was found in only five patients (proportional risk estimate 0.7%).
For the 79 patients identified as having complicated diverticulitis, the risk of finding a malignancy on subsequent screening was 10.8%.
Every systematic review is limited by the quality of the studies available for review and the differences in design and methodology of the studies. In this meta-analysis, the risk of finding cancer after an episode of uncomplicated diverticulitis appears to be low. Given the limited resources of the healthcare system and the small but real risk of morbidity and mortality associated with invasive colonic procedures, the routine recommendation for colon cancer screening after an episode of acute uncomplicated diverticulitis should be further evaluated.
Bottom line: The risk of malignancy after a radiologically proven episode of acute uncomplicated diverticulitis is low. In the absence of other indications, additional routine colonic evaluation may not be necessary.
Citation: Sharma PV, Eglinton T, Hider P, Frizelle F. Systematic review and meta-analysis of the role of routine colonic evaluation after radiologically confirmed acute diverticulitis. Ann Surg. 2014;259(2):263-272.
Physician Burnout Reduced with Intervention Groups
Clinical question: Does an intervention involving a facilitated physician small group result in improvement in well-being and reduction in burnout?
Background: Burnout affects nearly half of medical students, residents, and practicing physicians in the U.S.; however, very few interventions have been tested to address this problem.
Study design: Randomized controlled trial (RCT).
Setting: Department of Medicine at the Mayo Clinic, Rochester, Minn.
Synopsis: Practicing physicians were randomly assigned to a facilitated, small-group intervention curriculum for one hour every two weeks (N=37) or to a control arm with unstructured, protected time for one hour every two weeks (N=37). A non-trial cohort of 350 practicing physicians was surveyed annually. This study showed a significant increase in empowerment and engagement at three months that was sustained for 12 months, and a significant decrease in high depersonalization scores at both three and 12 months in the intervention group. There were no significant differences in stress, depression, quality of life, or job satisfaction.
Compared to the non-trial cohort, depersonalization, emotional exhaustion, and overall burnout decreased substantially in the intervention arm and slightly in the control arm.
Sample size was small and results may not be generalizable. Topics covered included reflection, self-awareness, and mindfulness, with a combination of community building and skill acquisition to promote connectedness and meaning in work. It is not clear which elements of the curriculum were most effective.
Bottom line: A facilitated, small-group intervention with institution-provided protected time can improve physician empowerment and engagement and reduce depersonalization, an important component of burnout.
Citation: West CP, Dyrbye LN, Rabatin JT, et al. Intervention to promote physician well-being, job satisfaction, and professionalism: a randomized clinical trial. JAMA Intern Med. 2014;174(4):527-533.
In the Literature
In This Edition
Literature at a Glance
A guide to this month’s studies
- Stress testing in young patients with chest pain.
- Family presence during CPR.
- Achieving rate control in rapid atrial fibrillation.
- D-dimer in aortic dissection.
- Acute kidney injury outcomes.
- Statins after stroke.
- Low-dose steroids in septic shock.
- Genetic testing for VTE.
1) Utility of Cardiac Stress Testing Is Limited for Young Patients with Chest Pain
Clinical question: Does routine, provocative cardiac testing in low-risk adult patients younger than 40 years of age add to the diagnostic evaluation for acute coronary syndrome?
Background: In EDs, aggressive evaluation of chest pain is the standard of care due to the high morbidity, mortality, and liability associated with acute coronary syndrome (ACS). Guidelines recommend provocative cardiac testing for all patients in whom ACS is suspected, yet the prevalence of ACS is low in patients younger than 40.
Study design: Retrospective observational study.
Setting: ED chest pain observation unit of an urban academic tertiary-care center in New York City.
Synopsis: Two hundred twenty patients between 22 and 39 years old admitted for ACS evaluation between March 2004 and September 2007 were eligible. Patients with known coronary artery disease, diagnostic ECG findings, or evidence of cocaine use were excluded. Serial cardiac biomarker testing was performed to rule out myocardial infarction, followed by provocative cardiac testing for myocardial ischemia.
Six patients had positive stress tests. Four underwent subsequent coronary angiography, which demonstrated no evidence of obstructive coronary disease; one refused catheterization, and the other was lost to follow-up. Age younger than 40 years, a nondiagnostic or normal ECG, and two sets of negative cardiac biomarker results at least six hours apart identified a patient group with a low rate of true-positive provocative testing.
This study is limited by its retrospective, single-center design; it could not include patients admitted to the hospital or those who left the chest-pain unit without provocative testing or against medical advice. The possibility of false-negative provocative testing results was not excluded, and the methods of provocative testing were limited to those available before 2007.
Bottom line: Cardiac stress testing adds little to the diagnostic evaluation of patients younger than 40 years with a nondiagnostic ECG and negative serial biomarker results. However, routine provocative testing is unlikely to decrease until better clinical risk-stratification tools exist for this very-low-prevalence population.
Citation: Hermann LK, Weingart SD, Duvall WL, Henzlova MJ. The limited utility of routine cardiac stress testing in emergency department chest pain patients younger than 40 years. Ann Emerg Med. 2009;54(1):12-16.
2) Family Witness Behavior Impacts Physician Performance during CPR
Clinical question: Does the presence or behavior of a family witness to cardiopulmonary resuscitation (CPR) alter the critical actions performed by physicians?
Background: Because few patients undergoing in-hospital CPR survive to hospital discharge, many hospitals allow at least one family member of the dying patient to be present during the resuscitation attempt. There is little evidence concerning bereavement outcomes for the family witness or the effect of family presence on the resuscitation environment and physician performance.
Study design: Randomized controlled study.
Setting: Human patient simulator-based medical resuscitation environment with standardized actors in an academic medical center.
Synopsis: Sixty second- and third-year emergency medicine residents were randomized in pairs and assigned to one of three groups: no family witness, a non-obstructive witness, or a witness displaying overt grief reactions. Trained actors played the roles of social worker and family member. All groups were joined by the social worker and participated in identical cardiac-arrest-code scenarios ending in asystole. The nurse in each group was scripted to make a potentially harmful medication error. Outcomes studied were physician-performance-based, such as length of the resuscitation attempt, time to critical events, and recognition of a potential drug administration error.
The overt grief reaction group showed delayed initiation of defibrillator shocks and delivered fewer shocks. Failure to recognize the medication error occurred only once, and it was in the control group. No other significant differences were observed between groups.
Limitations of this study include its small size and the possibility that physician behavior in simulated environments may not reflect behavior in real patient-care situations.
Bottom line: Overt grief reactions from family members witnessing CPR attempts might adversely affect important procedural events and decisions made by physicians, specifically the use of defibrillation, and could thereby worsen CPR outcomes.
Citation: Fernandez R, Compton S, Jones KA, Velilla MA. The presence of a family witness impacts physician performance during simulated medical codes. Crit Care Med. 2009;37(6):1956-1960.
3) Diltiazem Is a Better Choice in Uncomplicated Atrial Fibrillation than Amiodarone or Digoxin
Clinical question: How does IV diltiazem compare to IV amiodarone or digoxin in achieving ventricular rate control in patients hospitalized for acute uncomplicated atrial fibrillation (AF)?
Background: Current guidelines for acute AF management are based on expert opinion and recommend calcium antagonists, beta-blockers, or digoxin for initial ventricular rate control in hemodynamically stable patients. These broad recommendations lead to wide variation in practice during the first 24 hours of presentation.
Study design: Randomized, open-label trial.
Setting: Single-center study in Hong Kong.
Synopsis: One hundred fifty patients presenting with acute symptomatic AF of <48 hours' duration with a rapid ventricular rate were enrolled. The study endpoint was ventricular rate control within the first 24 hours, defined as a sustained heart rate of <90 beats per minute for at least four hours. The time to ventricular rate control in the diltiazem group (three hours) was significantly shorter than in the digoxin group (six hours) or the amiodarone group (seven hours). The percentage of patients who achieved ventricular rate control was 90% in the diltiazem group, compared with 74% in both the digoxin and amiodarone groups. Length of stay was shorter in the diltiazem group (3.9 days) than in the digoxin group (4.7 days) and the amiodarone group (4.7 days).
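For a rough sense of effect size, the difference in rate-control rates can be expressed as a number needed to treat; this is back-of-the-envelope arithmetic from the reported percentages, not a figure from the trial:
\[
ARR = 90\% - 74\% = 16\%, \qquad NNT \approx \frac{1}{0.16} \approx 6
\]
That is, roughly one additional patient achieves rate control within 24 hours for every six treated with diltiazem rather than digoxin or amiodarone.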
Major limitations of the study were the lack of beta-blockers as an option for rate control and the exclusion of patients with hemodynamic instability, heart failure, and myocardial infarction. As patients with underlying heart disease were excluded, these results cannot be applied to all patients presenting with acute AF.
Bottom line: Compared with digoxin and amiodarone, intravenous diltiazem is safe and effective in achieving ventricular rate control, improving symptoms, and reducing length of hospital stay in acute uncomplicated AF.
Citation: Siu CW, Lau CP, Lee WL, Lam KF, Tse HF. Intravenous diltiazem is superior to intravenous amiodarone or digoxin for achieving ventricular rate control in patients with acute uncomplicated atrial fibrillation. Crit Care Med. 2009;37(7):2174-2179.
4) D-dimer Might Be a Reliable Assay to Determine Likelihood of Acute Aortic Dissection
Clinical question: Is the D-dimer assay beneficial in the evaluation of acute aortic dissection (AD)?
Background: Aortic dissection is a potentially lethal disorder that is included in the differential diagnosis of chest pain. Before this study, use of the D-dimer assay to exclude or predict AD had not been specifically examined, although D-dimer has proven useful in ruling out pulmonary embolism (PE) and DVT.
Study design: Prospective, multicenter observational study.
Setting: Fourteen centers in the U.S., Europe, and Japan.
Synopsis: Of 220 patients enrolled in the study, 87 had radiologically proven AD, and 133 had an initial suspicion of AD but a different final diagnosis. D-dimer assay was obtained on patients with a suspicion of AD within 24 hours of symptom onset. Additionally, appropriate imaging was performed on all patients to identify AD presence.
D-dimer was found to be a useful “rule out” test. At a cutoff level of 500 ng/mL, the negative likelihood ratio was 0.07 (<0.1 being suggestive of a good rule-out tool) and the negative predictive value was >90%. D-dimer was not shown to be as useful to predict the presence of AD in this study.
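A negative likelihood ratio can be translated into a post-test probability with Bayes' rule; the worked example below uses an assumed pretest probability of 10%, which is purely illustrative and not a figure from the study:
\[
LR^{-} = \frac{1 - \text{sensitivity}}{\text{specificity}}, \qquad \text{post-test odds} = \text{pretest odds} \times LR^{-}
\]
\[
\frac{0.10}{0.90} \times 0.07 \approx 0.008 \;\Rightarrow\; \text{post-test probability} \approx 0.8\%
\]
Under that assumption, a negative D-dimer would lower the probability of dissection from 10% to well under 1%, which is the sense in which the assay performs as a rule-out test.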
A major limitation of the study was a relatively small sample size, especially when subgroups were analyzed, which reduces the precision of its estimates. Although this study shows promise for the D-dimer assay in the evaluation of suspected AD, it does not establish D-dimer as a reliable enough test to rule out AD without further imaging or evaluation.
Bottom line: Although this study demonstrated a high negative predictive value for D-dimer in the evaluation of AD, physicians should not rely on a negative D-dimer alone to exclude a suspected acute aortic dissection.
Citation: Suzuki T, Distante A, Zizza A, et al. Diagnosis of acute aortic dissection by D-dimer: the International Registry of Acute Aortic Dissection Substudy on Biomarkers (IRAD-bio) experience. Circulation. 2009;119(20):2702-2707.
5) Acute Kidney Injury Predicts Outcomes of Non-Critically-Ill Patients
Clinical question: Does acute kidney injury affect in-hospital mortality, need for renal replacement therapy, and length of stay in patients who are not critically ill?
Background: Using the Acute Kidney Injury Network’s definition of acute kidney injury (AKI), which includes an abrupt increase in serum creatinine of 0.3 mg/dL or more, the authors previously showed an association with poor outcomes in critically ill patients. There is less evidence as to whether it predicts outcomes in non-critically-ill patients.
Study design: Retrospective cohort study with a matched case-control analysis.
Setting: Bridgeport (Conn.) Hospital, a 350-bed community teaching hospital affiliated with Yale New Haven Health System.
Synopsis: Seven hundred thirty-five patients admitted to a medical unit who developed AKI, defined as a serum creatinine increase of 0.3 mg/dL or more within a 48-hour period, were compared with 5,089 controls. Patients who were admitted to critical care or who received renal replacement therapy (RRT) within the first 48 hours were excluded. AKI patients had higher in-hospital mortality (14.8% versus 1.5% of controls), longer hospital stays (7.9 versus 3.7 days), higher rates of transfer to the ICU (28.6% versus 4.3%), and greater need for RRT (73 versus zero patients). All findings were statistically significant (P<0.001). Some patients were omitted because of inadequate data collection.
Two hundred eighty-two patients were randomly selected from each group and matched based on age and demographic characteristics to perform a case-control study. AKI patients were eight times more likely to die in the hospital and five times more likely to have prolonged lengths of stay and transfer to the ICU.
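As rough, unadjusted arithmetic (not the authors' matched analysis), the crude cohort figures correspond to an approximately tenfold difference in in-hospital mortality:
\[
RR \approx \frac{14.8\%}{1.5\%} \approx 9.9
\]
The matched case-control estimate of roughly eightfold higher odds of death is the more appropriate figure, because it accounts for age and demographic differences between groups.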
This study does not differentiate between types of AKI and does not show a causal relationship.
Bottom line: AKI, defined as a serum creatinine increase of 0.3 mg/dL or more within a 48-hour period, predicts worse outcomes in non-critically-ill patients.
Citation: Barrantes F, Feng Y, Ivanov O, et al. Acute kidney injury predicts outcomes of non-critically ill patients. Mayo Clin Proc. 2009;84(5):410-416.
6) Statin Therapy after First Stroke Reduces Recurrence and Improves Survival
Clinical question: Does statin therapy after first ischemic stroke reduce recurrence and long-term mortality?
Background: Statin treatment has been shown to reduce primary stroke incidence, but there is less evidence on secondary prevention.
Study design: Retrospective observational study.
Setting: Acute stroke, general medicine, and neurology units at the hospitals affiliated with the University of Ioannina School of Medicine in Ioannina, Greece.
Synopsis: Seven hundred ninety-four primary ischemic stroke patients were admitted and followed for a 10-year period. Of those, 596 patients were discharged without a statin; 198 patients were discharged on statin therapy. One hundred twelve, or 14.1%, of the 794 patients had a recurrent event. The recurrence rate was 16.3% among those who did not receive a statin versus 7.6% among those who did receive a statin post-discharge (P=0.002).
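As a back-of-the-envelope calculation from the reported recurrence rates (an illustration only, since treatment was not randomized and confounding by indication is possible):
\[
ARR = 16.3\% - 7.6\% = 8.7\%, \qquad NNT \approx \frac{1}{0.087} \approx 12
\]
In other words, the crude rates suggest roughly one recurrence avoided for every 12 patients discharged on a statin, if the association were causal.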
Bottom line: Post-discharge statin therapy after an initial stroke appears to reduce the risk of recurrence.
Citation: Milionis HJ, Giannopoulos S, Kosmidou M, et al. Statin therapy after first stroke reduces 10-year stroke recurrence and improves survival. Neurology. 2009;72(21):1816-1822.
7) Low-Dose Corticosteroids Provide Modest Benefit to Patients with Vasopressor-Dependent Sepsis and Septic Shock
Clinical question: What are the risks and benefits of corticosteroid treatment in severe sepsis and septic shock?
Background: For more than 50 years, corticosteroids have been used as an adjuvant treatment for sepsis with conflicting benefits on mortality.
Study design: Meta-analysis of randomized and quasi-randomized controlled trials identified through a literature search of the Cochrane Library, MEDLINE, EMBASE, and LILACS.
Synopsis: Seventeen randomized controlled trials and three quasi-randomized controlled trials of 3,384 patients were selected for statistical analysis. Overall, corticosteroids did not improve 28-day, all-cause mortality in severe sepsis and septic shock. There was a statistically significant reduction in 28-day mortality only for the subgroup of patients receiving prolonged low-dose steroid treatment (37.5% vs. 44% in the control group).
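In absolute terms for that subgroup, the pooled rates correspond to the following rough calculation (not a figure reported in this summary):
\[
ARR = 44\% - 37.5\% = 6.5\%, \qquad NNT \approx \frac{1}{0.065} \approx 15
\]
That is, roughly one death would be averted at 28 days for every 15 patients treated with prolonged low-dose corticosteroids, assuming the subgroup effect is real.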
There was no increased risk of gastrointestinal hemorrhage, superinfection, or neuromuscular weakness seen in treated patients.
The trials differed in the types of corticosteroid used, the time to institution of therapy, bolus versus continuous administration, duration of therapy, and abrupt versus gradual interruption of treatment. All of these factors make the clinical application of the data challenging.
Bottom line: Many questions remain about the optimal dose, timing, and duration of corticosteroids in patients with vasopressor-dependent sepsis and septic shock, but there appears to be a modest mortality benefit with prolonged low-dose corticosteroid therapy.
Citation: Annane D, Bellissant E, Bollaert P, et al. Corticosteroids in the treatment of severe sepsis and septic shock in adults: a systematic review. JAMA. 2009;301(22):2362-2375.
8) Testing for FVL Mutation but Not 20210A Predicts Recurrent VTE Risk
Clinical question: Are the rates of recurrent VTE higher in adults with VTE who possess either the factor V Leiden (FVL) or Prothrombin G20210A mutation, and what are the rates of VTE among family members?
Background: Clinicians commonly test for genetic mutations when treating patients who have had a thrombotic event. However, the utility of such tests for predicting the risk of recurrent events, and their effect on patient outcomes, warrants review.
Study design: Meta-analysis.
Setting: Literature search of MEDLINE, EMBASE, the Cochrane Library, CINAHL, and PsycInfo.
Synopsis: Forty-six articles were selected for statistical analysis. Individuals with a homozygous or heterozygous FVL mutation had a higher risk of recurrent VTE than individuals without the mutation (OR 2.65 and 1.56, respectively).
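An odds ratio translates into a relative risk only once a baseline risk is specified. For illustration, assuming a baseline recurrence risk of 10% in non-carriers (an assumption, not a figure from the review):
\[
RR = \frac{OR}{1 - p_0 + p_0 \times OR} = \frac{2.65}{0.90 + 0.10 \times 2.65} \approx 2.3
\]
Under that assumption, a homozygous carrier's recurrence risk would be roughly 23% rather than 10%.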
Family members of adults with a homozygous or heterozygous FVL mutation were also more likely to have VTE than family members of adults without the mutation (OR 18 and 3.5, respectively).
The presence of G20210A is not predictive of recurrent VTE compared with individuals without this mutation. There is not sufficient evidence regarding the predictive value of the G20210A mutation on the risk of VTE in family members.
No studies directly address the effect of testing on outcomes other than recurrent VTE. In family members who were tested, there did not seem to be any impact on daily behavior, recognition of VTE risk factors, or perceived stress from testing.
Bottom line: FVL mutation increases the risk of recurrent VTE and predicts VTE in family members. The benefits of testing family members remain unclear.
Citation: Segal J, Brotman D, Necochea A, et al. Predictive value of factor V Leiden and prothrombin G20210A in adults with venous thromboembolism and in family members of those with a mutation: a systematic review. JAMA. 2009;301(23):2472-2485. TH