Patient Safety in Transitions of Care: Addressing Discharge Communication Gaps and the Potential of the Teach-Back Method

Study 1 Overview (Trivedi et al)
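
Trivedi SP, Corderman S, Berlinberg E, et al. Assessment of patient education delivered at time of hospital discharge. JAMA Intern Med. 2023;183(5):417-423. doi:10.1001/jamainternmed.2023.0070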

Objective: This observational quality improvement study aimed to evaluate the discharge communication practices in internal medicine services at 2 urban academic teaching hospitals, specifically focusing on patient education and counseling in 6 key discharge communication domains.

Design: Observations were conducted over a 13-month period from September 2018 through October 2019, following the Standards for Quality Improvement Reporting Excellence (SQUIRE) guidelines.

Setting and participants: The study involved a total of 33 English- and Spanish-speaking patients purposively selected from the “discharge before noon” list at 2 urban tertiary-care teaching hospitals. A total of 155 observation hours were accumulated, with an average observation time of 4.7 hours per patient on the day of discharge.

Main outcome measures: The study assessed 6 discharge communication domains: (1) the name and function of medication changes, (2) the purpose of postdischarge appointments, (3) disease self-management, (4) red flags or warning signs for complications, (5) teach-back techniques to confirm patient understanding, and (6) staff solicitation of patient questions or concerns.

Main results: The study found several gaps in discharge communication practices. Among the 29 patients with medication changes, 28% were not informed of the name and basic function of the changes, and 59% did not receive counseling on the purpose of the medication change. Nearly half (48%) of patients were not told the purpose of their postdischarge appointments. Moreover, 54% of patients did not receive counseling on self-management of their primary discharge diagnosis or other diagnoses, and 73% were not informed about symptom expectations or the expected course of their illness after leaving the hospital. Most patients (82%) were not counseled on red-flag signs and symptoms that should prompt an immediate return to care.

Teach-back techniques, which are critical for ensuring patient understanding, were used in only 3% of cases, and 85% of patients were not asked by health care providers whether there might be barriers to following the care plan. Fewer than half (42%) of patients were asked if they had any questions, and most questions raised were logistical and often deferred to another team member or met with uncertainty. Of note, only 2 of the 33 patients received extensive information covering 5 or all 6 of the discharge communication domains.

Responsibility for discharge education varied, with most domains communicated in an ad hoc manner and no clear pattern of who delivered which content. However, 2 exceptions were observed: nurses were more likely to provide information about new or changed medications and follow-up appointments, and the only instance of teach-back was conducted by an attending physician.

Conclusion: The study highlights a significant need for improved discharge techniques to enhance patient safety and quality of care upon leaving the hospital. Interventions should focus on increasing transparency in patient education and understanding, clarifying role assumptions among the interprofessional team, and implementing effective communication strategies and system redesigns that foster patient-centered discharge education. The study also revealed that some patients received more robust discharge education than others, indicating systemic inequality in the patient experience. Further studies are needed to develop and assess such interventions to ensure optimal patient outcomes and equitable care following hospital discharge.

Study 2 Overview (Marks et al)
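
Marks L, O’Sullivan L, Pytel K, et al. Using a teach-back intervention significantly improves knowledge, perceptions, and satisfaction of patients with nurses’ discharge medication education. Worldviews Evid Based Nurs. 2022;19(6):458-466. doi:10.1111/wvn.12612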

Objective: This study aimed to investigate the impact of a nurse-led discharge medication education program, Teaching Important Medication Effects (TIME), on patients’ new medication knowledge at discharge and 48 to 72 hours after discharge. The specific objectives were to identify patients’ priority learning needs, evaluate the influence of TIME on patients’ new medication knowledge before and after discharge, and assess the effect of TIME on patients’ experience and satisfaction with medication education.

Design: The study employed a longitudinal pretest/posttest, 2-group design involving 107 randomly selected medical-surgical patients from an academic hospital. Participants were interviewed after receiving medication instructions, both before discharge and again within 72 hours after discharge. Bivariate analyses were performed to assess differences in demographic and outcome variables between groups.

Setting and participants: Conducted on a 24-bed medical-surgical unit at a large Magnet® hospital over 18 months (2018-2019), the study included patients who had at least 1 new medication, were aged 18 years or older, were able to read and speak English or Spanish, were admitted from home with at least 1 overnight stay, and were planning to return home after discharge. Excluded were cognitively impaired patients, those assigned to a resource pool nurse without TIME training, and those who had a research team member assigned. Participants were randomly selected from a computerized list of patients scheduled for discharge.

Main outcome measures: Primary outcome measures included patients’ new medication knowledge before and after discharge and patients’ experience and satisfaction with medication education.

Main results: The usual care (n = 52) and TIME (n = 55) groups had similar baseline demographic characteristics. Almost all patients in both groups were aware of their new medication and its purpose at discharge. However, differences were observed in medication side effect responses: 72.5% of the usual care group knew side effects compared with 94.3% of the TIME group (P = .003). Additionally, 81.5% of the usual care group understood the medication purpose compared with 100% of the TIME group (P = .02). During the 48- to 72-hour postdischarge calls, responses from both groups were consistent regarding knowledge of new medication, medication name, and medication purpose. Similar to the discharge results, differences in medication side effect responses were observed, with 75.8% of the usual care group correctly identifying at least 1 medication side effect compared with 93.9% of the TIME group (P = .04). TIME was associated with higher satisfaction with medication education compared with usual care (97% vs. 46.9%, P < .001).

Conclusion: The nurse-led discharge medication education program TIME effectively enhanced patients’ new medication knowledge at discharge and 48 to 72 hours after discharge. The program also significantly improved patients’ experience and satisfaction with medication education. These findings indicate that TIME is a valuable tool for augmenting patient education and supporting medication adherence in the hospital setting. By incorporating the teach-back method, TIME offers a structured approach to educating patients about their medications at hospital discharge, leading to improved care transitions.

Commentary

Suboptimal communication between patients, caregivers, and providers upon hospital discharge is a major contributor to patients’ inadequate understanding of postdischarge care plans. This inadequate understanding leads to preventable harms, such as medication errors, adverse events, emergency room visits, and costly hospital readmissions.1 The issue is further exacerbated by a lack of clarity about health care team members’ respective roles in providing information that optimizes care transitions during the discharge communication process. Moreover, low health literacy, which is particularly prevalent among seniors, those from disadvantaged backgrounds, and those with lower educational attainment or chronic illnesses, creates additional barriers to effective discharge communication. A potential solution to this problem is the adoption of effective teaching strategies, specifically the teach-back method. This method employs techniques that ensure patients’ understanding and recall of new information regardless of health literacy, and it places accountability on clinicians rather than patients. By closing communication gaps between clinicians and patients, the teach-back method can reduce hospital readmissions, hospital-acquired conditions, and mortality rates, while improving patient satisfaction with health care instructions and the overall hospital experience.2

Study 1, by Trivedi et al, and study 2, by Marks et al, aimed to identify and address problems related to poor communication between patients and health care team members at hospital discharge. Specifically, study 1 examined routine discharge communication practices to determine communication gaps, while study 2 evaluated a nurse-led teach-back intervention program designed to improve patients’ medication knowledge and satisfaction. These distinct objectives and designs reflected the unique ways each study approached the challenges associated with care transitions at the time of hospital discharge.

Study 1 used direct observation of patient-practitioner interactions to evaluate routine discharge communication practices in internal medicine services at 2 urban academic teaching hospitals. In the 33 patients observed, significant gaps in discharge communication were identified in the domains of medication changes, postdischarge appointments, disease self-management, and red flags or warning signs. Unsurprisingly, most of these domains were communicated in an ad hoc manner by members of the health care team, without a clear pattern of responsibility for patient discharge education, and teach-back was seldom used. These findings underscore the need for improved discharge techniques, effective communication strategies, and clarification of roles among the interprofessional team to enhance the safety, quality of care, and overall patient experience during hospital discharge.

Study 2 aimed to augment the hospital discharge communication process by implementing a nurse-led discharge medication education program (TIME) that targeted patients’ priority learning needs, new medication knowledge, and satisfaction with medication education. In the 107 patients assessed, this teach-back intervention enhanced patients’ new medication knowledge at discharge and 48 to 72 hours after discharge and improved patients’ experience and satisfaction with medication education. These results suggest that a teach-back approach such as the TIME program could address the care transition problems identified in the Trivedi et al study by providing a structured approach to patient education and enhancing communication practices during the hospital discharge process. Thus, by implementing the TIME program, hospitals may improve patient outcomes, safety, and overall quality of care upon leaving the hospital.

Applications for Clinical Practice and System Implementation

The care transition at hospital discharge is a particularly pivotal period in the care of vulnerable individuals. A growing body of literature, including the studies discussed in this review, indicates that by focusing on improving patient-practitioner communication during the discharge process and using strategies such as the teach-back method, health care professionals can better prepare patients for self-management in the post-acute period and help them make informed decisions about their care. This emphasis on care-transition communication strategies may reduce medication errors, adverse events, and hospital readmissions, ultimately improving patient outcomes and satisfaction. Barriers to system-wide implementation of such strategies include the competing demands and responsibilities of busy practitioners as well as the inherent complexities of hospital discharge. Creative solutions, such as the use of telehealth and early transition-of-care visits, represent potential approaches to counter these barriers.

While both studies illustrated barriers to and facilitators of hospital discharge communication, each had limitations that constrain its generalizability to real-world clinical practice. Limitations of study 1 included a small sample size, a purposive sampling method, and a focus on planned discharges at teaching hospitals, all of which may introduce selection bias. The findings may not be generalizable to unplanned discharges, patients who do not speak English or Spanish, or nonteaching hospitals. Additionally, the data were collected before the COVID-19 pandemic, which may have since altered discharge education practices. The study also revealed that some patients received more robust discharge education than others, indicating systemic inequality in the patient experience; further research is required to address this discrepancy.

Limitations of study 2 included a relatively small and homogeneous sample, with most participants being younger, non-Hispanic White, English-speaking, and well educated. This lack of diversity may limit the generalizability of the findings. Furthermore, the study did not evaluate patients’ knowledge of medication dosage and focused only on new medications. Future studies should examine the effect of teach-back on a broader range of self-management topics in preparation for discharge and include a more diverse population to account for factors related to social determinants of health. Taken together, further research is needed to address these limitations and produce more generalizable results that can broadly improve discharge education and the care transitions that bridge acute and post-acute care.

Practice Points

  • There is a significant need for improved discharge strategies to enhance patient safety and quality of care upon leaving the hospital.
  • The teach-back method may offer a structured approach to educating patients about their medications at hospital discharge and may improve care transitions.

–Yuka Shichijo, MD, and Fred Ko, MD, Mount Sinai Beth Israel Hospital, New York, NY
doi:10.12788/jcom.0131

References

1. Snow V, Beck D, Budnitz T, et al; American College of Physicians; Society of General Internal Medicine; Society of Hospital Medicine; American Geriatrics Society; American College of Emergency Physicians; Society of Academic Emergency Medicine. Transitions of care consensus policy statement: American College of Physicians-Society of General Internal Medicine-Society of Hospital Medicine-American Geriatrics Society-American College of Emergency Physicians-Society of Academic Emergency Medicine. J Gen Intern Med. 2009;24(8):971-976. doi:10.1007/s11606-009-0969-x

2. Yen PH, Leasure AR. Use and effectiveness of the teach-back method in patient education and health outcomes. Fed Pract. 2019;36(6):284-289.


Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane

Study 1 Overview (Chang et al)

Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.

Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.

Setting and participants: Patients eligible for this study were aged 65 years or older and admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia, either with intravenous propofol or inhalational sevoflurane, for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the previous 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or an incomplete medical record.

Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.
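
To make this two-step case ascertainment concrete, here is a minimal Python sketch (purely illustrative, not study code) of the rule described above: any positive Nu-DESC feature triggers a psychiatry consult, and only a psychiatrist-confirmed diagnosis counts toward POD incidence. The feature names come from the study description; the boolean flags are a simplification, since the actual Nu-DESC rates each item on a 0-2 scale.

    # Illustrative sketch of the POD case-ascertainment workflow described above.
    # Boolean flags simplify the real Nu-DESC, which scores each item 0-2.
    NU_DESC_FEATURES = (
        "disorientation",
        "inappropriate_behavior",
        "inappropriate_communication",
        "hallucination",
        "psychomotor_retardation",
    )

    def screen_triggers_consult(observed: dict[str, bool]) -> bool:
        """Per the study protocol, 1 or more positive features prompts a consult."""
        return any(observed.get(f, False) for f in NU_DESC_FEATURES)

    def counts_toward_pod(observed: dict[str, bool], psychiatrist_confirms: bool) -> bool:
        """Only psychiatrist-confirmed cases count toward POD incidence."""
        return screen_triggers_consult(observed) and psychiatrist_confirms

    # Example: flagged for disorientation, later confirmed by psychiatry.
    assert counts_toward_pod({"disorientation": True}, psychiatrist_confirms=True)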

Main results: POD occurred in 29 (10.3%) of the 281 patients in the cohort and was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). In multivariable logistic regression analysis, inhalational sevoflurane was associated with an increased risk of POD compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.

Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.

Study 2 Overview (Mei et al)

Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.

Design: Randomized clinical trial of propofol and sevoflurane groups.

Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.

Main outcome measures: All participants were interviewed twice daily on postoperative days 1, 2, and 3 by investigators blinded to the anesthesia regimen, using the CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM comprises 4 criteria: (1) acute onset and fluctuating course, (2) inattention, (3) disorganized thinking, and (4) altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. Delirium severity was indicated by the average of the CAM-S scores across the 3 postoperative days, while the incidence and duration of delirium were determined by the presence of a positive CAM on any postoperative day.
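
Because the CAM decision rule is easy to misremember, the following sketch makes it explicit (illustrative only, not a validated CAM implementation): delirium requires features 1 and 2 plus either feature 3 or feature 4, and a participant counted toward POD incidence if the CAM was positive on any postoperative day.

    # Illustrative sketch of the CAM algorithm as described above.
    def cam_positive(
        acute_onset_fluctuating: bool,   # feature 1
        inattention: bool,               # feature 2
        disorganized_thinking: bool,     # feature 3
        altered_consciousness: bool,     # feature 4
    ) -> bool:
        # Features 1 AND 2 are required, plus feature 3 OR feature 4.
        return (
            acute_onset_fluctuating
            and inattention
            and (disorganized_thinking or altered_consciousness)
        )

    def had_pod(daily_cam_results: list[bool]) -> bool:
        # POD was counted if any postoperative-day assessment was CAM positive.
        return any(daily_cam_results)

    assert cam_positive(True, True, False, True)      # features 1, 2, 4: delirium
    assert not cam_positive(True, False, True, True)  # no inattention: not delirium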

Main results: All eligible participants (N = 209; mean [SD] age, 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, chi-square test). It was estimated that 316 participants in each arm would have been needed to detect a statistically significant difference. The number of days of POD per person was higher with propofol anesthesia than with sevoflurane (mean [SD], 0.5 [0.8] vs 0.3 [0.5] days; P = .049, Student's t-test).

Conclusion: This underpowered study showed a 9.7-percentage-point difference in the incidence of POD between older adults who received propofol (33.0%) and those who received sevoflurane (23.3%) after THR/TKR. Further studies with larger sample sizes are needed to compare general anesthetics and their role in POD.

Commentary

Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.

In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, differentially affected the incidence of POD in older patients undergoing spine surgery. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting the study's results. For instance, the sample size was relatively small, and all cases were retrospectively selected from a single center. In addition, although a standardized nursing screening tool was used for delirium detection, hypoactive or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence; the latter is a common limitation in delirium research.
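
As background for the adjusted odds ratios reported in Study 1, the sketch below shows how such estimates are typically derived: a multivariable logistic regression is fit and its coefficients are exponentiated to give ORs with 95% CIs. The data generated here are synthetic stand-ins whose variable names merely mirror the reported covariates; this is not the authors' analysis code.

    # Hedged sketch: deriving adjusted ORs and 95% CIs from a multivariable
    # logistic regression. Synthetic data only; not the study dataset.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 281
    df = pd.DataFrame({
        "sevoflurane": rng.integers(0, 2, n),   # 1 = sevoflurane, 0 = propofol
        "age": rng.normal(73, 5, n),            # years
        "pain_pod1": rng.integers(0, 11, n),    # pain score, postoperative day 1
    })
    # Simulate an outcome loosely consistent with the reported associations.
    logit = -14 + 1.4 * df["sevoflurane"] + 0.15 * df["age"] + 0.3 * df["pain_pod1"]
    df["pod"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    X = sm.add_constant(df[["sevoflurane", "age", "pain_pod1"]])
    fit = sm.Logit(df["pod"], X).fit(disp=0)
    odds_ratios = np.exp(fit.params)   # exp(coefficient) = adjusted OR
    conf_int = np.exp(fit.conf_int())  # exp of the coefficient CI bounds
    print(pd.concat([odds_ratios.rename("OR"),
                     conf_int.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))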

In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.
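
To put the power problem in concrete terms, the following sketch applies a standard normal-approximation sample-size formula for comparing two proportions to the POD rates observed in Study 2. At 80% power and a two-sided alpha of .05 it yields roughly 336 participants per arm, in the neighborhood of the authors' estimate of 316 per arm; their exact power assumptions were not reported, which likely accounts for the small discrepancy.

    # Hedged sketch: per-arm sample size for comparing two proportions,
    # using the normal approximation. Approximates, rather than reproduces,
    # the 316-per-arm estimate cited in Study 2.
    from scipy.stats import norm

    def n_per_arm(p1, p2, alpha=0.05, power=0.80):
        z_a = norm.ppf(1 - alpha / 2)   # two-sided test
        z_b = norm.ppf(power)
        p_bar = (p1 + p2) / 2
        numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                     + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
        return numerator / (p1 - p2) ** 2

    print(round(n_per_arm(0.330, 0.233)))   # about 336 per arm at 80% power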

Applications for Clinical Practice and System Implementation

The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.

The factors mediating differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, differences between target receptors, such as GABA-A (propofol, etomidate) or NMDA (ketamine), could be a defining feature in the differing incidence of POD; secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as factors in the metabolism of anesthetics, and variation in these genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalational vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spine vs TKR/THR), the patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials investigating the effects of anesthetics on POD.

Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.

Practice Points

  • Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
  • Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.

–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai
doi:10.12788/jcom.0116

References

1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x

2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(6)
Publications
Topics
Page Number
199-201
Sections
Article PDF
Article PDF

Study 1 Overview (Chang et al)

Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.

Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.

Setting and participants: Patients eligible for this study were aged 65 years or older admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia either via intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Patient exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the recent 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical record.

Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.

Main results: POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). Using multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD as compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.

Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.

Study 2 Overview (Mei et al)

Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.

Design: Randomized clinical trial of propofol and sevoflurane groups.

Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.

Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encapsulated 4 criteria: acute onset and fluctuating course, agitation, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. The averages of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium was assessed by the presence of delirium as determined by CAM on any postoperative day.

Main results: All eligible participants (N = 209; mean [SD] age 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, Chi-square test). It was estimated that 316 participants in each arm of the study were needed to detect statistical differences. The number of days of POD per person were higher with propofol anesthesia as compared to sevoflurane (0.5 [0.8] vs 0.3 [0.5]; P =  .049, Student’s t-test).

Conclusion: This underpowered study showed a 9.7% difference in the incidence of POD between older adults who received propofol (33.0%) and sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.

 

 

Commentary

Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.

In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, in older patients undergoing spine surgery differentially affected the incidence of POD. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases selected from a single center utilizing a retrospective analysis. In addition, although a standardized nursing screening tool was used as a method for delirium detection, hypoactive delirium or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.

In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.

 

 

Applications for Clinical Practice and System Implementation

The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.

The mediating factors of the differences on neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, the differences between target receptors, such as GABAA (propofol, etomidate) or NMDA (ketamine), could be a defining feature in the difference in incidence of POD. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as genetic factors in the metabolism of anesthetics, and variations in such genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalation vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as surgeries performed (spinal vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S) may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials undertaken to investigate the effects of anesthetics on POD.

Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.

Practice Points

  • Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
  • Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.

–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai

Study 1 Overview (Chang et al)

Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients.

Design: Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.

Setting and participants: Patients eligible for this study were aged 65 years or older admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). All patients underwent general anesthesia either via intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Patient exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the recent 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical record.

Main outcome measures: The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.

Main results: POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; P = .003). Using multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD as compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; P = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; P < .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; P = .016) were associated with increased risk of POD.

Conclusion: Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.

Study 2 Overview (Mei et al)

Objective: To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.

Design: Randomized clinical trial of propofol and sevoflurane groups.

Setting and participants: This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.

Main outcome measures: All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encapsulated 4 criteria: acute onset and fluctuating course, agitation, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. The averages of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium was assessed by the presence of delirium as determined by CAM on any postoperative day.

Main results: All eligible participants (N = 209; mean [SD] age 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; P = .119, Chi-square test). It was estimated that 316 participants in each arm of the study were needed to detect statistical differences. The number of days of POD per person were higher with propofol anesthesia as compared to sevoflurane (0.5 [0.8] vs 0.3 [0.5]; P =  .049, Student’s t-test).

Conclusion: This underpowered study showed a 9.7% difference in the incidence of POD between older adults who received propofol (33.0%) and sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.

 

 

Commentary

Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.1 Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.1 Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.

In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, in older patients undergoing spine surgery differentially affected the incidence of POD. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases selected from a single center utilizing a retrospective analysis. In addition, although a standardized nursing screening tool was used as a method for delirium detection, hypoactive delirium or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research.

In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.

 

 

Applications for Clinical Practice and System Implementation

The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. While preclinical studies have shown differential neurotoxicity effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That being said, this small incremental increase in knowledge was not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provided evidence regarding the methodological approaches that could be taken to further this line of research.

The mediating factors of the differences on neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, the differences between target receptors, such as GABAA (propofol, etomidate) or NMDA (ketamine), could be a defining feature in the difference in incidence of POD. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been identified as genetic factors in the metabolism of anesthetics, and variations in such genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalation vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as surgeries performed (spinal vs TKR/THR), patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S) may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials undertaken to investigate the effects of anesthetics on POD.

Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.

Practice Points

  • Intravenous propofol and inhalational sevoflurane may be differentially associated with incidence, duration, and severity of POD in geriatric surgical patients.
  • Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.

–Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai

References

1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x

2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z

References

1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x

2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z

Issue
Journal of Clinical Outcomes Management - 29(6)
Issue
Journal of Clinical Outcomes Management - 29(6)
Page Number
199-201
Page Number
199-201
Publications
Publications
Topics
Article Type
Display Headline
Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane
Display Headline
Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane
Sections
Teambase XML
<?xml version="1.0" encoding="UTF-8"?>
<!--$RCSfile: InCopy_agile.xsl,v $ $Revision: 1.35 $-->
<!--$RCSfile: drupal.xsl,v $ $Revision: 1.7 $-->
<root generator="drupal.xsl" gversion="1.7"> <header> <fileName>1122 JCOM ORR Ko</fileName> <TBEID>0C02B54E.SIG</TBEID> <TBUniqueIdentifier>NJ_0C02B54E</TBUniqueIdentifier> <newsOrJournal>Journal</newsOrJournal> <publisherName>Frontline Medical Communications Inc.</publisherName> <storyname>Anesthetic Choices and Postopera</storyname> <articleType>1</articleType> <TBLocation>Copyfitting-JCOM</TBLocation> <QCDate/> <firstPublished>20221117T155806</firstPublished> <LastPublished>20221117T155807</LastPublished> <pubStatus qcode="stat:"/> <embargoDate/> <killDate/> <CMSDate>20221117T155806</CMSDate> <articleSource/> <facebookInfo/> <meetingNumber/> <byline/> <bylineText/> <bylineFull/> <bylineTitleText>Chang JE, Min SW, Kim H, et al. Association between anesthetics and postoperative delirium in elderly patients undergoing spine surgery: propofol versus sevoflurane. Global Spine J. 2022 Jun 22:21925682221110828. doi:10.1177/21925682221110828 Mei X, Zheng HL, Li C, et al. The effects of propofol and sevoflurane on postoperative delirium in older patients: a randomized clinical trial study. J Alzheimers Dis. 2020;76(4):1627-1636. doi:10.3233/JAD-200322 </bylineTitleText> <USOrGlobal/> <wireDocType/> <newsDocType/> <journalDocType/> <linkLabel/> <pageRange/> <citation/> <quizID/> <indexIssueDate/> <itemClass qcode="ninat:text"/> <provider qcode="provider:"> <name/> <rightsInfo> <copyrightHolder> <name/> </copyrightHolder> <copyrightNotice/> </rightsInfo> </provider> <abstract/> <metaDescription>Objective: To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients. Desi</metaDescription> <articlePDF/> <teaserImage/> <title>Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane</title> <deck/> <disclaimer/> <AuthorList/> <articleURL/> <doi>10.12788/jcom.0116</doi> <pubMedID/> <publishXMLStatus/> <publishXMLVersion>1</publishXMLVersion> <useEISSN>0</useEISSN> <urgency/> <pubPubdateYear/> <pubPubdateMonth/> <pubPubdateDay/> <pubVolume/> <pubNumber/> <wireChannels/> <primaryCMSID/> <CMSIDs/> <keywords/> <seeAlsos/> <publications_g> <publicationData> <publicationCode>jcom</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> </publications_g> <publications> <term canonical="true">40713</term> </publications> <sections> <term canonical="true">41021</term> </sections> <topics> <term canonical="true">302</term> <term>325</term> <term>215</term> <term>327</term> <term>248</term> <term>258</term> </topics> <links/> </header> <itemSet> <newsItem> <itemMeta> <itemRole>Main</itemRole> <itemClass>text</itemClass> <title>Anesthetic Choices and Postoperative Delirium Incidence: Propofol vs Sevoflurane</title> <deck/> </itemMeta> <itemContent> <p class="sub1">Study 1 Overview (Chang et al) </p> <p><strong><em>Objective:</em></strong> To assess the incidence of postoperative delirium (POD) following propofol- vs sevoflurane-based anesthesia in geriatric spine surgery patients. <br/><br/><strong><em>Design:</em></strong> Retrospective, single-blinded observational study of propofol- and sevoflurane-based anesthesia cohorts.<br/><br/><strong><em>Setting and participants:</em></strong> Patients eligible for this study were aged 65 years or older admitted to the SMG-SNU Boramae Medical Center (Seoul, South Korea). 
All patients underwent general anesthesia either via intravenous propofol or inhalational sevoflurane for spine surgery between January 2015 and December 2019. Patients were retrospectively identified via electronic medical records. Patient exclusion criteria included preoperative delirium, history of dementia, psychiatric disease, alcoholism, hepatic or renal dysfunction, postoperative mechanical ventilation dependence, other surgery within the recent 6 months, maintenance of intraoperative anesthesia with combined anesthetics, or incomplete medical record.<br/><br/><strong><em>Main outcome measures:</em></strong> The primary outcome was the incidence of POD after administration of propofol- and sevoflurane-based anesthesia during hospitalization. Patients were screened for POD regularly by attending nurses using the Nursing Delirium Screening Scale (disorientation, inappropriate behavior, inappropriate communication, hallucination, and psychomotor retardation) during the entirety of the patient’s hospital stay; if 1 or more screening criteria were met, a psychiatrist was consulted for the proper diagnosis and management of delirium. A psychiatric diagnosis was required for a case to be counted toward the incidence of POD in this study. Secondary outcomes included postoperative 30-day complications (angina, myocardial infarction, transient ischemic attack/stroke, pneumonia, deep vein thrombosis, pulmonary embolism, acute kidney injury, or infection) and length of postoperative hospital stay.<br/><br/><strong><em>Main results:</em></strong> POD occurred in 29 patients (10.3%) out of the total cohort of 281. POD was more common in the sevoflurane group than in the propofol group (15.7% vs 5.0%; <em>P</em> = .003). Using multivariable logistic regression, inhalational sevoflurane was associated with an increased risk of POD as compared to propofol-based anesthesia (odds ratio [OR], 4.120; 95% CI, 1.549-10.954; <em>P</em> = .005). There was no association between choice of anesthetic and postoperative 30-day complications or the length of postoperative hospital stay. Both older age (OR, 1.242; 95% CI, 1.130-1.366; <em>P</em> &lt; .001) and higher pain score at postoperative day 1 (OR, 1.338; 95% CI, 1.056-1.696; <em>P</em> = .016) were associated with increased risk of POD.<br/><br/><strong><em>Conclusion:</em></strong> Propofol-based anesthesia was associated with a lower incidence of and risk for POD than sevoflurane-based anesthesia in older patients undergoing spine surgery.</p> <p class="sub1">Study 2 Overview (Mei et al) </p> <p><strong><em>Objective:</em></strong> To determine the incidence and duration of POD in older patients after total knee/hip replacement (TKR/THR) under intravenous propofol or inhalational sevoflurane general anesthesia.<br/><br/><strong><em>Design:</em></strong> Randomized clinical trial of propofol and sevoflurane groups.<br/><br/><strong><em>Setting and participants:</em></strong> This study was conducted at the Shanghai Tenth People’s Hospital and involved 209 participants enrolled between June 2016 and November 2019. All participants were 60 years of age or older, scheduled for TKR/THR surgery under general anesthesia, American Society of Anesthesiologists (ASA) class I to III, and assessed to be of normal cognitive function preoperatively via a Mini-Mental State Examination. 
Participant exclusion criteria included preexisting delirium as assessed by the Confusion Assessment Method (CAM), prior diagnosed neurological diseases (eg, Parkinson’s disease), prior diagnosed mental disorders (eg, schizophrenia), or impaired vision or hearing that would influence cognitive assessments. All participants were randomly assigned to either sevoflurane or propofol anesthesia for their surgery via a computer-generated list. Of these, 103 received inhalational sevoflurane and 106 received intravenous propofol. All participants received standardized postoperative care.<br/><br/><strong><em>Main outcome measures:</em></strong> All participants were interviewed by investigators, who were blinded to the anesthesia regimen, twice daily on postoperative days 1, 2, and 3 using CAM and a CAM-based scoring system (CAM-S) to assess delirium severity. The CAM encapsulated 4 criteria: acute onset and fluctuating course, agitation, disorganized thinking, and altered level of consciousness. To diagnose delirium, both the first and second criteria must be met, in addition to either the third or fourth criterion. The averages of the scores across the 3 postoperative days indicated delirium severity, while the incidence and duration of delirium was assessed by the presence of delirium as determined by CAM on any postoperative day.<br/><br/><strong><em>Main results:</em></strong> All eligible participants (N = 209; mean [SD] age 71.2 [6.7] years; 29.2% male) were included in the final analysis. The incidence of POD was not statistically different between the propofol and sevoflurane groups (33.0% vs 23.3%; <em>P</em> = .119, Chi-square test). It was estimated that 316 participants in each arm of the study were needed to detect statistical differences. The number of days of POD per person were higher with propofol anesthesia as compared to sevoflurane (0.5 [0.8] vs 0.3 [0.5]; <em>P </em>=  .049, Student’s <em>t</em>-test).<br/><br/><strong><em>Conclusion:</em></strong> This underpowered study showed a 9.7% difference in the incidence of POD between older adults who received propofol (33.0%) and sevoflurane (23.3%) after THR/TKR. Further studies with a larger sample size are needed to compare general anesthetics and their role in POD.</p> <p class="sub1">Commentary</p> <p>Delirium is characterized by an acute state of confusion with fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is often caused by medications and/or their related adverse effects, infections, electrolyte imbalances, and other clinical etiologies. 
Delirium often manifests in post-surgical settings, disproportionately affecting older patients and leading to increased risk of morbidity, mortality, hospital length of stay, and health care costs.<sup>1</sup> Intraoperative risk factors for POD are determined by the degree of operative stress (eg, lower-risk surgeries put the patient at reduced risk for POD as compared to higher-risk surgeries) and are additive to preexisting patient-specific risk factors, such as older age and functional impairment.<sup>1</sup> Because operative stress is associated with risk for POD, limiting operative stress in controlled ways, such as through the choice of anesthetic agent administered, may be a pragmatic way to manage operative risks and optimize outcomes, especially when serving a surgically vulnerable population.</p> <p>In Study 1, Chang et al sought to assess whether 2 commonly utilized general anesthetics, propofol and sevoflurane, in older patients undergoing spine surgery differentially affected the incidence of POD. In this retrospective, single-blinded observational study of 281 geriatric patients, the researchers found that sevoflurane was associated with a higher risk of POD as compared to propofol. However, these anesthetics were not associated with surgical outcomes such as postoperative 30-day complications or the length of postoperative hospital stay. While these findings added new knowledge to this field of research, several limitations should be kept in mind when interpreting this study’s results. For instance, the sample size was relatively small, with all cases selected from a single center utilizing a retrospective analysis. In addition, although a standardized nursing screening tool was used as a method for delirium detection, hypoactive delirium or less symptomatic delirium may have been missed, which in turn would lead to an underestimation of POD incidence. The latter is a common limitation in delirium research. <br/><br/>In Study 2, Mei et al similarly explored the effects of general anesthetics on POD in older surgical patients. Specifically, using a randomized clinical trial design, the investigators compared propofol with sevoflurane in older patients who underwent TKR/THR, and their roles in POD severity and duration. Although the incidence of POD was higher in those who received propofol compared to sevoflurane, this trial was underpowered and the results did not reach statistical significance. In addition, while the duration of POD was slightly longer in the propofol group compared to the sevoflurane group (0.5 vs 0.3 days), it was unclear if this finding was clinically significant. Similar to many research studies in POD, limitations of Study 2 included a small sample size of 209 patients, with all participants enrolled from a single center. On the other hand, this study illustrated the feasibility of a method that allowed reproducible prospective assessment of POD time course using CAM and CAM-S.</p> <p class="sub1">Applications for Clinical Practice and System Implementation</p> <p>The delineation of risk factors that contribute to delirium after surgery in older patients is key to mitigating risks for POD and improving clinical outcomes. An important step towards a better understanding of these modifiable risk factors is to clearly quantify intraoperative risk of POD attributable to specific anesthetics. 
While preclinical studies have shown differential neurotoxic effects of propofol and sevoflurane, their impact on clinically important neurologic outcomes such as delirium and cognitive decline remains poorly understood. Although Studies 1 and 2 both provided head-to-head comparisons of propofol and sevoflurane as risk factors for POD in high-operative-stress surgeries in older patients, the results were inconsistent. That said, a small incremental increase in knowledge is not unexpected in the course of discovery around a clinically complex research question. Importantly, these studies provide evidence regarding the methodological approaches that could be taken to further this line of research.

The factors mediating the differences in neurologic outcomes between anesthetic agents are likely pharmacological, biological, and methodological. Pharmacologically, differences between target receptors, such as GABA-A (propofol, etomidate) or NMDA (ketamine), could be a defining feature in the differing incidence of POD. Additionally, secondary actions of anesthetic agents on glycine, nicotinic, and acetylcholine receptors could play a role as well. Biologically, genes such as CYP2E1, CYP2B6, CYP2C9, GSTP1, UGT1A9, SULT1A1, and NQO1 have all been implicated in the metabolism of anesthetics, and variations in these genes could result in different responses to anesthetics.2 Methodologically, routes of anesthetic administration (eg, inhalational vs intravenous), preexisting anatomical structures, or confounding medical conditions (eg, lower respiratory volume due to older age) may influence POD incidence, duration, or severity. Moreover, methodological differences between Studies 1 and 2, such as the surgeries performed (spinal vs TKR/THR), the patient populations (South Korean vs Chinese), and the diagnosis and monitoring of delirium (retrospective screening and diagnosis vs prospective CAM/CAM-S), may impact delirium outcomes. Thus, these factors should be considered in the design of future clinical trials undertaken to investigate the effects of anesthetics on POD.

Given the high prevalence of delirium and its associated adverse outcomes in the immediate postoperative period in older patients, further research is warranted to determine how anesthetics affect POD in order to optimize perioperative care and mitigate risks in this vulnerable population. Moreover, parallel investigations into how anesthetics differentially impact the development of transient or longer-term cognitive impairment after a surgical procedure (ie, postoperative cognitive dysfunction) in older adults are urgently needed in order to improve their cognitive health.

Practice Points

  • Intravenous propofol and inhalational sevoflurane may be differentially associated with the incidence, duration, and severity of POD in geriatric surgical patients.
  • Further larger-scale studies are warranted to clarify the role of anesthetic choice in POD in order to optimize surgical outcomes in older patients.

Jared Doan, BS, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai
doi:10.12788/jcom.0116

References

1. Dasgupta M, Dumbrell AC. Preoperative risk assessment for delirium after noncardiac surgery: a systematic review. J Am Geriatr Soc. 2006;54(10):1578-1589. doi:10.1111/j.1532-5415.2006.00893.x
2. Mikstacki A, Skrzypczak-Zielinska M, Tamowicz B, et al. The impact of genetic factors on response to anaesthetics. Adv Med Sci. 2013;58(1):9-14. doi:10.2478/v10039-012-0065-z

Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients

Article Type
Changed
Mon, 09/26/2022 - 13:53
Display Headline
Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients

Study 1 Overview (Oberhaus et al)

Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.

Design: Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients.

Setting and participants: Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering the 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, the 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. Each assessor then independently scored their respective assessment, blinded to the results of the other assessor.

Main outcome measures: Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.
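For readers unfamiliar with Cohen κ, the sketch below shows the basic two-rater computation from a 2 × 2 agreement table. The counts are hypothetical, and the study's analysis used a repeated-measures variant that additionally accounts for multiple assessments per participant.

```python
# Cohen's kappa for two raters' binary delirium calls: a minimal sketch.
# The 2 x 2 counts below are hypothetical, not the study's raw data.

def cohens_kappa(both_pos, a_only, b_only, both_neg):
    n = both_pos + a_only + b_only + both_neg
    p_observed = (both_pos + both_neg) / n            # raw agreement
    p_a = (both_pos + a_only) / n                     # rater A positive rate
    p_b = (both_pos + b_only) / n                     # rater B positive rate
    p_chance = p_a * p_b + (1 - p_a) * (1 - p_b)      # agreement expected by chance
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical table of 471 paired assessments:
print(round(cohens_kappa(60, 25, 5, 381), 2))         # ~0.76 for these counts
```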

Main results: Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; P = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).

Conclusion: The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients.

Study 2 Overview (Shenkin et al)

Objective: To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM.

Design: Prospective randomized diagnostic test accuracy study.

Setting and participants: This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illnesses and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT as the latter is designed to be administered without special training. First, all participants underwent a reference standard delirium assessment using Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) (DSM-IV) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.
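As a rough sketch of the allocation scheme described above (stratified by site, with block allocation), the snippet below generates a block-balanced assignment list per stratum. The block size of 4 is an assumption for illustration; the study does not report its block sizes, and real trials conceal the schedule from recruiters.

```python
# Stratified block randomization: a minimal sketch, assuming a block size of 4.
import random

def blocked_assignments(n_blocks, block_size=4, arms=("4AT", "CAM")):
    per_arm = block_size // len(arms)
    schedule = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm   # equal allocation within each block
        random.shuffle(block)          # random order, balanced per block
        schedule.extend(block)
    return schedule

# One independent schedule per stratum (study site):
schedules = {site: blocked_assignments(n_blocks=120)
             for site in ("Edinburgh", "Bradford", "Sheffield")}
print(schedules["Edinburgh"][:8])
```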

Main outcome measures: All participants were evaluated by the reference standard (DSM-IV criteria for delirium) and by either the 4AT or the CAM instrument for delirium. The accuracy of the 4AT instrument was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity to the reference standard and analyzed via the area under the receiver operating characteristic curve. The diagnostic accuracy of the 4AT, compared to the CAM, was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using Fisher's exact test. The overall performance of the 4AT and CAM was summarized using Youden's Index and the diagnostic odds ratio.

Results: All 843 individuals enrolled in the study were randomized, and 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean [SD] age was 81.4 [6.4] years, and 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%).
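Because the outcome measures above summarize performance with Youden's Index and the diagnostic odds ratio, the sketch below plugs the reported point estimates into those formulas to show how the 2 instruments compare. This is an illustration, not a reanalysis; note that the diagnostic odds ratio is undefined (infinite) at the CAM's point specificity of exactly 100%.

```python
# Summary accuracy measures from the reported point estimates; illustrative only.

def youden_j(sens, spec):
    # Youden's Index: J = sensitivity + specificity - 1
    return sens + spec - 1

def diagnostic_odds_ratio(sens, spec):
    # DOR = (sens / (1 - sens)) * (spec / (1 - spec))
    if sens == 1.0 or spec == 1.0:
        return float("inf")            # undefined at a perfect point estimate
    return (sens / (1 - sens)) * (spec / (1 - spec))

print(round(youden_j(0.76, 0.94), 2))               # 4AT: J = 0.7
print(round(youden_j(0.40, 1.00), 2))               # CAM: J = 0.4
print(round(diagnostic_odds_ratio(0.76, 0.94), 1))  # 4AT: DOR ~ 49.6
print(diagnostic_odds_ratio(0.40, 1.00))            # CAM: inf at these estimates
```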

Conclusions: The 4AT is a pragmatic screening test for delirium in the acute medical setting that does not require special training to administer. The use of this instrument may help improve delirium detection as part of routine clinical care in hospitalized older adults.

Commentary

Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.1 While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. Taken together, given the high patient load for clinicians in the hospital setting, the validation and application of brief delirium screening instruments that can be reliably administered by nonphysicians and nonclinicians may enhance delirium detection in vulnerable patients and consequently improve their outcomes.

In Study 1, Oberhaus et al approach the challenge of underdiagnosing delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were reliable tests individually (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium compared to the long-form CAM, consistent with its purpose as a screening tool with a high sensitivity. It is important to emphasize that the 3D-CAM takes less time to administer, but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisite of a brief screening test that can be rapidly administered by nonclinicians, and if affirmative, followed by a more extensive confirmatory test performed by a clinician. Limitations of this study include a lack of a reference standard structured interview conducted by a physician-rater to better determine the true diagnostic accuracy of both 3D-CAM and CAM assessments, and the use of convenience sampling at a single center, which reduces the generalizability of its findings.

In a similar vein, Shenkin et al in Study 2 evaluate the utility of the 4AT instrument in diagnosing delirium in older medical inpatients by testing the diagnostic accuracy of the 4AT against a reference standard (ie, DSM-IV–based evaluation by physicians) as well as comparing it to the CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer as compared to the CAM. The study showed that the abbreviated 4AT, compared to the CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of the 3D-CAM in the postoperative setting, the 4AT possesses key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the utilization of a reference standard that was validated by expert consensus. This allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium.

Application for Clinical Practice and System Implementation

The findings from Studies 1 and 2 suggest that using an abbreviated delirium instrument in surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of the 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to the CAM, makes these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the underdiagnosis of delirium in the hospital.

Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to both administer and interpret the results than the CAM.2 For instance, a multistage, multiday training for CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without the need for instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief screenings of delirium in older patients by caregivers who see them most often—nurses and certified nursing assistants—thereby improving early detection and prevention of delirium-related complications in the hospital.

The feasibility of using abbreviated delirium screening instruments in the hospital setting raises a system implementation question—if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles in the hospital? If volunteers are able to take on this role, the integration of hospital volunteers into the clinical team can greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.

Practice Points

  • Abbreviated delirium screening tools such as the 3D-CAM and 4AT may be pragmatic instruments to improve delirium detection in older surgical and medical inpatients, respectively.
  • Further studies are warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.

Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai

References

1. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24

2. Marcantonio ER, Ngo LH, O’Connor M, et al. 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554-561. doi:10.7326/M14-0865

3. Green JR, Smith J, Teale E, et al. Use of the confusion assessment method in multicentre delirium trials: training and standardisation. BMC Geriatr. 2019;19(1):107. doi:10.1186/s12877-019-1129-8

4. Wei LA, Fearing MA, Sternberg EJ, Inouye SK. The Confusion Assessment Method: a systematic review of current usage. J Am Geriatr Soc. 2008;56(5):823-830. doi:10.1111/j.1532-5415.2008.01674.x

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(5)
Publications
Topics
Page Number
166-169
Sections
Article PDF
Article PDF

Study 1 Overview (Oberhaus et al)

Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.

Design: Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients.

Setting and participants: Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, both 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. The 3D-CAM or CAM assessor independently scored their respective assessments blinded to the results of the other assessor.

Main outcome measures: Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.

Main results: Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; P = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).

Conclusion: The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients.

Study 2 Overview (Shenkin et al)

Objective: To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM.

Design: Prospective randomized diagnostic test accuracy study.

Setting and participants: This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illnesses and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT as the latter is designed to be administered without special training. First, all participants underwent a reference standard delirium assessment using Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) (DSM-IV) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.

Main outcome measures: All participants were evaluated by the reference standard (DSM-IV criteria for delirium) and by either 4AT or CAM instruments for delirium. The accuracy of the 4AT instrument was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity to the reference standard and analyzed via the area under the receiver operating characteristic curve. The diagnostic accuracy of 4AT, compared to the CAM, was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using Fisher’s exact test. The overall performance of 4AT and CAM was summarized using Youden’s Index and the diagnostic odds ratio of sensitivity to specificity.

Results: All 843 individuals enrolled in the study were randomized and 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean age was 81.4 [6.4] years, and 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%).

Conclusions: The 4AT is a pragmatic screening test for delirium in a medical space that does not require special training to administer. The use of this instrument may help to improve delirium detection as a part of routine clinical care in hospitalized older adults.

 

 

Commentary

Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.1 While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. Taken together, given the high patient load for clinicians in the hospital setting, the validation and application of brief delirium screening instruments that can be reliably administered by nonphysicians and nonclinicians may enhance delirium detection in vulnerable patients and consequently improve their outcomes.

In Study 1, Oberhaus et al approach the challenge of underdiagnosing delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were reliable tests individually (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium compared to the long-form CAM, consistent with its purpose as a screening tool with a high sensitivity. It is important to emphasize that the 3D-CAM takes less time to administer, but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisite of a brief screening test that can be rapidly administered by nonclinicians, and if affirmative, followed by a more extensive confirmatory test performed by a clinician. Limitations of this study include a lack of a reference standard structured interview conducted by a physician-rater to better determine the true diagnostic accuracy of both 3D-CAM and CAM assessments, and the use of convenience sampling at a single center, which reduces the generalizability of its findings.

In a similar vein, Shenkin et al in Study 2 attempt to evaluate the utility of the 4AT instrument in diagnosing delirium in older medical inpatients by testing the diagnostic accuracy of the 4AT against a reference standard (ie, DSM-IVbased evaluation by physicians) as well as comparing it to CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer as compared to the CAM. The study showed that the abbreviated 4AT, compared to CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of 3D-CAM in the postoperative setting, 4AT possesses key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the utilization of a reference standard that was validated by expert consensus. This allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium.

Application for Clinical Practice and System Implementation

The findings from both Study 1 and 2 suggest that using an abbreviated delirium instrument in both surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to CAM, make these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the issue of underdiagnosing delirium in the hospital.

Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to both administer and interpret the results than the CAM.2 For instance, a multistage, multiday training for CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without the need for instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief screenings of delirium in older patients by caregivers who see them most often—nurses and certified nursing assistants—thereby improving early detection and prevention of delirium-related complications in the hospital.

The feasibility of using abbreviated delirium screening instruments in the hospital setting raises a system implementation question—if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles in the hospital? If volunteers are able to take on this role, the integration of hospital volunteers into the clinical team can greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.

Practice Points

  • Abbreviated delirium screening tools such as 3D-CAM and 4AT may be pragmatic instruments to improve delirium detection in surgical and hospitalized older patients, respectively.
  • Further studies are warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.

Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai

Study 1 Overview (Oberhaus et al)

Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.

Design: Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients.

Setting and participants: Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, both 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. The 3D-CAM or CAM assessor independently scored their respective assessments blinded to the results of the other assessor.

Main outcome measures: Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.

Main results: Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; P = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).

Conclusion: The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients.

Study 2 Overview (Shenkin et al)

Objective: To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM.

Design: Prospective randomized diagnostic test accuracy study.

Setting and participants: This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illnesses and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT as the latter is designed to be administered without special training. First, all participants underwent a reference standard delirium assessment using Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) (DSM-IV) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.

Main outcome measures: All participants were evaluated by the reference standard (DSM-IV criteria for delirium) and by either 4AT or CAM instruments for delirium. The accuracy of the 4AT instrument was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity to the reference standard and analyzed via the area under the receiver operating characteristic curve. The diagnostic accuracy of 4AT, compared to the CAM, was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using Fisher’s exact test. The overall performance of 4AT and CAM was summarized using Youden’s Index and the diagnostic odds ratio of sensitivity to specificity.

Results: All 843 individuals enrolled in the study were randomized and 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean age was 81.4 [6.4] years, and 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%).

Conclusions: The 4AT is a pragmatic screening test for delirium in a medical space that does not require special training to administer. The use of this instrument may help to improve delirium detection as a part of routine clinical care in hospitalized older adults.

 

 

Commentary

Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.1 While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. Taken together, given the high patient load for clinicians in the hospital setting, the validation and application of brief delirium screening instruments that can be reliably administered by nonphysicians and nonclinicians may enhance delirium detection in vulnerable patients and consequently improve their outcomes.

In Study 1, Oberhaus et al approach the challenge of underdiagnosing delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were reliable tests individually (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium compared to the long-form CAM, consistent with its purpose as a screening tool with a high sensitivity. It is important to emphasize that the 3D-CAM takes less time to administer, but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisite of a brief screening test that can be rapidly administered by nonclinicians, and if affirmative, followed by a more extensive confirmatory test performed by a clinician. Limitations of this study include a lack of a reference standard structured interview conducted by a physician-rater to better determine the true diagnostic accuracy of both 3D-CAM and CAM assessments, and the use of convenience sampling at a single center, which reduces the generalizability of its findings.

In a similar vein, Shenkin et al in Study 2 attempt to evaluate the utility of the 4AT instrument in diagnosing delirium in older medical inpatients by testing the diagnostic accuracy of the 4AT against a reference standard (ie, DSM-IVbased evaluation by physicians) as well as comparing it to CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer as compared to the CAM. The study showed that the abbreviated 4AT, compared to CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of 3D-CAM in the postoperative setting, 4AT possesses key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the utilization of a reference standard that was validated by expert consensus. This allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium.

Application for Clinical Practice and System Implementation

The findings from both Study 1 and 2 suggest that using an abbreviated delirium instrument in both surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to CAM, make these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the issue of underdiagnosing delirium in the hospital.

Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to both administer and interpret the results than the CAM.2 For instance, a multistage, multiday training for CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without the need for instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief screenings of delirium in older patients by caregivers who see them most often—nurses and certified nursing assistants—thereby improving early detection and prevention of delirium-related complications in the hospital.

The feasibility of using abbreviated delirium screening instruments in the hospital setting raises a system implementation question—if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles in the hospital? If volunteers are able to take on this role, the integration of hospital volunteers into the clinical team can greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.

Practice Points

  • Abbreviated delirium screening tools such as 3D-CAM and 4AT may be pragmatic instruments to improve delirium detection in surgical and hospitalized older patients, respectively.
  • Further studies are warranted to validate the diagnostic accuracy of 3D-CAM and 4AT by nonclinician administrators in order to more broadly adopt this approach to delirium screening.

Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai

References

1. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24

2. Marcantonio ER, Ngo LH, O’Connor M, et al. 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554-561. doi:10.7326/M14-0865

3. Green JR, Smith J, Teale E, et al. Use of the confusion assessment method in multicentre delirium trials: training and standardisation. BMC Geriatr. 2019;19(1):107. doi:10.1186/s12877-019-1129-8

4. Wei LA, Fearing MA, Sternberg EJ, Inouye SK. The Confusion Assessment Method: a systematic review of current usage. Am Geriatr Soc. 2008;56(5):823-830. doi:10.1111/j.1532-5415.2008.01674.x

References

1. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24

2. Marcantonio ER, Ngo LH, O’Connor M, et al. 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554-561. doi:10.7326/M14-0865

3. Green JR, Smith J, Teale E, et al. Use of the confusion assessment method in multicentre delirium trials: training and standardisation. BMC Geriatr. 2019;19(1):107. doi:10.1186/s12877-019-1129-8

4. Wei LA, Fearing MA, Sternberg EJ, Inouye SK. The Confusion Assessment Method: a systematic review of current usage. Am Geriatr Soc. 2008;56(5):823-830. doi:10.1111/j.1532-5415.2008.01674.x

Issue
Journal of Clinical Outcomes Management - 29(5)
Issue
Journal of Clinical Outcomes Management - 29(5)
Page Number
166-169
Page Number
166-169
Publications
Publications
Topics
Article Type
Display Headline
Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients
Display Headline
Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients
Sections
Teambase XML
<?xml version="1.0" encoding="UTF-8"?>
<!--$RCSfile: InCopy_agile.xsl,v $ $Revision: 1.35 $-->
<!--$RCSfile: drupal.xsl,v $ $Revision: 1.7 $-->
<root generator="drupal.xsl" gversion="1.7"> <header> <fileName>JCOM 0922 ORR Ko</fileName> <TBEID>0C02AC15.SIG</TBEID> <TBUniqueIdentifier>NJ_0C02AC15</TBUniqueIdentifier> <newsOrJournal>Journal</newsOrJournal> <publisherName>Frontline Medical Communications Inc.</publisherName> <storyname>Abbreviated Delirium Screening I</storyname> <articleType>1</articleType> <TBLocation>Copyfitting-JCOM</TBLocation> <QCDate/> <firstPublished>20220916T080704</firstPublished> <LastPublished>20220916T080704</LastPublished> <pubStatus qcode="stat:"/> <embargoDate/> <killDate/> <CMSDate>20220916T080703</CMSDate> <articleSource/> <facebookInfo/> <meetingNumber/> <byline>Robert Litchkofski</byline> <bylineText/> <bylineFull>Robert Litchkofski</bylineFull> <bylineTitleText>Oberhaus J, Wang W, Mickle AM, et al. Evaluation of the 3-Minute Diagnostic Confusion Assessment Method for identification of postoperative delirium in older patients. JAMA Netw Open. 2021;4(12):e2137267. doi:10.1001/jamanetworkopen.2021.37267 Shenkin SD, Fox C, Godfrey M, et al. Delirium detection in older acute medical inpatients: a multicentre prospective comparative diagnostic test accuracy study of the 4AT and the confusion assessment method. BMC Med. 2019;17(1):138. doi:10.1186/s12916-019-1367-9</bylineTitleText> <USOrGlobal/> <wireDocType/> <newsDocType/> <journalDocType/> <linkLabel/> <pageRange/> <citation/> <quizID/> <indexIssueDate/> <itemClass qcode="ninat:text"/> <provider qcode="provider:"> <name/> <rightsInfo> <copyrightHolder> <name/> </copyrightHolder> <copyrightNotice/> </rightsInfo> </provider> <abstract/> <metaDescription>Objective: To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative</metaDescription> <articlePDF/> <teaserImage/> <title>Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients</title> <deck/> <disclaimer/> <AuthorList/> <articleURL/> <doi>10.12788/jcom.0111</doi> <pubMedID/> <publishXMLStatus/> <publishXMLVersion>1</publishXMLVersion> <useEISSN>0</useEISSN> <urgency/> <pubPubdateYear/> <pubPubdateMonth/> <pubPubdateDay/> <pubVolume/> <pubNumber/> <wireChannels/> <primaryCMSID/> <CMSIDs/> <keywords/> <seeAlsos/> <publications_g> <publicationData> <publicationCode>jcom</publicationCode> <pubIssueName/> <pubArticleType/> <pubTopics/> <pubCategories/> <pubSections/> </publicationData> </publications_g> <publications> <term canonical="true">40713</term> </publications> <sections> <term canonical="true">41021</term> </sections> <topics> <term>38029</term> <term>201</term> <term>223</term> <term canonical="true">327</term> <term>248</term> <term>215</term> <term>258</term> <term>278</term> <term>280</term> </topics> <links/> </header> <itemSet> <newsItem> <itemMeta> <itemRole>Main</itemRole> <itemClass>text</itemClass> <title>Abbreviated Delirium Screening Instruments: Plausible Tool to Improve Delirium Detection in Hospitalized Older Patients</title> <deck/> </itemMeta> <itemContent> <p class="sub1">Study 1 Overview (Oberhaus et al)</p> <p><strong><em>Objective:</em></strong> To compare the 3-Minute Diagnostic Confusion Assessment Method (3D-CAM) to the long-form Confusion Assessment Method (CAM) in detecting postoperative delirium.<br/><br/><strong><em>Design:</em></strong> Prospective concurrent comparison of 3D-CAM and CAM evaluations in a cohort of postoperative geriatric patients. 
<br/><br/><strong><em>Setting and participants:</em></strong> Eligible participants were patients aged 60 years or older undergoing major elective surgery at Barnes Jewish Hospital (St. Louis, Missouri) who were enrolled in ongoing clinical trials (PODCAST, ENGAGES, SATISFY-SOS) between 2015 and 2018. Surgeries were at least 2 hours in length and required general anesthesia, planned extubation, and a minimum 2-day hospital stay. Investigators were extensively trained in administering 3D-CAM and CAM instruments. Participants were evaluated 2 hours after the end of anesthesia care on the day of surgery, then daily until follow-up was completed per clinical trial protocol or until the participant was determined by CAM to be nondelirious for 3 consecutive days. For each evaluation, both 3D-CAM and CAM assessors approached the participant together, but the evaluation was conducted such that the 3D-CAM assessor was masked to the additional questions ascertained by the long-form CAM assessment. The 3D-CAM or CAM assessor independently scored their respective assessments blinded to the results of the other assessor.<br/><br/><strong><em>Main outcome measures:</em></strong> Participants were concurrently evaluated for postoperative delirium by both 3D-CAM and long-form CAM assessments. Comparisons between 3D-CAM and CAM scores were made using Cohen κ with repeated measures, generalized linear mixed-effects model, and Bland-Altman analysis.<br/><br/><strong><em>Main results:</em></strong> Sixteen raters performed 471 concurrent 3D-CAM and CAM assessments in 299 participants (mean [SD] age, 69 [6.5] years). Of these participants, 152 (50.8%) were men, 263 (88.0%) were White, and 211 (70.6%) underwent noncardiac surgery. Both instruments showed good intraclass correlation (0.98 for 3D-CAM, 0.84 for CAM) with good overall agreement (Cohen κ = 0.71; 95% CI, 0.58-0.83). The mixed-effects model indicated a significant disagreement between the 3D-CAM and CAM assessments (estimated difference in fixed effect, –0.68; 95% CI, –1.32 to –0.05; <em>P</em> = .04). The Bland-Altman analysis showed that the probability of a delirium diagnosis with the 3D-CAM was more than twice that with the CAM (probability ratio, 2.78; 95% CI, 2.44-3.23).</p> <p><strong><em>Conclusion:</em></strong> The high degree of agreement between 3D-CAM and long-form CAM assessments suggests that the former may be a pragmatic and easy-to-administer clinical tool to screen for postoperative delirium in vulnerable older surgical patients. </p> <p class="sub1">Study 2 Overview (Shenkin et al)</p> <p><strong><em>Objective:</em></strong> To assess the accuracy of the 4 ‘A’s Test (4AT) for delirium detection in the medical inpatient setting and to compare the 4AT to the CAM. <br/><br/><strong><em>Design:</em></strong> Prospective randomized diagnostic test accuracy study.<br/><br/><strong><em>Setting and participants:</em></strong> This study was conducted in emergency departments and acute medical wards at 3 UK sites (Edinburgh, Bradford, and Sheffield) and enrolled acute medical patients aged 70 years or older without acute life-threatening illnesses and/or coma. Assessors administering the delirium evaluation were nurses or graduate clinical research associates who underwent systematic training in delirium and delirium assessment. Additional training was provided to those administering the CAM but not to those administering the 4AT as the latter is designed to be administered without special training. 
First, all participants underwent a reference standard delirium assessment using <em>Diagnostic and Statistical Manual of Mental Disorders (Fourth Edition) </em>(<em>DSM-IV</em>) criteria to derive a final definitive diagnosis of delirium via expert consensus (1 psychiatrist and 2 geriatricians). Then, the participants were randomized to either the 4AT or the comparator CAM group using computer-generated pseudo-random numbers, stratified by study site, with block allocation. All assessments were performed by pairs of independent assessors blinded to the results of the other assessment.<br/><br/><strong><em>Main outcome measures:</em></strong> All participants were evaluated by the reference standard (<em>DSM-IV</em> criteria for delirium) and by either 4AT or CAM instruments for delirium. The accuracy of the 4AT instrument was evaluated by comparing its positive and negative predictive values, sensitivity, and specificity to the reference standard and analyzed via the area under the receiver operating characteristic curve. The diagnostic accuracy of 4AT, compared to the CAM, was evaluated by comparing positive and negative predictive values, sensitivity, and specificity using Fisher’s exact test. The overall performance of 4AT and CAM was summarized using Youden’s Index and the diagnostic odds ratio of sensitivity to specificity.<br/><br/><strong><em>Results:</em></strong> All 843 individuals enrolled in the study were randomized and 785 were included in the analysis (23 withdrew, 3 lost contact, 32 indeterminate diagnosis, 2 missing outcome). Of the participants analyzed, the mean age was 81.4 [6.4] years, and 12.1% (95/785) had delirium by reference standard assessment, 14.3% (56/392) by 4AT, and 4.7% (18/384) by CAM. The 4AT group had an area under the receiver operating characteristic curve of 0.90 (95% CI, 0.84-0.96), a sensitivity of 76% (95% CI, 61%-87%), and a specificity of 94% (95% CI, 92%-97%). In comparison, the CAM group had a sensitivity of 40% (95% CI, 26%-57%) and a specificity of 100% (95% CI, 98%-100%). </p> <p><strong><em>Conclusions:</em></strong> The 4AT is a pragmatic screening test for delirium in a medical space that does not require special training to administer. The use of this instrument may help to improve delirium detection as a part of routine clinical care in hospitalized older adults.</p> <p class="sub1">Commentary</p> <p>Delirium is an acute confusional state marked by fluctuating mental status, inattention, disorganized thinking, and altered level of consciousness. It is exceedingly common in older patients in both surgical and medical settings and is associated with increased morbidity, mortality, hospital length of stay, institutionalization, and health care costs. Delirium is frequently underdiagnosed in the hospitalized setting, perhaps due to a combination of its waxing and waning nature and a lack of pragmatic and easily implementable screening tools that can be readily administered by clinicians and nonclinicians alike.<sup>1</sup> While the CAM is a well-validated instrument to diagnose delirium, it requires specific training in the rating of each of the cardinal features ascertained through a brief cognitive assessment and takes 5 to 10 minutes to complete. 
Taken together, given the high patient load for clinicians in the hospital setting, the validation and application of brief delirium screening instruments that can be reliably administered by nonphysicians and nonclinicians may enhance delirium detection in vulnerable patients and consequently improve their outcomes. </p> <p>In Study 1, Oberhaus et al approach the challenge of underdiagnosing delirium in the postoperative setting by investigating whether the widely accepted long-form CAM and an abbreviated 3-minute version, the 3D-CAM, provide similar delirium detection in older surgical patients. The authors found that both instruments were reliable tests individually (high interrater reliability) and had good overall agreement. However, the 3D-CAM was more likely to yield a positive diagnosis of delirium compared to the long-form CAM, consistent with its purpose as a screening tool with a high sensitivity. It is important to emphasize that the 3D-CAM takes less time to administer, but also requires less extensive training and clinical knowledge than the long-form CAM. Therefore, this instrument meets the prerequisite of a brief screening test that can be rapidly administered by nonclinicians, and if affirmative, followed by a more extensive confirmatory test performed by a clinician. Limitations of this study include a lack of a reference standard structured interview conducted by a physician-rater to better determine the true diagnostic accuracy of both 3D-CAM and CAM assessments, and the use of convenience sampling at a single center, which reduces the generalizability of its findings.<br/><br/>In a similar vein, Shenkin et al in Study 2 attempt to evaluate the utility of the 4AT instrument in diagnosing delirium in older medical inpatients by testing the diagnostic accuracy of the 4AT against a reference standard (ie, <em>DSM-IV</em><em>–</em>based evaluation by physicians) as well as comparing it to CAM. The 4AT takes less time (~2 minutes) and requires less knowledge and training to administer as compared to the CAM. The study showed that the abbreviated 4AT, compared to CAM, had a higher sensitivity (76% vs 40%) and lower specificity (94% vs 100%) in delirium detection. Thus, akin to the application of 3D-CAM in the postoperative setting, 4AT possesses key characteristics of a brief delirium screening test for older patients in the acute medical setting. In contrast to the Oberhaus et al study, a major strength of this study was the utilization of a reference standard that was validated by expert consensus. This allowed the 4AT and CAM assessments to be compared to a more objective standard, thereby directly testing their diagnostic performance in detecting delirium. </p> <p class="sub1">Application for Clinical Practice and System Implementation </p> <p>The findings from both Study 1 and 2 suggest that using an abbreviated delirium instrument in both surgical and acute medical settings may provide a pragmatic and sensitive method to detect delirium in older patients. The brevity of administration of 3D-CAM (~3 minutes) and 4AT (~2 minutes), combined with their higher sensitivity for detecting delirium compared to CAM, make these instruments potentially effective rapid screening tests for delirium in hospitalized older patients. Importantly, the utilization of such instruments might be a feasible way to mitigate the issue of underdiagnosing delirium in the hospital. 
Several additional aspects of these abbreviated delirium instruments increase their suitability for clinical application. Specifically, the 3D-CAM and 4AT require less extensive training and clinical knowledge to administer and interpret than the CAM.2 For instance, multistage, multiday training for the CAM is a key factor in maintaining its diagnostic accuracy.3,4 In contrast, the 3D-CAM requires only a 1- to 2-hour training session, and the 4AT can be administered by a nonclinician without instrument-specific training. Thus, implementation of these instruments can be particularly pragmatic in clinical settings in which the staff involved in delirium screening cannot undergo the substantial training required to administer the CAM. Moreover, these abbreviated tests enable nonphysician care team members to assume the role of delirium screener in the hospital. Taken together, the adoption of these abbreviated instruments may facilitate brief delirium screening of older patients by the caregivers who see them most often (nurses and certified nursing assistants), thereby improving early detection and prevention of delirium-related complications in the hospital.

The feasibility of using abbreviated delirium screening instruments in the hospital setting raises a system implementation question: if these instruments are designed to be administered by those with limited to no training, could nonclinicians, such as hospital volunteers, effectively take on delirium screening roles in the hospital? If so, integrating hospital volunteers into the clinical team could greatly expand the capacity for delirium screening in the hospital setting. Further research is warranted to validate the diagnostic accuracy of the 3D-CAM and 4AT when administered by nonclinicians in order to more broadly adopt this approach to delirium screening.

Practice Points

- Abbreviated delirium screening tools such as the 3D-CAM and 4AT may be pragmatic instruments to improve delirium detection in surgical and hospitalized older patients, respectively.
- Further studies are warranted to validate the diagnostic accuracy of the 3D-CAM and 4AT when administered by nonclinicians in order to more broadly adopt this approach to delirium screening.

Jared Doan, BS, and Fred Ko, MD
Geriatrics and Palliative Medicine, Icahn School of Medicine at Mount Sinai
doi:10.12788/jcom.0111

References

1. Fong TG, Tulebaev SR, Inouye SK. Delirium in elderly adults: diagnosis, prevention and treatment. Nat Rev Neurol. 2009;5(4):210-220. doi:10.1038/nrneurol.2009.24
2. Marcantonio ER, Ngo LH, O'Connor M, et al. 3D-CAM: derivation and validation of a 3-minute diagnostic interview for CAM-defined delirium: a cross-sectional diagnostic test study. Ann Intern Med. 2014;161(8):554-561. doi:10.7326/M14-0865
3. Green JR, Smith J, Teale E, et al. Use of the confusion assessment method in multicentre delirium trials: training and standardisation. BMC Geriatr. 2019;19(1):107. doi:10.1186/s12877-019-1129-8
4. Wei LA, Fearing MA, Sternberg EJ, Inouye SK. The Confusion Assessment Method: a systematic review of current usage. J Am Geriatr Soc. 2008;56(5):823-830. doi:10.1111/j.1532-5415.2008.01674.x

Using a Real-Time Prediction Algorithm to Improve Sleep in the Hospital


Study Overview

Objective: This study evaluated whether a clinical-decision-support (CDS) tool that utilizes a real-time algorithm incorporating patient vital sign data can identify hospitalized patients who can forgo overnight vital sign checks and thus reduce delirium incidence.

Design: This was a parallel randomized clinical trial of adult inpatients admitted to the general medical service of a tertiary care academic medical center in the United States. The trial intervention consisted of a CDS notification in the electronic health record (EHR) that informed the physician if a patient had a high likelihood of nighttime vital signs within the reference ranges based on a logistic regression model of real-time patient data input. This notification provided the physician an opportunity to discontinue nighttime vital sign checks, dismiss the notification for 1 hour, or dismiss the notification until the next day.
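To illustrate the mechanics, the sketch below shows how a logistic model score could gate such a notification. This is a minimal illustration only: the predictor names, coefficients, intercept, and the 0.9 probability threshold are assumptions, not the trial's published model.

import math

def p_stable_overnight(features, coefs, intercept):
    # Logistic regression score: probability that all nighttime
    # vital signs will fall within reference ranges.
    z = intercept + sum(coefs[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical coefficients -- NOT the published model.
coefs = {"age": -0.01, "max_heart_rate_24h": -0.03,
         "min_systolic_bp_24h": 0.02, "any_abnormal_vitals_24h": -0.9}
intercept = 3.5

patient = {"age": 53, "max_heart_rate_24h": 88,
           "min_systolic_bp_24h": 118, "any_abnormal_vitals_24h": 0}

p = p_stable_overnight(patient, coefs, intercept)
if p >= 0.9:  # fire the EHR notification only when the model is confident
    print(f"CDS notification: P(stable overnight) = {p:.2f}; "
          "physician may discontinue nighttime vitals, snooze 1 hour, or dismiss for today.")

The key design choice is that the model never acts on its own; it only surfaces a recommendation that the physician can accept, defer, or dismiss.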

Setting and participants: This clinical trial was conducted at the University of California, San Francisco Medical Center from March 11 to November 24, 2019. Participants included physicians who served on the primary team (eg, attending, resident) of 1699 patients on the general medical service who were outside of the intensive care unit (ICU). The hospital encounters were randomized (allocation ratio of 1:1) to sleep promotion vitals CDS (SPV CDS) intervention or usual care.
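For readers unfamiliar with encounter-level randomization, a permuted-block scheme is one common way to keep a 1:1 allocation balanced as encounters accrue. The sketch below is a generic illustration under that assumption, not the trial's documented procedure.

import random

def block_randomize(n_encounters, block_size=4, seed=42):
    # 1:1 allocation within permuted blocks keeps arm sizes balanced
    # even if enrollment stops mid-stream.
    rng = random.Random(seed)
    assignments = []
    while len(assignments) < n_encounters:
        block = ["SPV CDS", "usual care"] * (block_size // 2)
        rng.shuffle(block)
        assignments.extend(block)
    return assignments[:n_encounters]

print(block_randomize(8))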

Main outcome and measures: The primary outcome was delirium as determined by bedside nurse assessment using the Nursing Delirium Screening Scale (Nu-DESC) recorded once per nursing shift. The Nu-DESC is a standardized delirium screening tool that defines delirium with a score ≥2. Secondary outcomes included sleep opportunity (ie, EHR-based sleep metrics that reflected the maximum time between iatrogenic interruptions, such as nighttime vital sign checks) and patient satisfaction (ie, patient satisfaction measured by standardized Hospital Consumer Assessment of Healthcare Providers and Systems [HCAHPS] survey). Potential balancing outcomes were assessed to ensure that reduced vital sign checks were not causing harms; these included ICU transfers, rapid response calls, and code blue alarms. All analyses were conducted on the basis of intention-to-treat.
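The sleep opportunity metric, as described, is the longest gap between iatrogenic interruptions overnight. A minimal sketch of that computation follows; the nighttime window boundaries and the example timestamps are illustrative assumptions, not the study's exact definition.

from datetime import datetime

def sleep_opportunity_hours(interruptions, night_start, night_end):
    # Longest uninterrupted interval (in hours) between iatrogenic
    # interruptions within the nighttime window.
    events = sorted([night_start, *interruptions, night_end])
    longest = max(b - a for a, b in zip(events, events[1:]))
    return longest.total_seconds() / 3600

night_start = datetime(2019, 6, 1, 22, 0)      # 10 pm
night_end = datetime(2019, 6, 2, 6, 0)         # 6 am
vitals_checks = [datetime(2019, 6, 2, 0, 30)]  # one overnight vitals check

print(sleep_opportunity_hours(vitals_checks, night_start, night_end))  # 5.5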

Main results: A total of 3025 inpatient encounters were screened and 1930 encounters were randomized (966 SPV CDS intervention; 964 usual care). The randomized encounters consisted of 1699 patients; demographic factors between the 2 trial arms were similar. Specifically, the intervention arm included 566 men (59%) and mean (SD) age was 53 (15) years. The incidence of delirium was similar between the intervention and usual care arms: 108 (11%) vs 123 (13%) (P = .32). Compared to the usual care arm, the intervention arm had a higher mean (SD) number of sleep opportunity hours per night (4.95 [1.45] vs 4.57 [1.30], P < .001) and fewer nighttime vital sign checks (0.97 [0.95] vs 1.41 [0.86], P < .001). The post-discharge HCAHPS survey measuring patient satisfaction was completed by only 5% of patients (53 intervention, 49 usual care), and survey results were similar between the 2 arms (P = .86). In addition, safety outcomes including ICU transfers (49 [5%] vs 47 [5%], P = .92), rapid response calls (68 [7%] vs 55 [6%], P = .27), and code blue alarms (2 [0.2%] vs 9 [0.9%], P = .07) were similar between the study arms.

Conclusion: In this randomized clinical trial, a CDS tool utilizing a real-time prediction algorithm embedded in EHR did not reduce the incidence of delirium in hospitalized patients. However, this SPV CDS intervention helped physicians identify clinically stable patients who can forgo routine nighttime vital sign checks and facilitated greater opportunity for patients to sleep. These findings suggest that augmenting physician judgment using a real-time prediction algorithm can help to improve sleep opportunity without an accompanying increased risk of clinical decompensation during acute care.


Commentary

High-quality sleep is fundamental to health and well-being. Sleep deprivation and disorders are associated with many adverse health outcomes, including increased risks for obesity, diabetes, hypertension, myocardial infarction, and depression.1 In hospitalized patients who are acutely ill, restorative sleep is critical to facilitating recovery. However, poor sleep is exceedingly common in hospitalized patients and is associated with deleterious outcomes, such as high blood pressure, hyperglycemia, and delirium.2,3 Moreover, some of these adverse sleep-induced cardiometabolic outcomes, as well as sleep disruption itself, may persist after hospital discharge.4 Factors that precipitate interrupted sleep during hospitalization include iatrogenic causes such as frequent vital sign checks, nighttime procedures or early morning blood draws, and environmental factors such as loud ambient noise.3 Thus, a potential intervention to improve sleep quality in the hospital is to reduce nighttime interruptions such as frequent vital sign checks.

In the current study, Najafi and colleagues conducted a randomized trial to evaluate whether a CDS tool embedded in EHR, powered by a real-time prediction algorithm of patient data, can be utilized to identify patients in whom vital sign checks can be safely discontinued at nighttime. The authors found a modest but statistically significant reduction in the number of nighttime vital sign checks in patients who underwent the SPV CDS intervention, and a corresponding higher sleep opportunity per night in those who received the intervention. Importantly, this reduction in nighttime vital sign checks did not cause a higher risk of clinical decompensation as measured by ICU transfers, rapid response calls, or code blue alarms. Thus, the results demonstrated the feasibility of using a real-time, patient data-driven CDS tool to augment physician judgment in managing sleep disruption, an important hospital-associated stressor and a common hazard of hospitalization in older patients.

Delirium is a common clinical problem in hospitalized older patients that is associated with prolonged hospitalization, functional and cognitive decline, institutionalization, death, and increased health care costs.5 Despite a potential benefit of SPV CDS intervention in reducing vital sign checks and increasing sleep opportunity, this intervention did not reduce the incidence of delirium in hospitalized patients. This finding is not surprising given that delirium has a multifactorial etiology (eg, metabolic derangements, infections, medication side effects and drug toxicity, hospital environment). A small modification in nighttime vital sign checks and sleep opportunity may have limited impact on optimizing sleep quality and does not address other risk factors for delirium. As such, a multicomponent nonpharmacologic approach that includes sleep enhancement, early mobilization, feeding assistance, fluid repletion, infection prevention, and other interventions should guide delirium prevention in the hospital setting. The SPV CDS intervention may play a role in the delivery of a multifaceted, nonpharmacologic delirium prevention intervention in high-risk individuals.

Sleep disruption is one of the multiple hazards of hospitalization frequently experienced by hospitalized older patients. Other hazards, or hospital-associated stressors, include mobility restriction (eg, physical restraints such as urinary catheters and intravenous lines, bed elevation and rails), malnourishment and dehydration (eg, frequent use of no-food-by-mouth orders, lack of easy access to hydration), and pain (eg, poor pain control). Extended exposure to these stressors may lead to a maladaptive state called allostatic overload that transiently increases vulnerability to post-hospitalization adverse events, including emergency department use, hospital readmission, or death (ie, post-hospital syndrome).6 Thus, optimizing sleep during hospitalization in vulnerable patients may have benefits that extend beyond delirium prevention. It is conceivable that a CDS tool embedded in the EHR, powered by a real-time prediction algorithm of patient data, may be applied to reduce some of these hazards of hospitalization in addition to improving sleep opportunity.

Applications for Clinical Practice

Findings from the current study indicate that a CDS tool embedded in the EHR that utilizes a real-time prediction algorithm of patient data may help to safely improve sleep opportunity in hospitalized patients. The participants in the current study were relatively young (mean [SD] age, 53 [15] years). Given that age is a risk factor for delirium, the effects of this intervention on delirium prevention in the most susceptible population (ie, those over the age of 65) remain unknown, and further investigation is warranted. Additional studies are needed to determine whether this approach yields similar results in geriatric patients and improves clinical outcomes.

—Fred Ko, MD

References

1. Institute of Medicine (US) Committee on Sleep Medicine and Research. Sleep Disorders and Sleep Deprivation: An Unmet Public Health Problem. Colten HR, Altevogt BM, editors. National Academies Press (US); 2006.

2. Pilkington S. Causes and consequences of sleep deprivation in hospitalised patients. Nurs Stand. 2013;27(49):35-42. doi:10.7748/ns2013.08.27.49.35.e7649

3. Stewart NH, Arora VM. Sleep in hospitalized older adults. Sleep Med Clin. 2018;13(1):127-135. doi:10.1016/j.jsmc.2017.09.012

4. Altman MT, Knauert MP, Pisani MA. Sleep disturbance after hospitalization and critical illness: a systematic review. Ann Am Thorac Soc. 2017;14(9):1457-1468. doi:10.1513/AnnalsATS.201702-148SR

5. Oh ES, Fong TG, Hshieh TT, Inouye SK. Delirium in older persons: advances in diagnosis and treatment. JAMA. 2017;318(12):1161-1174. doi:10.1001/jama.2017.12067

6. Goldwater DS, Dharmarajan K, McEwen BS, Krumholz HM. Is posthospital syndrome a result of hospitalization-induced allostatic overload? J Hosp Med. 2018;13(5). doi:10.12788/jhm.2986



Preoperative Advance Care Planning for Older Adults Undergoing High-Risk Surgery: An Essential but Underutilized Aspect of Clinical Care


Study Overview

Objective. The objectives of this study were to (1) quantify the frequency of preoperative advance care planning (ACP) discussion and documentation for older adults undergoing major surgery in a national sample, and (2) characterize how surgical patients and their family members considered ACP after postoperative complications.

Design. A secondary analysis of data from a multisite randomized clinical trial testing the effects of a question prompt list intervention (a Question Prompt List [QPL] brochure with 11 questions) on preoperative communication between surgeons and patients aged 60 years or older undergoing high-risk surgery.

Setting and participants. This multisite randomized controlled trial involved 5 study sites that encompassed distinct US geographic areas, including University of Wisconsin Hospital and Clinics (UWHC), Madison; the University of California, San Francisco, Medical Center (UCSF); Oregon Health & Science University (OHSU), Portland; the University Hospital of Rutgers New Jersey Medical School (Rutgers), Newark; and the Brigham and Women's Hospital (BWH), Boston, Massachusetts. The study enrolled, via purposeful sampling, 40 surgeons who routinely performed high-risk oncological or vascular surgery; patients aged 60 years or older with at least 1 comorbidity and an oncological or vascular problem that was treatable with high-risk surgery; and 1 invited family member per enrolled patient to participate in open-ended interviews postsurgery. High-risk surgery was defined as an operation with a 30-day in-hospital mortality rate greater than or equal to 1%. Data were collected from June 1, 2016, to November 30, 2018.

Main outcome measures. The frequency of preoperative discussions and documentation of ACP was determined. For patients who had major surgery, any mention of ACP (ie, mention of advance directive [AD], health care power of attorney, or preference for limitations of life-sustaining treatments) by the surgeon, patient or family member during the audio recorded, transcribed, and coded preoperative consultation was counted. The presence of a written AD in the medical record at the time of the initial consultation, filed between the consultation and the date of surgery, or added postoperatively, was recorded using a standardized abstraction form. Postoperative treatments administered and complications experienced within 6 weeks after surgery were recorded. Open-ended interviews with patients who experienced significant postoperative complications (eg, prolonged hospitalization > 8 days, intensive care unit stay > 3 days) and their family members were conducted 6 weeks after surgery. Information ascertained during interviews focused on treatment decisions, postoperative experiences, and interpersonal relationships among patients, families, and clinicians. Transcripts of these interviews were then subjected to qualitative content analysis.
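The counting of ACP mentions was performed by human coders working from transcribed consultations. Purely as an illustration of that screening step, a keyword flag like the one below could surface transcripts for review; the phrase list is an assumption for illustration, not the study's codebook.

ACP_TERMS = ("advance directive", "power of attorney",
             "life-sustaining", "living will")

def mentions_acp(transcript_text):
    # Crude screen for ACP-related phrases; the study itself
    # relied on trained human coders, not keyword matching.
    text = transcript_text.lower()
    return any(term in text for term in ACP_TERMS)

consults = {
    "patient_001": "We should confirm who holds your health care power of attorney.",
    "patient_002": "The operation itself will take about four hours.",
}
print([pid for pid, text in consults.items() if mentions_acp(text)])  # ['patient_001']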

Main results. A total of 446 patients were enrolled in the primary study. Of these patients, 213 (122 men [57%]; 91 women [43%]; mean [SD] age, 72 [7] years) underwent major surgery. Only 13 (6.1%) of those who had major surgery had any discussion related to ACP in the preoperative consultation. In this cohort, 141 (66%) patients did not have an AD on file before undergoing major surgery. The presence of AD was not associated with age (60-69 years, 26 [31%]; 70-79 years, 31 [33%]; ≥ 80 years, 15 [42%]; P = .55), number of comorbidities (1, 35 [32%]; 2, 18 [33%]; ≥ 3, 19 [40%]; P = .62), or type of procedure (oncological, 53 [32%]; vascular, 19 [42%]; P = .22). Moreover, there was no difference in preoperative communication about ACP or documentation of an AD for patients who were mailed a QPL brochure compared to those who received usual care (intervention, 38 [35%]; usual care, 34 [33%]; P = .77). Rates of AD documentation were associated with individual study sites with BWH and UWHC having higher rates of documentation (20 [50%] and 27 [44%], respectively) compared to OHSU, UCSF, or Rutgers (7 [17%], 17 [35%], and 1 [5%], respectively). Analysis from the interviews indicated that patients and families felt unprepared for serious surgical complications and had varied interpretations of ACP. Patients with complications were enthusiastic about ACP but did not think it was important to discuss their preferences for life-sustaining treatments with their surgeon preoperatively.

Conclusion. Although surgeons and patients report that they believe ACP is important, preoperative discussion of patient preferences rarely occurs. This study found that the frequency of ACP discussions and AD documentation among older patients undergoing high-risk oncologic or vascular surgery was low. Interventions aimed at increasing rates of preoperative ACP discussions should be implemented to help prepare patients and their families for difficult decisions in the setting of serious surgical complications and could help decrease postoperative conflicts that result from unclear patient care goals.

Commentary

Surgeons and patients approach surgical interventions with optimistic outlooks while simultaneously preparing for unintended adverse outcomes. For patients, preoperative ACP discussions ease the burden on their families and ensure their wishes and care goals are communicated. For surgeons, these discussions inform them how best to support the values of the patient. Therefore, it is unsurprising that preoperative ACP is viewed favorably by both groups. Given the consensus that ACP is important in the care of older adults undergoing high-risk surgery, one would assume that preoperative ACP discussion is a standard of practice among surgeons and their aging patients. However, in a secondary analysis of a randomized controlled trial testing a patient-mediated intervention to improve preoperative communication, Kalbfell et al1 showed that ACP discussions rarely take place prior to major surgery in older adults. This finding highlights the significant discrepancy between the belief that ACP is important and the actual rate at which it is practiced in older patients undergoing high-risk surgery. This discordance is highly concerning because it suggests that surgeons who provide care to a very vulnerable subset of older patients may overlook an essential aspect of preoperative care and therefore lack a thorough and thoughtful understanding of the patient's care goals. In practice, this omission can pose significant challenges for the surgeon's and family's decisions to use postoperative life-sustaining interventions or to manage unforeseen complications should a patient become unable to make medical decisions.


The barriers to conducting successful ACP discussions between surgeons and patients are multifactorial. Kalbfell et al1 highlighted several of these barriers, including lack of patient efficacy, physician attitudes, and institutional values in older adults who require major surgeries. The inadequacy of patient efficacy in preoperative ACP is illustrated by findings from the primary, multisite trial of the QPL intervention conducted by Schwarze et al. Interestingly, the authors found that patients who did not receive the QPL brochure had no ACP discussions, and that QPL implementation did not significantly improve discussion rates despite its intent to encourage these discussions.2 Possible explanations for this lack of engagement might be a lack of health literacy or patient efficacy in the study population. Qualitative data from the current study provided further evidence to support these explanations. For instance, some patients provided limited or incomplete information about their wishes for health care management, while others felt it was unnecessary to have ACP discussions unless complications arose.1 However, the latter view runs counter to the purpose of ACP, which is to enable patients to plan future health care proactively rather than in reaction to a medical complication or emergency.

Surgeons bear a large responsibility in providing treatments that are consistent with the care goals of the patient. Thus, surgeons play a crucial role in engaging, guiding, and facilitating ACP discussions with patients. This role is even more critical when patients are unable or unwilling to initiate care goal discussions. Physician attitudes towards ACP, therefore, greatly influence the effectiveness of these discussions. In a study of self-administered surveys by vascular, neurologic, and cardiothoracic surgeons, greater than 90% of respondents viewed postoperative life-supporting therapy as necessary, and 54% would decline to operate on patients with an AD limiting life-supporting therapy.3 Moreover, the same study showed that 52% of respondents reported discussing AD before surgery, a figure that exceeded the actual rates at which ACP discussions occur in many other studies. In the current study, Kalbfell et al1 also found that surgeons viewed ACP discussions largely in the context of AD creation and declined to investigate the full scope of patient preferences. These findings, when combined with other studies that indicate an incomplete understanding of ACP in some surgeons, suggest that not all physicians are able or willing to navigate these sometimes lengthy and difficult conversations with patients. This gap in practice provides opportunities for training in surgical specialties that center on optimizing preoperative ACP discussions to meet the care needs of older patients.

Institutional value and culture are important factors that impact physician behavior and the practice of ACP discussion. In the current study, the authors reported that the majority of ACP discussions were held by a minority of surgeons and that different institutions and study sites had vastly different rates of ACP documentation.1 These results are further supported by findings of large variations between physicians and hospitals in ACP reporting in hospitalized frail older adults.4 These variations in practices at different institutions suggest that it is possible to improve rates of preoperative ACP discussion. Reasons for these differences need to be further investigated in order to identify strategies, resources, or trainings required by medical institutions to support surgeons to carry out ACP discussions with patients undergoing high-risk surgeries.

The study conducted by Kalbfell et al1 has several strengths. For example, it included Spanish-speaking patients and the use of a Spanish version of the QPL intervention to account for cultural differences. The study also included multiple surgical specialties and institutions and captured a large and national sample, thus making its findings more generalizable. However, the lack of data on the duration of preoperative consultation visits in patients who completed ACP discussions poses a limitation to this study. This is relevant because surgeon availability to engage in lengthy ACP discussions may be limited due to busy clinical schedules. Additional data on the duration of preoperative visits inclusive of a thoughtfully conducted ACP discussion could help to modify clinical workflow to facilitate its uptake in surgical practices.

Applications for Clinical Practice

The findings from the current study indicate that patients and surgeons agree that preoperative ACP discussions are beneficial to the clinical care of older adults before high-risk surgeries. However, these important conversations do not occur frequently. Surgeons and health care institutions need to identify strategies to initiate, facilitate, and optimize productive preoperative ACP discussions to provide patient-centered care in vulnerable older surgical patients.

Financial disclosures: None.

References

1. Kalbfell E, Kata A, Buffington AS, et al. Frequency of Preoperative Advance Care Planning for Older Adults Undergoing High-risk Surgery: A Secondary Analysis of a Randomized Clinical Trial. JAMA Surg. 2021;156(7):e211521. doi:10.1001/jamasurg.2021.1521

2. Schwarze ML, Buffington A, Tucholka JL, et al. Effectiveness of a Question Prompt List Intervention for Older Patients Considering Major Surgery: A Multisite Randomized Clinical Trial. JAMA Surg. 2020;155(1):6-13. doi:10.1001/jamasurg.2019.3778

3. Redmann AJ, Brasel KJ, Alexander CG, Schwarze ML. Use of advance directives for high-risk operations: a national survey of surgeons. Ann Surg. 2012;255(3):418-423. doi:10.1097/SLA.0b013e31823b6782

4. Hopkins SA, Bentley A, Phillips V, Barclay S. Advance care plans and hospitalized frail older adults: a systematic review. BMJ Support Palliat Care. 2020;10:164-174. doi:10.1136/bmjspcare-2019-002093

Article PDF
Issue
Journal of Clinical Outcomes Management - 28(5)
Publications
Topics
Page Number
196-199
Sections
Article PDF
Article PDF

Study Overview

Objective. The objectives of this study were to (1) quantify the frequency of preoperative advance care planning (ACP) discussion and documentation for older adults undergoing major surgery in a national sample, and (2) characterize how surgical patients and their family members considered ACP after postoperative complications.

Design. A secondary analysis of data from a multisite randomized clinical trial testing the effects of a question prompt list intervention (a Question Problem List [QPL] brochure with 11 questions) given to patients aged 60 years or older undergoing high-risk surgery on preoperative communication with their surgeons.

Setting and participants. This multisite randomized controlled trial involved 5 study sites that encompassed distinct US geographic areas, including University of Wisconsin Hospital and Clinics (UWHC), Madison; the University of California, San Francisco, Medical Center (UCSF); Oregon Health & Science University (OHSU), Portland; the University Hospital of Rutgers New Jersey Medical School (Rutgers), Newark; and the Brigham and Women’s Hospital (BWH), Boston, Massachusetts. The study enrolled 40 surgeons who routinely performed high-risk oncological or vascular surgery via purposeful sampling; patients aged 60 years or older with at least 1 comorbidity and an oncological or vascular problem that were treatable with high-risk surgery; and 1 invited family member per enrolled patient to participate in open-ended interviews postsurgery. High-risk surgery was defined as an operation that has a 30-day in-hospital mortality rate greater than or equal to 1%. Data were collected from June 1, 2016, to November 30, 2018.

Main outcome measures. The frequency of preoperative discussions and documentation of ACP was determined. For patients who had major surgery, any mention of ACP (ie, mention of advance directive [AD], health care power of attorney, or preference for limitations of life-sustaining treatments) by the surgeon, patient or family member during the audio recorded, transcribed, and coded preoperative consultation was counted. The presence of a written AD in the medical record at the time of the initial consultation, filed between the consultation and the date of surgery, or added postoperatively, was recorded using a standardized abstraction form. Postoperative treatments administered and complications experienced within 6 weeks after surgery were recorded. Open-ended interviews with patients who experienced significant postoperative complications (eg, prolonged hospitalization > 8 days, intensive care unit stay > 3 days) and their family members were conducted 6 weeks after surgery. Information ascertained during interviews focused on treatment decisions, postoperative experiences, and interpersonal relationships among patients, families, and clinicians. Transcripts of these interviews were then subjected to qualitative content analysis.

Main results. A total of 446 patients were enrolled in the primary study. Of these patients, 213 (122 men [57%]; 91 women [43%]; mean [SD] age, 72 [7] years) underwent major surgery. Only 13 (6.1%) of those who had major surgery had any discussion related to ACP in the preoperative consultation. In this cohort, 141 (66%) patients did not have an AD on file before undergoing major surgery. The presence of AD was not associated with age (60-69 years, 26 [31%]; 70-79 years, 31 [33%]; ≥ 80 years, 15 [42%]; P = .55), number of comorbidities (1, 35 [32%]; 2, 18 [33%]; ≥ 3, 19 [40%]; P = .62), or type of procedure (oncological, 53 [32%]; vascular, 19 [42%]; P = .22). Moreover, there was no difference in preoperative communication about ACP or documentation of an AD for patients who were mailed a QPL brochure compared to those who received usual care (intervention, 38 [35%]; usual care, 34 [33%]; P = .77). Rates of AD documentation were associated with individual study sites with BWH and UWHC having higher rates of documentation (20 [50%] and 27 [44%], respectively) compared to OHSU, UCSF, or Rutgers (7 [17%], 17 [35%], and 1 [5%], respectively). Analysis from the interviews indicated that patients and families felt unprepared for serious surgical complications and had varied interpretations of ACP. Patients with complications were enthusiastic about ACP but did not think it was important to discuss their preferences for life-sustaining treatments with their surgeon preoperatively.

Conclusion. Although surgeons and patients report that they believe ACP is important, preoperative discussion of patient preferences rarely occurs. This study found that the frequency of ACP discussions or AD documentations among older patients undergoing high-risk oncologic or vascular surgery was low. Interventions that are aimed to increase rates of preoperative ACP discussions should be implemented to help prepare patients and their families for difficult decisions in the setting of serious surgical complications and could help decrease postoperative conflicts that result from unclear patient care goals.

Commentary

Study Overview

Objective. The objectives of this study were to (1) quantify the frequency of preoperative advance care planning (ACP) discussion and documentation for older adults undergoing major surgery in a national sample, and (2) characterize how surgical patients and their family members considered ACP after postoperative complications.

Design. A secondary analysis of data from a multisite randomized clinical trial testing the effects of a question prompt list intervention (a Question Prompt List [QPL] brochure containing 11 questions), given to patients aged 60 years or older undergoing high-risk surgery, on preoperative communication with their surgeons.

Setting and participants. This multisite randomized controlled trial involved 5 study sites that encompassed distinct US geographic areas, including University of Wisconsin Hospital and Clinics (UWHC), Madison; the University of California, San Francisco, Medical Center (UCSF); Oregon Health & Science University (OHSU), Portland; the University Hospital of Rutgers New Jersey Medical School (Rutgers), Newark; and the Brigham and Women’s Hospital (BWH), Boston, Massachusetts. The study enrolled, via purposeful sampling, 40 surgeons who routinely performed high-risk oncological or vascular surgery; patients aged 60 years or older with at least 1 comorbidity and an oncological or vascular problem that was treatable with high-risk surgery; and 1 invited family member per enrolled patient to participate in open-ended interviews postsurgery. High-risk surgery was defined as an operation with a 30-day in-hospital mortality rate greater than or equal to 1%. Data were collected from June 1, 2016, to November 30, 2018.

Main outcome measures. The frequency of preoperative discussions and documentation of ACP was determined. For patients who had major surgery, any mention of ACP (ie, mention of advance directive [AD], health care power of attorney, or preference for limitations of life-sustaining treatments) by the surgeon, patient or family member during the audio recorded, transcribed, and coded preoperative consultation was counted. The presence of a written AD in the medical record at the time of the initial consultation, filed between the consultation and the date of surgery, or added postoperatively, was recorded using a standardized abstraction form. Postoperative treatments administered and complications experienced within 6 weeks after surgery were recorded. Open-ended interviews with patients who experienced significant postoperative complications (eg, prolonged hospitalization > 8 days, intensive care unit stay > 3 days) and their family members were conducted 6 weeks after surgery. Information ascertained during interviews focused on treatment decisions, postoperative experiences, and interpersonal relationships among patients, families, and clinicians. Transcripts of these interviews were then subjected to qualitative content analysis.

Main results. A total of 446 patients were enrolled in the primary study. Of these patients, 213 (122 men [57%]; 91 women [43%]; mean [SD] age, 72 [7] years) underwent major surgery. Only 13 (6.1%) of those who had major surgery had any discussion related to ACP in the preoperative consultation. In this cohort, 141 (66%) patients did not have an AD on file before undergoing major surgery. The presence of an AD was not associated with age (60-69 years, 26 [31%]; 70-79 years, 31 [33%]; ≥ 80 years, 15 [42%]; P = .55), number of comorbidities (1, 35 [32%]; 2, 18 [33%]; ≥ 3, 19 [40%]; P = .62), or type of procedure (oncological, 53 [32%]; vascular, 19 [42%]; P = .22). Moreover, there was no difference in preoperative communication about ACP or documentation of an AD for patients who were mailed a QPL brochure compared with those who received usual care (intervention, 38 [35%]; usual care, 34 [33%]; P = .77). Rates of AD documentation varied by study site, with BWH and UWHC having higher rates of documentation (20 [50%] and 27 [44%], respectively) than OHSU, UCSF, or Rutgers (7 [17%], 17 [35%], and 1 [5%], respectively). Analysis of the interviews indicated that patients and families felt unprepared for serious surgical complications and had varied interpretations of ACP. Patients with complications were enthusiastic about ACP but did not think it was important to discuss their preferences for life-sustaining treatments with their surgeon preoperatively.
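
The article does not specify which statistical test generated the P values above; a standard chi-square test of independence is one common choice for such cross-site comparisons. The minimal Python sketch below illustrates that calculation, with denominators inferred from the reported site-level percentages (and therefore only approximate, not the study's exact counts).

from scipy.stats import chi2_contingency

# Rows are study sites; columns are [AD on file, no AD on file].
# Counts are reconstructed from the reported percentages, so they
# are illustrative rather than the study's actual figures.
observed = [
    [20, 20],  # BWH, 50%
    [27, 34],  # UWHC, 44%
    [17, 32],  # UCSF, 35%
    [7, 34],   # OHSU, 17%
    [1, 19],   # Rutgers, 5%
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, P = {p:.4f}")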

Conclusion. Although surgeons and patients report that they believe ACP is important, preoperative discussion of patient preferences rarely occurs. This study found that the frequency of ACP discussion and AD documentation among older patients undergoing high-risk oncologic or vascular surgery was low. Interventions aimed at increasing rates of preoperative ACP discussion should be implemented to help prepare patients and their families for difficult decisions in the setting of serious surgical complications and could help decrease postoperative conflicts that result from unclear patient care goals.

Commentary

Surgeons and patients approach surgical interventions with optimistic outlooks while simultaneously preparing for unintended adverse outcomes. For patients, preoperative ACP discussions ease the burden on their families and ensure their wishes and care goals are communicated. For surgeons, these discussions inform how best to support the values of the patient. Therefore, it is unsurprising that preoperative ACP is viewed favorably by both groups. Given the consensus that ACP is important in the care of older adults undergoing high-risk surgery, one would assume that preoperative ACP discussion is a standard of practice among surgeons and their aging patients. However, in a secondary analysis of a randomized controlled trial testing a patient-mediated intervention to improve preoperative communication, Kalbfell et al1 showed that ACP discussions rarely take place prior to major surgery in older adults. This finding highlights the significant discrepancy between the belief that ACP is important and the rate at which it is actually practiced in older patients undergoing high-risk surgery. This discordance is concerning because it suggests that surgeons who care for a vulnerable subset of older patients may overlook an essential aspect of preoperative care and therefore lack a thorough understanding of the patient’s care goals. In practice, this omission can pose significant challenges when surgeons and families must decide whether to use postoperative life-sustaining interventions or how to manage unforeseen complications should a patient become unable to make medical decisions.

The barriers to conducting successful ACP discussions between surgeons and patients are multifactorial. Kalbfell et al1 highlighted several of these barriers in older adults who require major surgery, including limited patient self-efficacy, physician attitudes, and institutional values. The inadequacy of patient self-efficacy in preoperative ACP is illustrated by findings from the primary, multisite trial of the QPL intervention conducted by Schwarze et al. Interestingly, the authors found that patients who did not receive the QPL brochure had no ACP discussions, and that QPL implementation did not significantly improve discussion rates despite its intent to encourage these discussions.2 Possible explanations for this lack of engagement include limited health literacy or self-efficacy in the study population. Qualitative data from the current study provide further support for these explanations. For instance, some patients provided limited or incomplete information about their wishes for health care management, while others felt it was unnecessary to have ACP discussions unless complications arose.1 However, the latter view runs counter to the purpose of ACP, which is to enable patients to plan for future health care proactively rather than in reaction to a medical complication or emergency.

Surgeons bear a large responsibility for providing treatments that are consistent with the care goals of the patient. Thus, surgeons play a crucial role in engaging, guiding, and facilitating ACP discussions with patients. This role is even more critical when patients are unable or unwilling to initiate care goal discussions. Physician attitudes toward ACP therefore greatly influence the effectiveness of these discussions. In a survey study of vascular, neurologic, and cardiothoracic surgeons, more than 90% of respondents viewed postoperative life-supporting therapy as necessary, and 54% would decline to operate on patients with an AD limiting life-supporting therapy.3 Moreover, the same study showed that 52% of respondents reported discussing ADs before surgery, a figure that exceeds the rates at which ACP discussions have actually been observed to occur in many other studies. In the current study, Kalbfell et al1 also found that surgeons viewed ACP discussions largely in the context of AD creation and did not explore the full scope of patient preferences. These findings, combined with other studies indicating an incomplete understanding of ACP among some surgeons, suggest that not all physicians are able or willing to navigate these sometimes lengthy and difficult conversations with patients. This gap in practice presents an opportunity for training in the surgical specialties that centers on optimizing preoperative ACP discussions to meet the care needs of older patients.

Institutional values and culture are important factors that influence physician behavior and the practice of ACP discussion. In the current study, the authors reported that the majority of ACP discussions were held by a minority of surgeons and that different institutions and study sites had vastly different rates of ACP documentation.1 These results are further supported by findings of large variations between physicians and hospitals in ACP reporting for hospitalized frail older adults.4 These variations in practice across institutions suggest that it is possible to improve rates of preoperative ACP discussion. The reasons for these differences need to be investigated further to identify the strategies, resources, and training that medical institutions require to support surgeons in carrying out ACP discussions with patients undergoing high-risk surgery.

The study conducted by Kalbfell et al1 has several strengths. For example, it included Spanish-speaking patients and used a Spanish version of the QPL intervention to account for cultural differences. The study also included multiple surgical specialties and institutions and captured a large, national sample, making its findings more generalizable. However, the lack of data on the duration of preoperative consultation visits for patients who completed ACP discussions is a limitation, because surgeons’ availability for lengthy ACP discussions may be constrained by busy clinical schedules. Additional data on the duration of preoperative visits that include a thoughtfully conducted ACP discussion could help clinicians modify workflows to facilitate uptake of ACP in surgical practice.

Applications for Clinical Practice

The findings from the current study indicate that patients and surgeons agree that preoperative ACP discussions are beneficial to the clinical care of older adults before high-risk surgeries. However, these important conversations do not occur frequently. Surgeons and health care institutions need to identify strategies to initiate, facilitate, and optimize productive preoperative ACP discussions to provide patient-centered care in vulnerable older surgical patients.

Financial disclosures: None.

References

1. Kalbfell E, Kata A, Buffington AS, et al. Frequency of preoperative advance care planning for older adults undergoing high-risk surgery: a secondary analysis of a randomized clinical trial. JAMA Surg. 2021;156(7):e211521. doi:10.1001/jamasurg.2021.1521

2. Schwarze ML, Buffington A, Tucholka JL, et al. Effectiveness of a question prompt list intervention for older patients considering major surgery: a multisite randomized clinical trial. JAMA Surg. 2020;155(1):6-13. doi:10.1001/jamasurg.2019.3778

3. Redmann AJ, Brasel KJ, Alexander CG, Schwarze ML. Use of advance directives for high-risk operations: a national survey of surgeons. Ann Surg. 2012;255(3):418-423. doi:10.1097/SLA.0b013e31823b6782

4. Hopkins SA, Bentley A, Phillips V, Barclay S. Advance care plans and hospitalized frail older adults: a systematic review. BMJ Support Palliat Care. 2020;10:164-174. doi:10.1136/bmjspcare-2019-002093


Recent Trends in Diabetes Treatment and Control in US Adults: A Geriatrician’s Point of View

Article Type
Changed
Tue, 05/03/2022 - 15:04
Display Headline
Recent Trends in Diabetes Treatment and Control in US Adults: A Geriatrician’s Point of View

Study Overview

Objective. To update national trends in the treatment and risk factor control of US adults with diabetes from 1999 through 2018 using data from the National Health and Nutrition Examination Survey (NHANES), with the goal of identifying the population subgroups with the highest probability of having untreated risk factors.

Design. The authors conducted a cross-sectional analysis of data from NHANES focusing on adults with diabetes. They examined patient characteristics and medication use over time and estimated the prevalence of risk factor control and medication use. To minimize the effects of a small sample size, the survey years were pooled into 4-year intervals. The variables studied included glycated hemoglobin (HbA1c), blood pressure, serum cholesterol, medication use, sociodemographic characteristics, and weight status. For statistical analysis, logistic and multinomial logistic regression models were used to examine factors associated with treatment in participants who did not achieve targets for glycemic, blood pressure, and lipid control. Temporal trends were estimated using 2-piece linear spline models with 1 knot at inflection points.
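
To illustrate the trend-estimation approach described above, the following minimal sketch (not the authors' code) fits a 2-piece linear spline with a single knot by ordinary least squares. The knot placement at 2010 and the data values are assumptions for illustration; the percentages loosely echo the glycemic control figures reported below.

import numpy as np

# Midpoints of pooled 4-year survey intervals (illustrative).
years = np.array([2000.5, 2004.5, 2008.5, 2012.5, 2016.5])
# Percent of participants with glycemic control (toy values).
control = np.array([44.0, 52.0, 57.4, 55.0, 50.5])

knot = 2010.0
# Design matrix: intercept, linear trend, and a hinge term that is zero
# before the knot, letting the slope change at the inflection point.
X = np.column_stack([
    np.ones_like(years),
    years - years.min(),
    np.maximum(years - knot, 0.0),
])
beta, *_ = np.linalg.lstsq(X, control, rcond=None)
slope_before = beta[1]
slope_after = beta[1] + beta[2]
print(f"slope before {knot:.0f}: {slope_before:+.2f} points/yr; "
      f"after: {slope_after:+.2f} points/yr")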

Setting and participants. The NHANES program began in the early 1960s to monitor the health of the US population. In 1999, the survey became a continuous program combining interviews and physical examinations. The survey examines a nationally representative sample of about 5000 persons each year. This study included 6653 participants who were nonpregnant, were older than 20 years, reported a physician diagnosis of diabetes, and participated in NHANES from 1999 through 2018.

Main outcome measures. The main outcome measures were temporal trends in risk factor control (glycemic, blood pressure, or lipid levels) and in medication use (glucose lowering, blood pressure lowering, or lipid lowering medications), including the number and class of drugs used, from 1999 through 2018 in US adults with diabetes participating in NHANES.

Results. Sociodemographic characteristics of the studied diabetes population—The age and racial or ethnic distributions of participants with diabetes were stable from 1999 through 2018, whereas the proportions of participants with a college degree, higher income, health insurance, obesity, or long-standing diabetes increased during the same period.

Trends in diabetes risk factor control—The trends for glycemic, blood pressure, and lipid control were nonlinear, with an inflection point around 2010. Glycemic control was defined as HbA1c less than 7%, blood pressure was considered controlled if less than 140/90 mm Hg, and lipids were considered controlled if the non-HDL cholesterol level was less than 130 mg/dL. Although these targets were based on the most recent clinical guidelines, the authors noted that similar trends were observed when alternative targets were used. Risk factor control improved in all diabetic patients from 1999 through 2010. However, the percentage of adult diabetic participants who achieved glycemic control declined from 57.4% (95% CI, 52.9-61.8) in 2007-2010 to 50.5% (95% CI, 45.8-55.3) in 2015-2018. Blood pressure control was achieved in 74.2% of participants (95% CI, 70.7-77.4) in 2011-2014 but declined to 70.4% (95% CI, 66.7-73.8) in 2015-2018. Lipid control improved during the entire study period; however, the rate of improvement slowed markedly after 2007, with lipid targets attained in 52.3% of participants (95% CI, 49.2-55.3) in 2007-2014 and 55.7% (95% CI, 50.8-60.5) in 2015-2018. Finally, the percentage of participants in whom targets for all 3 risk factors were simultaneously achieved plateaued after 2010 and was 22.2% (95% CI, 17.9-27.3) in 2015-2018.
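
To make the stated thresholds concrete, here is a minimal sketch of how a participant's risk factor control might be classified; the function and its inputs are hypothetical constructs for illustration, not NHANES variable names.

def risk_factors_controlled(hba1c_pct, sbp_mmhg, dbp_mmhg, non_hdl_mgdl):
    """Classify control status using the targets described above."""
    glycemic = hba1c_pct < 7.0                          # HbA1c < 7%
    blood_pressure = sbp_mmhg < 140 and dbp_mmhg < 90   # BP < 140/90 mm Hg
    lipid = non_hdl_mgdl < 130                          # non-HDL < 130 mg/dL
    return {
        "glycemic": glycemic,
        "blood_pressure": blood_pressure,
        "lipid": lipid,
        "all_three": glycemic and blood_pressure and lipid,
    }

# Example: blood pressure and lipids at target, glycemia above target.
print(risk_factors_controlled(7.8, 128, 76, 118))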

Trends in diabetes treatment—The use of glucose lowering drugs increased from 74.1% in 1999-2002 to 82.7% in 2007-2010 and then stabilized. A shift toward safer glucose lowering treatment choices was observed, with a decline in the use of older glucose lowering medications such as sulfonylureas, which increase the risk of hypoglycemia, and an increase in the use of metformin, insulin, and newer agents such as sodium-glucose cotransporter 2 inhibitors.

Similarly, blood pressure lowering medication use rose from 1999-2002 to 2007-2010 and then stabilized, with increased use of first-line recommended treatments, including angiotensin-converting enzyme inhibitors and angiotensin-receptor blockers. Likewise, statin use rose from 28.4% in 1999-2002 to 56% in 2011-2014 and then stabilized. The total number of drugs used peaked in 2011-2014, when 60% of participants used more than 5 drugs, and then leveled off at 57.2% in 2015-2018. Lastly, health insurance status and race or ethnicity affected the likelihood of receiving monotherapy or combination drug therapy when targets for glycemic, blood pressure, or lipid control were not achieved.

Conclusion. Despite great progress in the control of diabetes and its associated risk factors between 1999 and 2010, these gains subsequently reversed for glycemic and blood pressure control and leveled off for lipid control in adult NHANES participants with diabetes. First-line treatments for diabetes and associated risk factors remain underused, and treatment intensification may not be sufficiently considered in patients with uncontrolled risk factors despite clinical guideline recommendations. The findings of this study may portend a population-level increase in diabetes-related illnesses in the years to come.

Commentary

A thorough understanding of trends in disease management is critical to informing public health policy and planning. Well-designed clinical studies heavily influence the development of public health policies and clinical guidelines, which in turn drive real-world clinical practice. In a recent analysis utilizing data from NHANES, Fang et al1 showed evidence of a general shift toward less intensive treatment of diabetes, hypertension, and hypercholesterolemia in US adults during the last decade.

Similarly, in a separate study using NHANES data collected between 1999 and 2018, published in JAMA just 2 weeks after the current report, Wang et al2 confirmed this declining trend in diabetes management, with only 21.2% of diabetic adults simultaneously attaining glycemic, blood pressure, and lipid level targets during the same period. What led to the decline, observed in these studies since 2010, in stringent management of diabetes and its risk factors? One possible explanation, as suggested by Fang et al, is that major clinical trials from the late 2000s—including Action to Control Cardiovascular Risk in Diabetes, UK Prospective Diabetes Study, Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation, and the Veterans Affairs Diabetes Trial—that assessed the effects of intensive glycemic control (target HbA1c < 6.5%) found that intensive treatment of diabetes, compared with standard care, conferred no cardiovascular benefit while increasing the risk of hypoglycemia. These trial findings may thus have translated into the less intensive diabetes treatment observed in some NHANES participants. Wang et al propose that effective tailored approaches are needed to improve risk factor control in diabetic patients, such as enhancing and maintaining adherence to medications and healthy lifestyle behaviors, as well as improving access to health care and therapeutic education.

The changes in recent trends in diabetes management have immense clinical implications. The authors of this study suggest a link between the recent relaxation of glycemic targets and risk factor control and a resurgence of diabetic complications such as lower limb amputation and stroke. Indeed, several recent studies indicate an upward trend or plateau in diabetic complications that had been decreasing in prevalence before 2010.3 For example, lower extremity amputation surged by more than 25% between 2010 and 2015, especially in young and middle-aged adults.4 Among the arguments that this resurgence in amputations is directly linked to worsening glycemic control is the fact that between 2007 and 2010, when glucose levels were best controlled within the previous 30-year period, amputations were also at their lowest levels. Moreover, data from the Centers for Disease Control and Prevention show a 55% increase in mortality (from 15.7 to 24.2 per 1000) among diabetic patients between 2010 and 2015.14 On the other hand, a growing number of studies show that inappropriate treatment intensification—reaching HbA1c levels well below the recommended targets—is associated with adverse consequences in diabetic patients, particularly those older than 65 years.5-7 These seemingly contradictory findings highlight the importance of a personalized and thoughtful approach to the management of diabetes and its risk factors. As an example, increased use of newer and safer glucose lowering drugs (eg, sodium-glucose cotransporter 2 inhibitors, glucagon-like peptide 1 receptor agonists, and dipeptidyl peptidase 4 inhibitors) can help achieve HbA1c goals with a reduced risk of hypoglycemic episodes, as recently shown by a Danish study in which the authors concluded that the reduction in hypoglycemic episodes leading to hospitalization in Denmark was directly linked to the use of these newer agents.8
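
As a quick arithmetic check, the CDC rates cited above imply a relative increase of

\[
\frac{24.2 - 15.7}{15.7} = \frac{8.5}{15.7} \approx 0.54,
\]

that is, roughly 54%, consistent with the reported increase of about 55% given rounding of the underlying rates.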

A discussion of trends in diabetes treatment and control must include considerations for older adults aged over 65 years, who constitute more than 40% of the diabetic population. Despite the high prevalence of diabetes in this vulnerable population, such data remain insufficient in the literature and are critically needed to inform public health policies and clinical guidelines. In epidemiological studies of diabetic complications from the last 10 years, concerning increases have been observed in younger9 and middle-aged adults, while rates have remained stable in older adults. However, the risk of hypoglycemia or severe hypoglycemia remains high in older adults living in nursing facilities, even in those with an elevated HbA1c of greater than 8%.7 Moreover, in light of the more relaxed HbA1c treatment goals for frail older adults recommended by international guidelines since 2010,10,11 it is notable that recent findings from the French GERODIAB cohort show increased mortality (hazard ratio, 1.76) in patients with type 2 diabetes aged 70 years and older with HbA1c greater than or equal to 8.6%.12 Similarly, a 5-year retrospective British study from 2018 that included patients aged 70 years and older showed increased overall mortality in those with HbA1c greater than 8.5%.13 Taken together, further age-stratified analyses utilizing data from large cohort studies, including NHANES, may help clarify national trends in diabetes treatment, risk factor control, and diabetic complications specific to the geriatric population. Better informed of such trends, clinicians could then develop treatment strategies that minimize complications (eg, hypoglycemia, falls) while achieving favorable outcomes (eg, fewer hyperglycemic emergencies, improved survival) in frail older patients.

Applications for Clinical Practice

The understanding of population-wide trends in diabetes control is critical to planning public health approaches for the prevention and treatment of this disease and its complications. In older adults, the high risk of hypoglycemic events and insufficient epidemiological data on trends of diabetes control hinder diabetes management. Personalized treatment targets taking into account geriatric syndromes and general health status, as well as multidisciplinary management involving endocrinologists, geriatricians, and clinical pharmacists, are necessary to optimize care in older adults with diabetes.

References

1. Fang M, Wang D, Coresh J, Selvin E. Trends in diabetes treatment and control in US adults, 1999-2018. N Engl J Med. 2021;384(23):2219-2228. doi:10.1056/NEJMsa2032271

2. Wang L, Li X, Wang Z, et al. Trends in prevalence of diabetes and control of risk factors in diabetes among US adults, 1999-2018. JAMA. 2021. doi:10.1001/jama.2021.9883

3. Gregg EW, Hora I, Benoit SR. Resurgence in diabetes-related complications. JAMA. 2019;321(19):1867-1868. doi:10.1001/jama.2019.3471

4. Caruso P, Scappaticcio L, Maiorino MI, et al. Up and down waves of glycemic control and lower-extremity amputation in diabetes. Cardiovasc Diabetol. 2021;20(1):135. doi:10.1186/s12933-021-01325-3

5. Bongaerts B, Arnold SV, Charbonnel BH, et al. Inappropriate intensification of glucose-lowering treatment in older patients with type 2 diabetes: the global DISCOVER study. BMJ Open Diabetes Res Care. 2021;9(1):e001585. doi:10.1136/bmjdrc-2020-001585

6. Lipska KJ, Ross JS, Wang Y, et al. National trends in US hospital admissions for hyperglycemia and hypoglycemia among Medicare beneficiaries, 1999 to 2011. JAMA Intern Med. 2014;174(7):1116-1124. doi:10.1001/jamainternmed.2014.1824

7. Bouillet B, Tscherter P, Vaillard L, et al. Frequent and severe hypoglycaemia detected with continuous glucose monitoring in older institutionalised patients with diabetes. Age Ageing. 2021:afab128. doi:10.1093/ageing/afab128

8. Jensen MH, Hejlesen O, Vestergaard P. Epidemiology of hypoglycaemic episodes leading to hospitalisations in Denmark in 1998-2018. Diabetologia. 2021. doi:10.1007/s00125-021-05507-2

9. TODAY Study Group, Bjornstad P, Drews KL, et al. Long-term complications in youth-onset type 2 diabetes. N Engl J Med. 2021;385(5):416-426. doi:10.1056/NEJMoa2100165

10. Sinclair AJ, Paolisso G, Castro M, et al. European Diabetes Working Party for Older People 2011 clinical guidelines for type 2 diabetes mellitus. Executive summary. Diabetes Metab. 2011;37(suppl 3):S27-S38. doi:10.1016/S1262-3636(11)70962-4

11. Kirkman MS, Briscoe VJ, Clark N, et al. Diabetes in older adults. Diabetes Care. 2012;35(12):2650-2664. doi:10.2337/dc12-1801

12. Doucet J, Verny C, Balkau B, et al. Haemoglobin A1c and 5-year all-cause mortality in French type 2 diabetic patients aged 70 years and older: the GERODIAB observational cohort. Diabetes Metab. 2018;44(6):465-472. doi:10.1016/j.diabet.2018.05.003

13. Forbes A, Murrells T, Mulnier H, Sinclair AJ. Mean HbA1c, HbA1c variability, and mortality in people with diabetes aged 70 years and older: a retrospective cohort study. Lancet Diabetes Endocrinol. 2018;6(6):476-486. doi:10.1016/S2213-8587(18)30048-2

14. US Centers for Disease Control and Prevention. US diabetes surveillance system and diabetes atlas, 2019. https://www.cdc.gov/diabetes/data

Article PDF
Publications
Topics
Page Number
e1-e4
Sections
Article PDF
Article PDF

Study Overview

Objective. To update national trends in the treatment and risk factor control of diabetic patients from 1999 through 2018 in the US using data from the National Health and Nutrition Examination Survey (NHANES) with the goal of identifying population subgroups with the highest probability of having untreated risk factors.

Design. The authors conducted a cross-sectional analysis of data from NHANES focusing on adults with diabetes. They examined patient characteristics and medication use over time and estimated the prevalence of risk factor control and medication use. To minimize the effects of a small sample size, the survey years were pooled into 4-year intervals. The variables studied included glycated hemoglobin (HbA1c), blood pressure, serum cholesterol, medication use, sociodemographic characteristics, and weight status. For statistical analysis, logistic and multinomial logistic regression models were used to examine factors associated with treatment in participants who did not achieve targets for glycemic, blood pressure, and lipid control. Temporal trends were estimated using 2-piece linear spline models with 1 knot at inflection points.

Setting and participants. The NHANES program began in the early 1960s to monitor the health of the US population. In 1999, the survey became a continuous program combining interviews and physical examinations. The survey examines a nationally representative sample of about 5000 persons each year. This study included 6653 participants who were nonpregnant, aged older than 20 years, reported a diagnosis of diabetes from a physician, and participated in NHANES from 1999 through 2018.

Main outcome measures. The main outcome measures were temporal trends in risk factor control (glycemic, blood pressure, or lipid levels) and medication use (glucose lowering, blood pressure lowering, or lipid lowering medications), and number as well as class of drug used, from 1999 through 2018 in diabetic adults from the US participating in NHANES.

Results. Sociodemographic characteristics of the studied diabetes population—The age and racial or ethnic distribution of participants with diabetes were stable from 1999 through 2018, whereas participants with a college degree, higher income, health insurance, obesity, or long-standing diabetes increased during the same period.

Trends in diabetes risk factor control—The trends for glycemic, blood pressure, and lipid control were nonlinear, with an inflection point around 2010. Glycemic control was defined as HbA1c less than 7%, blood pressure was considered controlled if less than 140/90 mmHg, and lipid was controlled if non-HDL cholesterol level was less than 130 mg/dL. Although these chosen targets were based on the most recent clinical guidelines, the authors declared that they observed similar trends when alternative targets were used. The level of risk factor control improved in all diabetic patients from 1999 through 2010. However, the percentage of adult diabetic participants for whom glycemic control was achieved declined from 57.4% (95% CI, 52.9-61.8) in 2007-2010 to 50.5% (95% CI, 45.8-55.3) in 2015-2018. Blood pressure control was achieved in 74.2% of participants (95% CI, 70.7-77.4) in 2011-2014 but declined to 70.4% (95% CI, 66.7-73.8) in 2015-2018. Control in lipid levels improved during the entire study period; however, the rate of improvement heavily declined after 2007 with lipid target levels attained in 52.3% of participants (95% CI, 49.2-55.3) in 2007-2014 and 55.7% (95% CI, 50.8-60.5) in 2015-2018. Finally, the percentage of participants in whom targets for all 3 risk factors were simultaneously achieved plateaued after 2010 and was 22.2% (95% CI, 17.9-27.3) in 2015-2018.

Trends in diabetes treatment—The use of glucose lowering drugs increased from 74.1% in 1999-2002 to 82.7% in 2007-2010 and then stabilized. A shift toward a safer glucose lowering treatment choice was observed with a decline in the use of older glucose lowering medications such as sulfonylureas, which increases the risk of hypoglycemia, and an increase in the use of metformin, insulin, and newer agents such as sodium-glucose cotransporter 2 inhibitors.

 

 

Similarly, blood pressure lowering medication use rose from 1999-2002 to 2007-2010 and then stabilized, with increased use of first-line recommended treatments including angiotensin-converting enzyme inhibitors or angiotensin-receptor blockers. Likewise, statin use rose from 28.4% in 1999-2002 to 56% in 2011-2014 and then stabilized. The total number of drugs used culminated in 2011-2014 with 60% of participants using more than 5 drugs and then leveled off to 57.2% in 2015-2018. Lastly, health insurance status and race or ethnicity impacted the likelihood of receiving monotherapy or combination drug therapy when targets for glycemic, blood pressure, or lipid control were not achieved.

Conclusion. Despite great progress in the control of diabetes and its associated risk factors between 1999 and 2010, this trend declined for glycemic and blood pressure control and leveled off for lipid control in adult NHANES participants with diabetes after 2010. First-line treatments for diabetes and associated risk factors remain underused, and treatment intensification may not be sufficiently considered in patients with uncontrolled risk factors despite clinical guideline recommendations. The findings of this study may portend a possible population-level increase in diabetes-related illnesses in the years to come.

Commentary

The thorough understanding of trends in management of diseases is critical to inform public health policies and planning. Well designed clinical studies heavily influence the development of public health policies and clinical guidelines, which in turn drive real-world clinical practice. In a recent analysis utilizing data from NHANES, Fang et al1 showed evidence of a general shift toward less intensive treatment of diabetes, hypertension, and hypercholesterolemia in adults living in the US during the last decade.

Similarly, in a separate study using NHANES data collected between 1999 and 2018 published in JAMA just 2 weeks after the current report, Wang et al2 confirms this declining trend in diabetes management with only 21.2% of diabetic adults simultaneously attaining glycemic, blood pressure, and lipid level targets during the same period. What led to the decline in more stringent risk factor and diabetes management since 2010 observed in these studies? One possible explanation, as suggested by Fang et al, is that major clinical trials from the late 2000s­—including Action to Control Cardiovascular Risk in Diabetes, UK Prospective Diabetes Study, Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation, and Veterans Affairs Diabetes Trial—that assessed the effects of intensive glycemic control (with target HbA1c < 6.5%) found that intensive treatment of diabetes compared to standard care had no cardiovascular benefit albeit increasing the risk of hypoglycemia. Thus, these trial findings may have translated into suboptimal diabetes treatment observed in some NHANES participants. Wang et al propose that effective tailored approaches are needed to improve risk factor control in diabetic patients, such as enhance and maintain adherence to medications and healthy lifestyle behaviors, as well as better access to health care and therapeutic education.

The changes in recent trends in diabetes management have immense clinical implications. The authors of this study suggest a link between the recent relaxation of glycemic targets, as well as risk factor control, and a resurgence of diabetic complications such as lower limb amputation or stroke. Indeed, several recent studies indicate an upward trend or plateau in diabetic complications which had been decreasing in prevalence prior to 2010.3 For example, lower extremity amputation has surged by more than 25% between 2010 and 2015, especially in young and middle-aged adults.4 Among the arguments brought forward that this recent resurgence in amputations is directly linked to worsening glycemic control is the fact that between 2007 and 2010, when glucose levels were best controlled within the previous 30-year period, amputations were also at the lowest levels. Moreover, data from the Centers for Disease Control and Prevention also show a 55% increase in mortality (from 15.7 to 24.2 per 1000) among diabetic patients between 2010 and 2015.14 On the other hand, a growing number of studies show that an increase of inappropriate treatment intensification—reaching HbA1c levels that are way below the recommended targets—is associated with adverse consequences in diabetic patients particularly in those aged more than 65 years.5-7 These seemingly contradictory findings highlight the importance of a personalized and thoughtful approach to the management of diabetes and its risk factors. As an example, an increase in the use of newer and safer glucose lowering drugs (eg, sodium-glucose cotransporter 2 inhibitors, glucagon-like peptide 1 receptor agonists, and dipeptidyl peptidase 4 inhibitors) can help achieve better HbA1c goals with a reduced risk of hypoglycemic episodes as recently shown by a Danish study.8 In this study, the authors concluded that the reduction of the rate of hypoglycemic episodes leading to hospitalization in Denmark was directly linked to the use of these safer and newer glucose lowering drugs.

 

 

A discussion on the specifics of trends in diabetes treatment and control must include considerations in older adults aged more than 65 years who constitute more than 40% of the diabetic population. Despite the high prevalence of diabetes in this vulnerable population, such data are still insufficient in the literature and are critically needed to inform public health policies and clinical guidelines. In epidemiological studies focusing on diabetic complications from the last 10 years, concerning increases have been observed in younger9 and middle-aged adults while remaining stable in older adults. However, the risk of hypoglycemia or severe hypoglycemia remains high in older adults living in nursing facilities, even in those with an elevated HbA1c of greater than 8%.7 Moreover, in light of more relaxed HbA1c treatment goals for older frail adults as recommended by international guidelines since 2010,10,11 recent findings from the French GERODIAB cohort show an increased mortality (hazard ratio, 1.76) in type 2 diabetics aged 70 years and older with HbA1c greater than or equal to 8.6%.12 Similarly, a 5-year retrospective British study from 2018 which included patients aged 70 years and older, shows an increased overall mortality in those with HbA1c greater than 8.5%.13 Taken together, further age-stratified analysis utilizing data from large cohort studies including NHANES may help to clarify national trends in diabetes treatment and risk factor control as well as diabetic complications specific to the geriatric population. By being better informed of such trends, clinicians could then develop treatment strategies that minimize complications (eg, hypoglycemia, falls) while achieving favorable outcomes (eg, reduce hyperglycemic emergencies, improve survival) in frail older patients.

Applications for Clinical Practice

The understanding of population-wide trends in diabetes control is critical to planning public health approaches for the prevention and treatment of this disease and its complications. In older adults, the high risk of hypoglycemic events and insufficient epidemiological data on trends of diabetes control hinder diabetes management. Personalized treatment targets taking into account geriatric syndromes and general health status, as well as multidisciplinary management involving endocrinologists, geriatricians, and clinical pharmacists, are necessary to optimize care in older adults with diabetes.

Study Overview

Objective. To update national trends in the treatment and risk factor control of diabetic patients from 1999 through 2018 in the US using data from the National Health and Nutrition Examination Survey (NHANES) with the goal of identifying population subgroups with the highest probability of having untreated risk factors.

Design. The authors conducted a cross-sectional analysis of data from NHANES focusing on adults with diabetes. They examined patient characteristics and medication use over time and estimated the prevalence of risk factor control and medication use. To minimize the effects of a small sample size, the survey years were pooled into 4-year intervals. The variables studied included glycated hemoglobin (HbA1c), blood pressure, serum cholesterol, medication use, sociodemographic characteristics, and weight status. For statistical analysis, logistic and multinomial logistic regression models were used to examine factors associated with treatment in participants who did not achieve targets for glycemic, blood pressure, and lipid control. Temporal trends were estimated using 2-piece linear spline models with 1 knot at inflection points.

Setting and participants. The NHANES program began in the early 1960s to monitor the health of the US population. In 1999, the survey became a continuous program combining interviews and physical examinations. The survey examines a nationally representative sample of about 5000 persons each year. This study included 6653 participants who were nonpregnant, aged older than 20 years, reported a diagnosis of diabetes from a physician, and participated in NHANES from 1999 through 2018.

Main outcome measures. The main outcome measures were temporal trends in risk factor control (glycemic, blood pressure, or lipid levels) and medication use (glucose lowering, blood pressure lowering, or lipid lowering medications), and number as well as class of drug used, from 1999 through 2018 in diabetic adults from the US participating in NHANES.

Results. Sociodemographic characteristics of the studied diabetes population—The age and racial or ethnic distribution of participants with diabetes were stable from 1999 through 2018, whereas participants with a college degree, higher income, health insurance, obesity, or long-standing diabetes increased during the same period.

Trends in diabetes risk factor control—The trends for glycemic, blood pressure, and lipid control were nonlinear, with an inflection point around 2010. Glycemic control was defined as HbA1c less than 7%, blood pressure was considered controlled if less than 140/90 mmHg, and lipid was controlled if non-HDL cholesterol level was less than 130 mg/dL. Although these chosen targets were based on the most recent clinical guidelines, the authors declared that they observed similar trends when alternative targets were used. The level of risk factor control improved in all diabetic patients from 1999 through 2010. However, the percentage of adult diabetic participants for whom glycemic control was achieved declined from 57.4% (95% CI, 52.9-61.8) in 2007-2010 to 50.5% (95% CI, 45.8-55.3) in 2015-2018. Blood pressure control was achieved in 74.2% of participants (95% CI, 70.7-77.4) in 2011-2014 but declined to 70.4% (95% CI, 66.7-73.8) in 2015-2018. Control in lipid levels improved during the entire study period; however, the rate of improvement heavily declined after 2007 with lipid target levels attained in 52.3% of participants (95% CI, 49.2-55.3) in 2007-2014 and 55.7% (95% CI, 50.8-60.5) in 2015-2018. Finally, the percentage of participants in whom targets for all 3 risk factors were simultaneously achieved plateaued after 2010 and was 22.2% (95% CI, 17.9-27.3) in 2015-2018.

Trends in diabetes treatment—The use of glucose lowering drugs increased from 74.1% in 1999-2002 to 82.7% in 2007-2010 and then stabilized. A shift toward a safer glucose lowering treatment choice was observed with a decline in the use of older glucose lowering medications such as sulfonylureas, which increases the risk of hypoglycemia, and an increase in the use of metformin, insulin, and newer agents such as sodium-glucose cotransporter 2 inhibitors.

 

 

Similarly, blood pressure lowering medication use rose from 1999-2002 to 2007-2010 and then stabilized, with increased use of first-line recommended treatments including angiotensin-converting enzyme inhibitors or angiotensin-receptor blockers. Likewise, statin use rose from 28.4% in 1999-2002 to 56% in 2011-2014 and then stabilized. The total number of drugs used culminated in 2011-2014 with 60% of participants using more than 5 drugs and then leveled off to 57.2% in 2015-2018. Lastly, health insurance status and race or ethnicity impacted the likelihood of receiving monotherapy or combination drug therapy when targets for glycemic, blood pressure, or lipid control were not achieved.

Conclusion. Despite great progress in the control of diabetes and its associated risk factors between 1999 and 2010, this trend declined for glycemic and blood pressure control and leveled off for lipid control in adult NHANES participants with diabetes after 2010. First-line treatments for diabetes and associated risk factors remain underused, and treatment intensification may not be sufficiently considered in patients with uncontrolled risk factors despite clinical guideline recommendations. The findings of this study may portend a possible population-level increase in diabetes-related illnesses in the years to come.

Commentary

The thorough understanding of trends in management of diseases is critical to inform public health policies and planning. Well designed clinical studies heavily influence the development of public health policies and clinical guidelines, which in turn drive real-world clinical practice. In a recent analysis utilizing data from NHANES, Fang et al1 showed evidence of a general shift toward less intensive treatment of diabetes, hypertension, and hypercholesterolemia in adults living in the US during the last decade.

Similarly, in a separate study using NHANES data collected between 1999 and 2018 published in JAMA just 2 weeks after the current report, Wang et al2 confirms this declining trend in diabetes management with only 21.2% of diabetic adults simultaneously attaining glycemic, blood pressure, and lipid level targets during the same period. What led to the decline in more stringent risk factor and diabetes management since 2010 observed in these studies? One possible explanation, as suggested by Fang et al, is that major clinical trials from the late 2000s­—including Action to Control Cardiovascular Risk in Diabetes, UK Prospective Diabetes Study, Action in Diabetes and Vascular Disease: Preterax and Diamicron Modified Release Controlled Evaluation, and Veterans Affairs Diabetes Trial—that assessed the effects of intensive glycemic control (with target HbA1c < 6.5%) found that intensive treatment of diabetes compared to standard care had no cardiovascular benefit albeit increasing the risk of hypoglycemia. Thus, these trial findings may have translated into suboptimal diabetes treatment observed in some NHANES participants. Wang et al propose that effective tailored approaches are needed to improve risk factor control in diabetic patients, such as enhance and maintain adherence to medications and healthy lifestyle behaviors, as well as better access to health care and therapeutic education.

The changes in recent trends in diabetes management have immense clinical implications. The authors of this study suggest a link between the recent relaxation of glycemic targets, as well as risk factor control, and a resurgence of diabetic complications such as lower limb amputation or stroke. Indeed, several recent studies indicate an upward trend or plateau in diabetic complications which had been decreasing in prevalence prior to 2010.3 For example, lower extremity amputation has surged by more than 25% between 2010 and 2015, especially in young and middle-aged adults.4 Among the arguments brought forward that this recent resurgence in amputations is directly linked to worsening glycemic control is the fact that between 2007 and 2010, when glucose levels were best controlled within the previous 30-year period, amputations were also at the lowest levels. Moreover, data from the Centers for Disease Control and Prevention also show a 55% increase in mortality (from 15.7 to 24.2 per 1000) among diabetic patients between 2010 and 2015.14 On the other hand, a growing number of studies show that an increase of inappropriate treatment intensification—reaching HbA1c levels that are way below the recommended targets—is associated with adverse consequences in diabetic patients particularly in those aged more than 65 years.5-7 These seemingly contradictory findings highlight the importance of a personalized and thoughtful approach to the management of diabetes and its risk factors. As an example, an increase in the use of newer and safer glucose lowering drugs (eg, sodium-glucose cotransporter 2 inhibitors, glucagon-like peptide 1 receptor agonists, and dipeptidyl peptidase 4 inhibitors) can help achieve better HbA1c goals with a reduced risk of hypoglycemic episodes as recently shown by a Danish study.8 In this study, the authors concluded that the reduction of the rate of hypoglycemic episodes leading to hospitalization in Denmark was directly linked to the use of these safer and newer glucose lowering drugs.

 

 

A discussion on the specifics of trends in diabetes treatment and control must include considerations in older adults aged more than 65 years who constitute more than 40% of the diabetic population. Despite the high prevalence of diabetes in this vulnerable population, such data are still insufficient in the literature and are critically needed to inform public health policies and clinical guidelines. In epidemiological studies focusing on diabetic complications from the last 10 years, concerning increases have been observed in younger9 and middle-aged adults while remaining stable in older adults. However, the risk of hypoglycemia or severe hypoglycemia remains high in older adults living in nursing facilities, even in those with an elevated HbA1c of greater than 8%.7 Moreover, in light of more relaxed HbA1c treatment goals for older frail adults as recommended by international guidelines since 2010,10,11 recent findings from the French GERODIAB cohort show an increased mortality (hazard ratio, 1.76) in type 2 diabetics aged 70 years and older with HbA1c greater than or equal to 8.6%.12 Similarly, a 5-year retrospective British study from 2018 which included patients aged 70 years and older, shows an increased overall mortality in those with HbA1c greater than 8.5%.13 Taken together, further age-stratified analysis utilizing data from large cohort studies including NHANES may help to clarify national trends in diabetes treatment and risk factor control as well as diabetic complications specific to the geriatric population. By being better informed of such trends, clinicians could then develop treatment strategies that minimize complications (eg, hypoglycemia, falls) while achieving favorable outcomes (eg, reduce hyperglycemic emergencies, improve survival) in frail older patients.

Applications for Clinical Practice

The understanding of population-wide trends in diabetes control is critical to planning public health approaches for the prevention and treatment of this disease and its complications. In older adults, the high risk of hypoglycemic events and insufficient epidemiological data on trends of diabetes control hinder diabetes management. Personalized treatment targets taking into account geriatric syndromes and general health status, as well as multidisciplinary management involving endocrinologists, geriatricians, and clinical pharmacists, are necessary to optimize care in older adults with diabetes.

References

1. Fang M, Wang D, Coresh J, Selvin E. Trends in Diabetes Treatment and Control in U.S. Adults, 1999-2018. N Engl J Med. 2021;384(23):2219-2228. doi:10.1056/NEJMsa2032271

2. Wang L, Li X, Wang Z, et al. Trends in Prevalence of Diabetes and Control of Risk Factors in Diabetes Among US Adults, 1999-2018. JAMA. 2021. doi:10.1001/jama.2021.9883

3. Gregg EW, Hora I, Benoit SR. Resurgence in Diabetes-Related Complications. JAMA. 2019;321(19):1867-1868. doi:10.1001/jama.2019.3471

4. Caruso P, Scappaticcio L, Maiorino MI, et al. Up and down waves of glycemic control and lower-extremity amputation in diabetes. Cardiovasc Diabetol. 2021;20(1):135. doi:10.1186/s12933-021-01325-3

5. Bongaerts B, Arnold SV, Charbonnel BH, et al. Inappropriate intensification of glucose-lowering treatment in older patients with type 2 diabetes: the global DISCOVER study. BMJ Open Diabetes Res Care. 2021;9(1):e001585. doi:10.1136/bmjdrc-2020-001585

6. Lipska KJ, Ross JS, Wang Y, et al. National trends in US hospital admissions for hyperglycemia and hypoglycemia among Medicare beneficiaries, 1999 to 2011. JAMA Intern Med. 2014;174(7):1116-1124. doi:10.1001/jamainternmed.2014.1824

7. Bouillet B, Tscherter P, Vaillard L, et al. Frequent and severe hypoglycaemia detected with continuous glucose monitoring in older institutionalised patients with diabetes. Age Ageing. 2021;afab128. doi:10.1093/ageing/afab128

8. Jensen MH, Hejlesen O, Vestergaard P. Epidemiology of hypoglycaemic episodes leading to hospitalisations in Denmark in 1998-2018. Diabetologia. 2021. doi:10.1007/s00125-021-05507-2

9. TODAY Study Group, Bjornstad P, Drews KL, et al. Long-Term Complications in Youth-Onset Type 2 Diabetes. N Engl J Med. 2021;385(5):416-426. doi:10.1056/NEJMoa2100165

10. Sinclair AJ, Paolisso G, Castro M, et al. European Diabetes Working Party for Older People 2011 clinical guidelines for type 2 diabetes mellitus. Executive summary. Diabetes Metab. 2011;37 Suppl 3:S27-S38. doi:10.1016/S1262-3636(11)70962-4

11. Kirkman MS, Briscoe VJ, Clark N, et al. Diabetes in older adults. Diabetes Care. 2012;35(12):2650-2664. doi:10.2337/dc12-1801

12. Doucet J, Verny C, Balkau B, et al. Haemoglobin A1c and 5-year all-cause mortality in French type 2 diabetic patients aged 70 years and older: The GERODIAB observational cohort. Diabetes Metab. 2018;44(6):465-472. doi:10.1016/j.diabet.2018.05.003

13. Forbes A, Murrells T, Mulnier H, Sinclair AJ. Mean HbA1c, HbA1c variability, and mortality in people with diabetes aged 70 years and older: a retrospective cohort study. Lancet Diabetes Endocrinol. 2018;6(6):476-486. doi:10.1016/S2213-8587(18)30048-2

14. US Centers for Disease Control and Prevention. US diabetes surveillance system and diabetes atlas, 2019. https://www.cdc.gov/diabetes/data

Traumatic Fractures Should Trigger Osteoporosis Assessment in Postmenopausal Women

Article Type
Changed
Fri, 07/30/2021 - 01:15
Display Headline
Traumatic Fractures Should Trigger Osteoporosis Assessment in Postmenopausal Women

Study Overview

Objective. To compare the risk of subsequent fractures after an initial traumatic or nontraumatic fracture in postmenopausal women.

Design. A prospective observational study utilizing data from the Women’s Health Initiative (WHI) Study, WHI Clinical Trials (WHI-CT), and WHI Bone Density Substudy to evaluate rates at which patients who suffered a traumatic fracture vs nontraumatic fracture develop a subsequent fracture.

Setting and participants. The WHI study, implemented at 40 United States clinical sites, enrolled 161 808 postmenopausal women aged 50 to 79 years at baseline between 1993 and 1998. The study cohort consisted of 75 335 patients who had self-reported fractures from September 1994 to December 1998 that were confirmed by the WHI Bone Density Substudy and WHI-CT. Of these participants, 253 (0.3%) were excluded because of a lack of follow-up information regarding incident fractures, and 8208 (10.9%) were excluded due to incomplete information on covariates, thus resulting in an analytic sample of 66 874 (88.8%) participants. Prospective fracture ascertainment with participants was conducted at least annually and the mechanism of fracture was assessed to differentiate traumatic vs nontraumatic incident fractures. Traumatic fractures were defined as fractures caused by motor vehicle collisions, falls from a height, falls downstairs, or sports injury. Nontraumatic fractures were defined as fractures caused by a trip and fall.

Main outcome measures. The primary outcome was an incident fracture at an anatomically distinct body part. Fractures were classified as upper extremity (carpal, elbow, lower or upper end of humerus, shaft of humerus, upper radius/ulna, or radius/ulna), lower extremity (ankle, hip, patella, pelvis, shaft of femur, tibia/fibula, or tibial plateau), or spine (lumbar and/or thoracic spine). Self-reported fractures were verified via medical chart review by WHI study physicians; hip fractures were confirmed by review of written reports of radiographic studies; and nonhip fractures were confirmed by review of radiography reports or clinical documentations.

Main results. In total, 66 874 women (mean [SD] age at baseline, 63.1 [7.0] years among those without and 65.3 [7.2] years among those with clinical fracture) were followed for a mean (SD) of 8.1 (1.6) years. Of these participants, 7142 (10.7%) experienced an incident fracture during the study follow-up period (13.9 per 1000 person-years), of whom 721 (10.1%) had a subsequent fracture. The adjusted hazard ratio (aHR) of subsequent fracture after an initial fracture was 1.49 (95% CI, 1.38-1.61; P < .001). Covariates adjusted for were age, race, ethnicity, body mass index, treated diabetes, frequency of falls in the previous year, and physical function and activity. In women with an initial traumatic fracture, the association between initial and subsequent fracture was increased (aHR, 1.25; 95% CI, 1.06-1.48; P = .01). Among women with an initial nontraumatic fracture, the association between initial and subsequent fracture was also increased (aHR, 1.52; 95% CI, 1.37-1.68; P < .001). The confidence intervals for the 2 preceding associations for the traumatic and nontraumatic initial fracture strata overlapped.
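
For readers who want to see the shape of such an analysis, below is a minimal sketch (not the authors' code) of fitting a Cox proportional hazards model with the `lifelines` Python library. The data are simulated, with a built-in hazard ratio of exp(0.4) ≈ 1.49 chosen only to mirror the reported aHR, and the sketch omits the time-dependent handling of initial fracture and most covariates used in the actual study.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 5000
initial_fx = rng.integers(0, 2, n)   # 1 = initial fracture during follow-up
age = rng.normal(63, 7, n)

# Simulate time to subsequent fracture with a true HR of exp(0.4) ~ 1.49
# for initial fracture, plus a modest age effect.
lam = 0.01 * np.exp(0.4 * initial_fx + 0.03 * (age - 63))
t = rng.exponential(1 / lam)

df = pd.DataFrame({
    "years": np.minimum(t, 8.1),              # administrative censoring at ~8.1 y
    "subsequent_fx": (t <= 8.1).astype(int),  # 1 = subsequent fracture observed
    "initial_fx": initial_fx,
    "age": age,
})

cph = CoxPHFitter().fit(df, duration_col="years", event_col="subsequent_fx")
# exp(coef) is the adjusted hazard ratio with its 95% CI.
print(cph.summary.loc["initial_fx",
                      ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```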

Conclusion. Fractures, regardless of mechanism of injury, are similarly associated with an increased risk of subsequent fractures in postmenopausal women aged 50 years and older. Findings from this study provide evidence to support reevaluation of current clinical guidelines to include traumatic fracture as a trigger for osteoporosis screening.

Commentary

Osteoporosis is one of the most common age-associated diseases, affecting 1 in 4 women and 1 in 20 men over the age of 65.1 It increases the risk of fracture, and its clinical sequelae include reduced mobility, health decline, and increased all-cause mortality. The high prevalence of osteoporosis poses a clinical challenge as the global population continues to age. Pharmacological treatments such as bisphosphonates are highly effective in preventing or slowing bone mineral density (BMD) loss and reducing the risk of fragility fractures (eg, nontraumatic fractures of the vertebra, hip, and femur) and are commonly used to mitigate the adverse effects of degenerative bone changes secondary to osteoporosis.1

The high prevalence of osteoporosis and the effectiveness of bisphosphonates raise the question of how to optimally identify adults at risk for osteoporosis so that pharmacologic therapy can be promptly initiated to prevent disease progression. Multiple osteoporosis screening guidelines, including those from the United States Preventive Services Task Force (USPSTF), American Association of Family Physicians, and National Osteoporosis Foundation, are widely used in the clinical setting to address this important clinical question. In general, the prevailing wisdom is to screen for osteoporosis in postmenopausal women over the age of 65, women under the age of 65 who have a significant 10-year fracture risk, or women over the age of 50 who have experienced a fragility fracture.1 In the study reported by Crandall et al, the risks of subsequent fracture were shown to be similar after an initial traumatic or nontraumatic (fragility) fracture in postmenopausal women aged 50 years and older.2 This finding calls into question whether traumatic fractures should be viewed any differently than nontraumatic fractures in women over the age of 50 when evaluating for osteoporosis. Furthermore, these results suggest that most fractures in postmenopausal women may indicate decreased bone integrity, adding to the rationale that osteoporosis screening should be expanded to include postmenopausal women under the age of 65 who have sustained a traumatic fracture.

Per current guidelines, a woman under the age of 65 is recommended for osteoporosis screening only if she has an increased 10-year fracture risk compared with women aged 65 years and older. This risk is calculated with the World Health Organization fracture-risk algorithm (WHO FRAX) tool, which uses multiple factors such as age, weight, and history of fragility fractures to predict whether an individual is at risk of developing a fracture in the next 10 years. The WHO FRAX tool does not include traumatic fractures in its risk calculation, and current clinical guidelines do not treat traumatic fractures as a red flag that should initiate osteoporosis screening. Therefore, postmenopausal women under the age of 65 are less likely to be screened for osteoporosis when they experience a traumatic fracture than when they experience a fragility fracture, despite being at a demonstrably higher risk for subsequent fracture. As an unintended consequence, this may lead to underdiagnosis of osteoporosis in postmenopausal women under the age of 65. Thus, Crandall et al conclude that a fracture due to any cause warrants follow-up evaluation for osteoporosis, including BMD testing, in women older than 50 years of age. A toy encoding of this screening logic appears below.
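
To make the guideline logic concrete, here is a toy encoding of the screening triggers described above, together with the expansion proposed by Crandall et al. The function and argument names are hypothetical; this illustrates the decision rule only and is not a clinical tool.

```python
def osteoporosis_screening_indicated(age: float,
                                     elevated_frax_risk: bool,
                                     fragility_fracture: bool,
                                     traumatic_fracture: bool,
                                     count_traumatic_fx: bool = True) -> bool:
    """Return True if osteoporosis screening is indicated for a
    postmenopausal woman under the triggers discussed in the text."""
    if age >= 65:                           # screen all women 65 and older
        return True
    if elevated_frax_risk:                  # under 65 with elevated 10-year FRAX risk
        return True
    if age > 50 and fragility_fracture:     # over 50 with a fragility fracture
        return True
    # Proposed expansion: treat traumatic fractures like fragility fractures.
    if count_traumatic_fx and age > 50 and traumatic_fracture:
        return True
    return False

# A 58-year-old with a traumatic fracture: screened under the proposal,
# but not under current guidelines.
print(osteoporosis_screening_indicated(58, False, False, True))                            # True
print(osteoporosis_screening_indicated(58, False, False, True, count_traumatic_fx=False))  # False
```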

Older men constitute another population that is commonly underscreened for osteoporosis. The current USPSTF guidelines indicate that there is an insufficient body of evidence for screening men for osteoporosis given its lower prevalence.1 However, it is important to note that men have significantly increased mortality after a hip fracture, are less likely to receive pharmacological treatment for osteoporosis, and are underdiagnosed for osteoporosis.3 Consistent with findings from the current study, Leslie et al showed in a Canadian study that high-trauma and low-trauma fractures were associated with similarly elevated subsequent fracture risk in both men and women over the age of 40.4 Moreover, in the same study, BMD was decreased in both men and women who suffered a fracture regardless of the injury mechanism. This finding further underscores the need to consider traumatic fractures as a risk factor for osteoporosis. Taken together, given that men are underscreened and undertreated for osteoporosis yet have increased post-fracture mortality, consideration of osteoporosis evaluation should similarly be given to men who have sustained a traumatic fracture.

The study conducted by Crandall et al has several strengths. Notable is the large size of the WHI cohort, with participants from across the United States, which enabled the capture of a wider range of age groups, as women under the age of 65 are not common participants in osteoporosis studies. Additionally, data ascertainment and outcome adjudication utilizing medical records and physician review help ensure data quality. A limitation of the study is that the cohort consists exclusively of women, so the findings are not generalizable to men. However, findings from this study echo those from other studies investigating the relationship between fracture mechanism and subsequent fracture risk in men and women.3,4 Collectively, these comparable findings highlight the need for additional research to validate traumatic fracture as a risk factor for osteoporosis and to incorporate it into clinical guidelines for osteoporosis screening.

Applications for Clinical Practice

The findings from the current study indicate that traumatic and fragility fractures may be more alike than previously recognized with regard to bone health and subsequent fracture prevention in postmenopausal women. If validated, these results may lead to changes in clinical practice whereby any fracture in a postmenopausal woman could trigger osteoporosis screening, assessment, and, if indicated, treatment for the secondary prevention of fractures.

References

1. US Preventive Services Task Force, Curry SJ, Krist AH, et al. Screening for Osteoporosis to Prevent Fractures: US Preventive Services Task Force Recommendation Statement. JAMA. 2018;319(24):2521-2531. doi:10.1001/jama.2018.7498

2. Crandall CJ, Larson JC, LaCroix AZ, et al. Risk of Subsequent Fractures in Postmenopausal Women After Nontraumatic vs Traumatic Fractures. JAMA Intern Med. Published online June 7, 2021. doi:10.1001/jamainternmed.2021.2617

3. Mackey DC, Lui L, Cawthon PM, et al. High-Trauma Fractures and Low Bone Mineral Density in Older Women and Men. JAMA. 2007;298(20):2381-2388. doi:10.1001/jama.298.20.2381

4. Leslie WD, Schousboe JT, Morin SN, et al. Fracture risk following high-trauma versus low-trauma fracture: a registry-based cohort study. Osteoporos Int. 2020;31(6):1059-1067. doi:10.1007/s00198-019-05274-2

Remdesivir in Hospitalized Adults With Severe COVID-19: Lessons Learned From the First Randomized Trial

Article Type
Changed
Thu, 08/26/2021 - 16:07
Display Headline
Remdesivir in Hospitalized Adults With Severe COVID-19: Lessons Learned From the First Randomized Trial

Study Overview

Objective. To assess the efficacy, safety, and clinical benefit of remdesivir in hospitalized adults with confirmed pneumonia due to severe SARS-CoV-2 infection.

Design. Randomized, investigator-initiated, placebo-controlled, double-blind, multicenter trial.

Setting and participants. The trial took place between February 6, 2020 and March 12, 2020, at 10 hospitals in Wuhan, China. Study participants included adult patients (aged ≥ 18 years) admitted to hospital who tested positive for SARS-CoV-2 by reverse transcription polymerase chain reaction assay and had the following clinical characteristics: radiographic evidence of pneumonia; hypoxia with oxygen saturation ≤ 94% on room air or a ratio of arterial oxygen partial pressure to fractional inspired oxygen ≤ 300 mm Hg; and symptom onset to enrollment ≤ 12 days. Some of the exclusion criteria for participation in the study were pregnancy or breast feeding, liver cirrhosis, abnormal liver enzymes ≥ 5 times the upper limit of normal, severe renal impairment or receipt of renal replacement therapy, plan for transfer to a non-study hospital, and enrollment in a trial for COVID-19 within the previous month.

Intervention. Participants were randomized in a 2:1 ratio to the remdesivir group or the placebo group and were administered either intravenous infusions of remdesivir (200 mg on day 1 followed by 100 mg daily on days 2-10) or the same volume of placebo for 10 days. Clinical and safety data assessed included laboratory testing, electrocardiogram, and medication adverse effects. Testing of oropharyngeal and nasopharyngeal swab samples, anal swab samples, sputum, and stool was performed for viral RNA detection and quantification on days 1, 3, 5, 7, 10, 14, 21, and 28.

Main outcome measures. The primary endpoint of this study was time to clinical improvement within 28 days after randomization. Clinical improvement was defined as a 2-point reduction in participants’ admission status on a 6-point ordinal scale (1 = discharged or clinical recovery, 6 = death) or live discharge from hospital, whichever came first. Secondary outcomes included all-cause mortality at day 28 and duration of hospital admission, oxygen support, and invasive mechanical ventilation. Virological measures and safety outcomes ascertained included treatment-emergent adverse events, serious adverse events, and premature discontinuation of remdesivir.
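
As a concrete illustration, the primary endpoint definition above can be encoded in a few lines; the function name and scale handling are illustrative, not taken from the trial protocol.

```python
def clinically_improved(admission_score: int, current_score: int) -> bool:
    """6-point ordinal scale: 1 = discharged or clinical recovery, 6 = death.
    Improvement = a 2-point reduction from admission status, or live
    discharge from hospital, whichever comes first."""
    discharged_alive = current_score == 1
    return (admission_score - current_score) >= 2 or discharged_alive

print(clinically_improved(4, 2))  # True: 2-point reduction
print(clinically_improved(3, 2))  # False: 1-point reduction without live discharge
```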

The sample size estimate for the original study design was a total of 453 patients (302 in the remdesivir group and 151 in the placebo group). This sample size would provide 80% power to detect a hazard ratio (HR) of 1.4 for remdesivir compared to placebo, corresponding to a 6-day reduction in time to clinical improvement. The analysis of the primary outcome was performed on an intention-to-treat basis. Time to clinical improvement within 28 days was assessed with Kaplan-Meier plots.
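
These design assumptions can be checked with a back-of-the-envelope event count using Schoenfeld's approximation for the log-rank test. The trial protocol may have used a different method, so this is illustrative only.

```python
import numpy as np
from scipy.stats import norm

alpha, power, hr = 0.05, 0.80, 1.4   # design assumptions stated above
p_rem, p_pbo = 2 / 3, 1 / 3          # 2:1 allocation

z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
events_needed = z ** 2 / (p_rem * p_pbo * np.log(hr) ** 2)
print(round(events_needed))          # ~312 clinical-improvement events

# If most of the 453 planned patients reach clinical improvement within
# 28 days, roughly this many events would be expected.
```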

Main results. A total of 255 patients were screened, of whom 237 were enrolled and randomized to the remdesivir (158) or placebo (79) group. Of the participants in the remdesivir group, 155 started study treatment and 150 completed treatment per protocol. Of the participants in the placebo group, 78 started study treatment and 76 completed treatment per protocol. Study enrollment was terminated after March 12, 2020, before attaining the prespecified sample size, because no additional patients met study eligibility criteria due to the various public health measures implemented in Wuhan. The median age of participants was 65 years (IQR, 56-71), the majority were men (56% in the remdesivir group vs 65% in the placebo group), and the most common comorbidities included hypertension, diabetes, and coronary artery disease. Median time from symptom onset to study enrollment was 10 days (IQR, 9-12). The time to clinical improvement between treatments (21 days for the remdesivir group vs 23 days for the placebo group) was not significantly different (HR, 1.23; 95% confidence interval [CI], 0.87-1.75). In addition, among participants who received treatment within 10 days of symptom onset, those administered remdesivir had a nonsignificantly (HR, 1.52; 95% CI, 0.95-2.43) faster time to clinical improvement (18 days) compared with those administered placebo (23 days). Moreover, treatment with remdesivir versus placebo did not lead to differences in secondary outcomes (eg, 28-day mortality and duration of hospital stay, oxygen support, and invasive mechanical ventilation), changes in viral load over time, or adverse events between the groups.
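
The kind of time-to-event comparison reported above can be sketched with the `lifelines` Python library; the simulated event times below are hypothetical and are meant only to show the mechanics of the Kaplan-Meier estimate and the Cox-model HR, not to reproduce the trial's data.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n_rem, n_pbo = 158, 79

# Hypothetical days to clinical improvement; follow-up censored at day 28.
t_rem = rng.exponential(30, n_rem)
t_pbo = rng.exponential(33, n_pbo)
t_all = np.concatenate([t_rem, t_pbo])

df = pd.DataFrame({
    "time": np.minimum(t_all, 28),
    "improved": (t_all <= 28).astype(int),
    "remdesivir": [1] * n_rem + [0] * n_pbo,
})

# Median time to clinical improvement per arm (Kaplan-Meier).
km = KaplanMeierFitter()
for arm, grp in df.groupby("remdesivir"):
    km.fit(grp["time"], grp["improved"], label=f"remdesivir={arm}")
    print(arm, km.median_survival_time_)

# Hazard ratio for clinical improvement, remdesivir vs placebo.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="improved")
print(cph.summary.loc["remdesivir",
                      ["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```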

Conclusion. This study found that, compared with placebo, intravenous remdesivir did not significantly improve the time to clinical improvement, mortality, or time to clearance of SARS-CoV-2 in hospitalized adults with severe COVID-19. A numeric reduction in time to clinical improvement with early remdesivir treatment (ie, within 10 days of symptom onset) that approached statistical significance was observed in this underpowered study.

Commentary

Within a few short months of its emergence, SARS-CoV-2 infection caused a global pandemic, posing a dire threat to public health due to its adverse effects on morbidity (eg, respiratory failure, thromboembolic diseases, multiorgan failure) and mortality. To date, no pharmacologic treatment has been shown to effectively improve clinical outcomes in patients with COVID-19. Multiple ongoing clinical trials are being conducted globally to identify potential therapeutic treatments for severe COVID-19. The first clinical trials of hydroxychloroquine and lopinavir-ritonavir, agents traditionally used for other indications such as malaria and HIV, did not show a clear benefit in COVID-19.1,2 Remdesivir, a nucleoside analogue prodrug, is a broad-spectrum antiviral agent that was previously used for treatment of Ebola and has been shown to have inhibitory effects on pathogenic coronaviruses. The study reported by Wang and colleagues was the first randomized controlled trial (RCT) to evaluate whether remdesivir improves outcomes in patients with severe COVID-19. The worsening COVID-19 pandemic, coupled with the absence of a curative treatment, underscores the urgency of this trial.

The study was grounded in observational data from several recent case reports and case series on the potential efficacy of remdesivir in treating COVID-19.3 The study itself was well designed (ie, randomized, placebo-controlled, double-blind, multicenter) and carefully implemented (ie, high protocol adherence to treatments, no loss to follow-up). The principal limitation of this study was its inability to reach the estimated statistical power. Due to successful epidemic control in Wuhan, which led to marked reductions in hospital admissions of patients with COVID-19, and implementation of stringent termination criteria per the study protocol, only 237 participants were enrolled instead of the 453 specified by the sample size estimate. This corresponded to a reduction in statistical power from 80% to 58%. Because of this limitation, the study was underpowered, rendering its findings inconclusive.
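
The reported drop in power is easy to reproduce approximately by inverting the same Schoenfeld approximation shown earlier; the event counts below are illustrative, chosen to match the 80% and 58% figures cited in the text rather than taken from the trial.

```python
import numpy as np
from scipy.stats import norm

def logrank_power(events, hr=1.4, alpha=0.05, p1=2/3, p2=1/3):
    """Approximate power of the log-rank test given the observed number
    of events (Schoenfeld approximation, 2:1 allocation by default)."""
    return norm.cdf(np.sqrt(events * p1 * p2) * abs(np.log(hr))
                    - norm.ppf(1 - alpha / 2))

print(round(logrank_power(312), 2))  # ~0.80: the original design target
print(round(logrank_power(186), 2))  # ~0.58: after curtailed enrollment
```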

Despite this limitation, the study found that those treated with remdesivir within 10 days of symptom onset had a numerically faster time (although not statistically significant) to clinical improvement. This raises an interesting question: whether remdesivir administration early in the course of COVID-19 could improve clinical outcomes, a question that warrants further investigation in an adequately powered trial. Also, data from this study provide evidence that intravenous remdesivir administration is likely safe in adults during the treatment period, although the long-term drug effects, as well as the safety profile in pediatric patients, remain unknown at this time.

While the study reported by Wang and colleagues was underpowered and is thus inconclusive, several other ongoing RCTs are evaluating the potential clinical benefit of remdesivir treatment in patients hospitalized with COVID-19. On the date of online publication of this report in The Lancet, the National Institutes of Health (NIH) published a news release summarizing preliminary findings from the Adaptive COVID-19 Treatment Trial (ACTT), which showed positive effects of remdesivir on clinical recovery from advanced COVID-19.4 The ACTT, the first RCT launched in the United States to evaluate experimental treatment for COVID-19, included 1063 hospitalized participants with advanced COVID-19 and lung involvement. Participants who were administered remdesivir had a 31% faster time to recovery compared to those in the placebo group (median time to recovery, 11 days vs 15 days, respectively; P < 0.001) and a nearly statistically significant improvement in survival (mortality rate, 8.0% vs 11.6%, respectively; P = 0.059). In response to these findings, the US Food and Drug Administration (FDA) issued an emergency use authorization for remdesivir on May 1, 2020, for the treatment of suspected or laboratory-confirmed COVID-19 in adults and children hospitalized with severe disease.5 While the findings noted in the NIH news release are very encouraging and provide the first evidence of a potentially beneficial antiviral treatment for severe COVID-19 in humans, the scientific community awaits the peer-reviewed publication of the ACTT to better assess the safety and effectiveness of remdesivir therapy and determine the trial's implications for the management of COVID-19.

Applications for Clinical Practice

The discovery of an effective pharmacologic intervention for COVID-19 is of utmost urgency. While the present study was unable to answer the question of whether remdesivir is effective in improving clinical outcomes in patients with severe COVID-19, other ongoing or completed (ie, ACTT) studies will likely address this knowledge gap in the coming months. The FDA’s emergency use authorization for remdesivir provides a glimpse into this possibility.

–Katerina Oikonomou, MD, Brookdale Department of Geriatrics & Palliative Medicine, Icahn School of Medicine at Mount Sinai, New York, NY

–Fred Ko, MD

References

1. Tang W, Cao Z, Han M, et al. Hydroxychloroquine in patients with COVID-19: an open-label, randomized, controlled trial [published online April 14, 2020]. medRxiv.org. doi:10.1101/2020.04.10.20060558.

2. Cao B, Wang Y, Wen D, et al. A trial of lopinavir–ritonavir in adults hospitalized with severe COVID-19. N Engl J Med. 2020;382:1787-1799. 

3. Grein J, Ohmagari N, Shin D, et al. Compassionate use of remdesivir for patients with severe COVID-19 [published online April 10, 2020]. N Engl J Med. doi:10.1056/NEJMoa2007016.

4. NIH clinical trial shows remdesivir accelerates recovery from advanced COVID-19. www.niaid.nih.gov/news-events/nih-clinical-trial-shows-remdesivir-accelerates-recovery-advanced-covid-19. Accessed May 9, 2020.

5. Coronavirus (COVID-19) update: FDA issues Emergency Use Authorization for potential COVID-19 treatment. www.fda.gov/news-events/press-announcements/coronavirus-covid-19-update-fda-issues-emergency-use-authorization-potential-covid-19-treatment. Accessed May 9, 2020.

Article PDF
Issue
Journal of Clinical Outcomes Management - 27(3)
Publications
Topics
Page Number
104-106
Sections
Article PDF
Article PDF

Study Overview

Objective. To assess the efficacy, safety, and clinical benefit of remdesivir in hospitalized adults with confirmed pneumonia due to severe SARS-CoV-2 infection.

Design. Randomized, investigator-initiated, placebo-controlled, double-blind, multicenter trial.

Setting and participants. The trial took place between February 6, 2020 and March 12, 2020, at 10 hospitals in Wuhan, China. Study participants included adult patients (aged ≥ 18 years) admitted to hospital who tested positive for SARS-CoV-2 by reverse transcription polymerase chain reaction assay and had the following clinical characteristics: radiographic evidence of pneumonia; hypoxia with oxygen saturation ≤ 94% on room air or a ratio of arterial oxygen partial pressure to fractional inspired oxygen ≤ 300 mm Hg; and symptom onset to enrollment ≤ 12 days. Some of the exclusion criteria for participation in the study were pregnancy or breast feeding, liver cirrhosis, abnormal liver enzymes ≥ 5 times the upper limit of normal, severe renal impairment or receipt of renal replacement therapy, plan for transfer to a non-study hospital, and enrollment in a trial for COVID-19 within the previous month.

Intervention. Participants were randomized in a 2:1 ratio to the remdesivir group or the placebo group and were administered either intravenous infusions of remdesivir (200 mg on day 1 followed by 100 mg daily on days 2-10) or the same volume of placebo for 10 days. Clinical and safety data assessed included laboratory testing, electrocardiogram, and medication adverse effects. Testing of oropharyngeal and nasopharyngeal swab samples, anal swab samples, sputum, and stool was performed for viral RNA detection and quantification on days 1, 3, 5, 7, 10, 14, 21, and 28.

Main outcome measures. The primary endpoint of this study was time to clinical improvement within 28 days after randomization. Clinical improvement was defined as a 2-point reduction in participants’ admission status on a 6-point ordinal scale (1 = discharged or clinical recovery, 6 = death) or live discharge from hospital, whichever came first. Secondary outcomes included all-cause mortality at day 28 and duration of hospital admission, oxygen support, and invasive mechanical ventilation. Virological measures and safety outcomes ascertained included treatment-emergent adverse events, serious adverse events, and premature discontinuation of remdesivir.

The sample size estimate for the original study design was a total of 453 patients (302 in the remdesivir group and 151 in the placebo group). This sample size would provide 80% power, assuming a hazard ratio (HR) of 1.4 comparing remdesivir to placebo, and corresponding to a change in time to clinical improvement of 6 days. The analysis of primary outcome was performed on an intention-to-treat basis. Time to clinical improvement within 28 days was assessed with Kaplan-Meier plots.

Main results. A total of 255 patients were screened, of whom 237 were enrolled and randomized to remdesivir (158) or placebo (79) group. Of the participants in the remdesivir group, 155 started study treatment and 150 completed treatment per protocol. For the participants in the placebo group, 78 started study treatment and 76 completed treatment per-protocol. Study enrollment was terminated after March 12, 2020, before attaining the prespecified sample size, because no additional patients met study eligibility criteria due to various public health measures implemented in Wuhan. The median age of participants was 65 years (IQR, 56-71), the majority were men (56% in remdesivir group vs 65% in placebo group), and the most common comorbidities included hypertension, diabetes, and coronary artery disease. Median time from symptom onset to study enrollment was 10 days (IQR, 9-12). The time to clinical improvement between treatments (21 days for remdesivir group vs 23 days for placebo group) was not significantly different (HR, 1.23; 95% confidence interval [CI], 0.87-1.75). In addition, in participants who received treatment within 10 days of symptom onset, those who were administered remdesivir had a nonsignificant (HR, 1.52; 95% CI, 0.95-2.43) but faster time (18 days) to clinical improvement, compared to those administered placebo (23 days). Moreover, treatment with remdesivir versus placebo did not lead to differences in secondary outcomes (eg, 28-day mortality and duration of hospital stay, oxygen support, and invasive mechanical ventilation), changes in viral load over time, or adverse events between the groups.

 

 

Conclusion. This study found that, compared with placebo, intravenous remdesivir did not significantly improve the time to clinical improvement, mortality, or time to clearance of SARS-CoV-2 in hospitalized adults with severe COVID-19. A numeric reduction in time to clinical improvement with early remdesivir treatment (ie, within 10 days of symptom onset) that approached statistical significance was observed in this underpowered study.

Commentary

Within a few short months since its emergence. SARS-CoV-2 infection has caused a global pandemic, posing a dire threat to public health due to its adverse effects on morbidity (eg, respiratory failure, thromboembolic diseases, multiorgan failure) and mortality. To date, no pharmacologic treatment has been shown to effectively improve clinical outcomes in patients with COVID-19. Multiple ongoing clinical trials are being conducted globally to determine potential therapeutic treatments for severe COVID-19. The first clinical trials of hydroxychloroquine and lopinavir-ritonavir, agents traditionally used for other indications, such as malaria and HIV, did not show a clear benefit in COVID-19.1,2 Remdesivir, a nucleoside analogue prodrug, is a broad-spectrum antiviral agent that was previously used for treatment of Ebola and has been shown to have inhibitory effects on pathogenic coronaviruses. The study reported by Wang and colleagues was the first randomized controlled trial (RCT) aimed at evaluating whether remdesivir improves outcomes in patients with severe COVID-19. Thus, the worsening COVID-19 pandemic, coupled with the absence of a curative treatment, underscore the urgency of this trial.

The study was grounded on observational data from several recent case reports and case series centering on the potential efficacy of remdesivir in treating COVID-19.3 The study itself was designed well (ie, randomized, placebo-controlled, double-blind, multicenter) and carefully implemented (ie, high protocol adherence to treatments, no loss to follow-up). The principal limitation of this study was its inability to reach the estimated statistical power of study. Due to successful epidemic control in Wuhan, which led to marked reductions in hospital admission of patients with COVID-19, and implementation of stringent termination criteria per the study protocol, only 237 participants were enrolled, instead of the 453, as specified by the sample estimate. This corresponded to a reduction of statistical power from 80% to 58%. Due to this limitation, the study was underpowered, rendering its findings inconclusive.

Despite this limitation, the study found that those treated with remdesivir within 10 days of symptom onset had a numerically faster time (although not statistically significant) to clinical improvement. This leads to an interesting question: whether remdesivir administration early in COVID-19 course could improve clinical outcomes, a question that warrants further investigation by an adequately powered trial. Also, data from this study provided evidence that intravenous remdesivir administration is likely safe in adults during the treatment period, although the long-term drug effects, as well as the safety profile in pediatric patients, remain unknown at this time.

While the study reported by Wang and colleagues was underpowered and is thus inconclusive, several other ongoing RCTs are evaluating the potential clinical benefit of remdesivir treatment in patients hospitalized with COVID-19. On the date of online publication of this report in The Lancet, the National Institutes of Health (NIH) published a news release summarizing preliminary findings from the Adaptive COVID-19 Treatment Trial (ACTT), which showed positive effects of remdesivir on clinical recovery from advanced COVID-19.4 The ACTT, the first RCT launched in the United States to evaluate an experimental treatment for COVID-19, included 1063 hospitalized participants with advanced COVID-19 and lung involvement. Participants who were administered remdesivir had a 31% faster time to recovery than those in the placebo group (median time to recovery, 11 days vs 15 days; P < 0.001) and showed a trend toward improved survival that approached statistical significance (mortality rate, 8.0% vs 11.6%; P = 0.059). In response to these findings, the US Food and Drug Administration (FDA) issued an emergency use authorization for remdesivir on May 1, 2020, for the treatment of suspected or laboratory-confirmed COVID-19 in adults and children hospitalized with severe disease.5 While the findings from the NIH news release are encouraging and provide the first evidence of a potentially beneficial antiviral treatment for severe COVID-19 in humans, the scientific community awaits the peer-reviewed publication of the ACTT to better assess the safety and effectiveness of remdesivir therapy and determine the trial's implications for the management of COVID-19.

Applications for Clinical Practice

The discovery of an effective pharmacologic intervention for COVID-19 is of utmost urgency. While the present study was unable to answer the question of whether remdesivir is effective in improving clinical outcomes in patients with severe COVID-19, other ongoing or completed (ie, ACTT) studies will likely address this knowledge gap in the coming months. The FDA’s emergency use authorization for remdesivir provides a glimpse into this possibility.

–Katerina Oikonomou, MD, Brookdale Department of Geriatrics & Palliative Medicine, Icahn School of Medicine at Mount Sinai, New York, NY

–Fred Ko, MD

References

1. Tang W, Cao Z, Han M, et al. Hydroxychloroquine in patients with COVID-19: an open-label, randomized, controlled trial [published online April 14, 2020]. medRxiv.org. doi:10.1101/2020.04.10.20060558.

2. Cao B, Wang Y, Wen D, et al. A trial of lopinavir–ritonavir in adults hospitalized with severe COVID-19. N Engl J Med. 2020;382:1787-1799. 

3. Grein J, Ohmagari N, Shin D, et al. Compassionate use of remdesivir for patients with severe COVID-19 [published online April 10, 2020]. N Engl J Med. doi:10.1056/NEJMoa2007016.

4. NIH clinical trial shows remdesivir accelerates recovery from advanced COVID-19. www.niaid.nih.gov/news-events/nih-clinical-trial-shows-remdesivir-accelerates-recovery-advanced-covid-19. Accessed May 9, 2020.

5. Coronavirus (COVID-19) update: FDA issues Emergency Use Authorization for potential COVID-19 treatment. www.fda.gov/news-events/press-announcements/coronavirus-covid-19-update-fda-issues-emergency-use-authorization-potential-covid-19-treatment. Accessed May 9, 2020.

Higher Step Volume Is Associated with Lower Mortality in Older Women

Article Type
Changed
Thu, 04/23/2020 - 12:41
Display Headline
Higher Step Volume Is Associated with Lower Mortality in Older Women

Study Overview

Objective. To evaluate the association of number of steps taken per day and stepping intensity with all-cause mortality in older women.

Design. This was a prospective cohort study of US women participating in the Women’s Health Study (WHS). Participants wore an accelerometer device (ActiGraph GT3X+, ActiGraph Corp, Pensacola, FL) on the hip during waking hours for 7 consecutive days between 2011 and 2015. The accelerometer data were collected at 30 Hz and aggregated into 60-second, time-stamped epochs. Data from participants who were adherent with device wear (defined as ≥ 10 hours/day of wear on ≥ 4 days) were used in an analysis conducted between 2018 and 2019. The exposure variables were steps taken per day and measures of stepping intensity (ie, peak 1-minute cadence; peak 30-minute cadence; maximum 5-minute cadence; and time spent at a stepping rate of ≥ 40 steps/minute, reflecting purposeful steps).
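
As an illustration of how such exposure variables can be computed, the sketch below derives each metric from minute-level step counts. The operational definitions (eg, peak 30-minute cadence as the mean of the 30 highest, not necessarily consecutive, minutes of the day) are assumptions based on common accelerometry conventions rather than the study's analysis code.

```python
# Minimal sketch: deriving step volume and stepping-intensity metrics from
# minute-level step counts. Definitions are assumed conventions, not the
# study's analysis code.
import numpy as np

def cadence_metrics(steps_per_minute):
    s = np.asarray(steps_per_minute, dtype=float)
    top30 = np.sort(s)[-30:]                                 # 30 highest minutes
    rolling5 = np.convolve(s, np.ones(5) / 5, mode="valid")  # 5-minute moving mean
    return {
        "steps_per_day": int(s.sum()),
        "peak_1min_cadence": s.max(),               # single best minute
        "peak_30min_cadence": top30.mean(),         # mean of the 30 best minutes
        "max_5min_cadence": rolling5.max(),         # best 5 consecutive minutes
        "minutes_at_40plus": int((s >= 40).sum()),  # purposeful-stepping time
    }

rng = np.random.default_rng(0)
waking_day = rng.poisson(lam=6, size=16 * 60)  # 16 hours of mostly incidental steps
print(cadence_metrics(waking_day))
```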

Setting and participants. In total, 18,289 women participated in this study. Of these, 17,708 wore and returned their accelerometer devices, and data were downloaded successfully from 17,466 devices. Compliant wearers of the device (≥ 10 hours/day of wear on ≥ 4 days) included 16,741 participants (96% of all downloaded device data).

Main outcome measure. All-cause mortality as ascertained through the National Death Index or confirmed by medical records and death certificates.

Main results. In this cohort of 16,741 women, the average age at baseline was 72.0 ± 5.7 years (range, 62-101 years) and the mean step count was 5499 per day (median, 5094 steps/day) during the 7-day data capture period between 2011 and 2015. Not stepping (0 steps/minute) accounted for 51.4% of recorded time, incidental steps (1-39 steps/minute) for 45.5%, and purposeful steps (≥ 40 steps/minute) for 3.1%. The mean follow-up period was 4.3 years, during which 504 participants died. The median steps per day across quartiles were 2718 (lowest), 4363, 5905, and 8442 (highest). The corresponding confounder-adjusted hazard ratios (HRs) for mortality were 1.00 (reference; lowest quartile), 0.59 (95% confidence interval [CI], 0.47-0.75), 0.54 (95% CI, 0.41-0.72), and 0.42 (95% CI, 0.30-0.60; highest quartile), respectively (P < 0.01). In spline analyses, a higher mean step count per day, up to approximately 7500 steps/day, corresponded with a progressive and steady decline in mortality HRs. Similar results were observed in sensitivity analyses that minimized reverse causation bias. While adjusted analyses of stepping intensity showed an inverse association with mortality rates, these associations were no longer significant after accounting for steps per day. Specifically, adjusted HRs comparing the highest to the lowest quartile were 0.87 (95% CI, 0.68-1.11) for peak 1-minute cadence; 0.86 (95% CI, 0.65-1.13) for peak 30-minute cadence; 0.80 (95% CI, 0.62-1.05) for maximum 5-minute cadence; and 1.27 (95% CI, 0.96-1.68) for time spent at a stepping rate of ≥ 40 steps/minute.
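
A schematic of the quartile analysis behind these hazard ratios is sketched below using the lifelines library on synthetic data (not the WHS cohort); a single covariate stands in for the study's much fuller set of confounders, and the spline analysis is omitted.

```python
# Minimal sketch with synthetic data (NOT the WHS cohort): adjusted Cox model
# with step-count quartiles, lowest quartile as the reference group.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
steps = rng.gamma(shape=2.5, scale=2200, size=n)  # skewed steps/day distribution
age = rng.normal(72, 6, size=n)
# Invented survival times in which hazard falls as step volume rises
time = rng.exponential(scale=30, size=n) * np.exp(5e-5 * steps)
df = pd.DataFrame({
    "follow_up_years": np.minimum(time, 4.3),     # administrative censoring at 4.3 y
    "died": (time < 4.3).astype(int),
    "age": age,
    "quartile": pd.qcut(steps, 4, labels=["q1", "q2", "q3", "q4"]),
})
X = pd.get_dummies(df, columns=["quartile"], drop_first=True, dtype=float)  # q1 = ref

cph = CoxPHFitter()
cph.fit(X, duration_col="follow_up_years", event_col="died")
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```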

Conclusion. Older women who took approximately 4400 steps per day had lower all-cause mortality rates during a follow-up period of 4.3 years compared to those who took approximately 2700 steps each day. Progressive reduction in mortality rates was associated with increased steps per day before leveling at about 7500 steps/day. Stepping intensity, when accounting for number of steps taken per day, was not associated with reduction in mortality rates in older women.

Commentary

The health and mortality benefits of exercise are well recognized. The 2018 Department of Health and Human Services Physical Activity Guidelines (DHHS-PAG) recommend that adults should do at least 150 to 300 minutes of moderate-intensity aerobic physical activity per week, or 75 to 150 minutes of vigorous-intensity aerobic physical activity per week, in addition to doing muscle-strengthening activities on 2 or more days a week.1 Importantly, the guidelines emphasize that moving more and sitting less benefit nearly everyone, and note that measures of steps as a metric of ambulation can further promote translation of research into public health recommendations for exercise interventions. Despite this recognition, there is limited information centering on the number of daily steps (step volume) and the intensity of stepping that are needed to achieve optimal health outcomes in older adults. The study reported by Lee and colleagues adds new knowledge regarding the relationship between step volume and intensity and mortality in older women.

To date, only a handful of studies conducted outside of the United States have investigated the association between mortality and objectively measured step volume as determined by pedometer or accelerometer.2-4 While these studies observed that higher step counts are associated with lower mortality rates during follow-up periods of 5 to 10 years, their sample sizes were smaller and the study populations were different from those included in the study reported by Lee and colleagues. For example, the cohort from the United Kingdom included only men,2 and the participants in the Australian study were considerably younger, with a mean age of 59 years.4 In the current study, the largest of its kind thus far, it was observed that older women in the United States who take about 4400 steps a day have a lower mortality rate compared to those who take about 2700 steps a day. Moreover, the benefit of increased step volume on mortality progressively increases until plateauing at about 7500 steps per day. On the other hand, stepping intensity does not appear to lower mortality when step volume is accounted for. These results are important in that they add novel evidence that in older women, a patient population that tends to be sedentary, increased step volume (steps per day) but not stepping intensity (how quickly steps are taken) is associated with a reduction in mortality. Thus, these findings help to better characterize steps as a metric of ambulation in sedentary older adults per DHHS-PAG and add to the evidence necessary to translate this line of research into public health recommendations and programs.

While the health benefit of regular physical activity is well known and has been brought to the foreground by the DHHS-PAG, only a small percentage of older adults engage in the recommended amounts and types of exercise. In other words, finding motivation to exercise is hard. Thus, identifying practical methods to facilitate behavioral changes that increase and sustain physical activity in sedentary older adults is essential to promoting health in this population. The use of wearable technologies such as fitness trackers and smartphone apps, devices that are now widely used, has shown promise for measuring and encouraging physical activity. The study by Lee and colleagues adds to this notion and further highlights the potential significance of step volume for mortality benefits in older women. Future research in fitness technology should therefore aim to integrate behavior change techniques (such as goal setting, feedback rewards, and action planning) with physical activity levels in order to improve health outcomes in older adults.5

In this study, the large sample size (> 16,000 participants), high compliance with accelerometer use (96%), and reliable, continuous data capture (a built-in device feature) provide a large and complete dataset. This dataset, a major strength of the study, allowed the investigators to adequately control for potential confounders of physical activity, such as history of smoking, alcohol use, diet, and self-rated health, and thereby statistically minimize biases that are common in observational studies. However, some limitations inherent to the observational design are noted. For instance, the observed association between step volume and mortality is correlational rather than causal, and a one-time assessment of steps taken over 7 consecutive days (ie, the exposure) may not accurately reflect participants' step volume and intensity over the 4.3 years of follow-up. Also, participants in the WHS are predominantly white, have higher socioeconomic status, and are more physically active than a national sample in the United States; therefore, caution should be exercised when making inferences to the general population.

Applications for Clinical Practice

Taking more steps each day, up to about 7500 steps per day, is associated with lower mortality in older women. This finding can help inform the discussion when clinicians offer physical activity recommendations to older sedentary patients.

—Fred Ko, MD

References

1. Piercy KL, Troiano RP, Ballard RM, et al. The physical activity guidelines for Americans. JAMA. 2018;320:2020-2028.

2. Jefferis BJ, Parsons TJ, Sartini C, et al. Objectively measured physical activity, sedentary behaviour and all-cause mortality in older men: does volume of activity matter more than pattern of accumulation? Br J Sports Med. 2019;53:1013-1020.

3. Yamamoto N, Miyazaki H, Shimada M, et al. Daily step count and all-cause mortality in a sample of Japanese elderly people: a cohort study. BMC Public Health. 2018;18:540.

4. Dwyer T, Pezic A, Sun C, et al. Objectively measured daily steps and subsequent long term all-cause mortality: the Tasped prospective cohort study. PLoS One. 2015;10:e0141274.

5. Sullivan AN, Lachman ME. Behavior change with fitness technology in sedentary adults: a review of the evidence for increasing physical activity. Front Public Health. 2016;4:289.

Does Vitamin D Supplementation Improve Lower Extremity Power and Function in Community-Dwelling Older Adults?

Article Type
Changed
Thu, 04/23/2020 - 15:18
Display Headline
Does Vitamin D Supplementation Improve Lower Extremity Power and Function in Community-Dwelling Older Adults?

Study Overview

Objective. To test the effect of 12 months of vitamin D supplementation on lower-extremity power and function in older community-dwelling adults screened for low serum 25-hydroxyvitamin D (25(OH)D).

Design. A single-center, double-blind, randomized placebo-controlled study in which participants were assigned to 800 IU of vitamin D3 supplementation or placebo daily and were followed over a total period of 12 months.

Setting and participants. A total of 100 community-dwelling men and women aged ≥ 60 years with serum 25(OH)D ≤ 20 ng/mL at screening participated. Participants were prescreened by phone and were excluded if they met any of the following criteria: vitamin D supplement use > 600 IU/day (for age 60-70 years) or > 800 IU/day (for age ≥ 71 years); vitamin D injection within the previous 3 months; > 2 falls or 1 fall with injury in the past year; use of a cane, walker, or other indoor walking aid; history of kidney stones within the past 3 years; hypercalcemia (serum calcium > 10.8 mg/dL); renal dysfunction (glomerular filtration rate < 30 mL/min); history of liver disease, sarcoidosis, lymphoma, dysphagia, or other gastrointestinal disorder; neuromuscular disorder affecting lower-extremity function; hip replacement within the past year; cancer treatment in the past 3 years; treatment with thiazide diuretics > 37.5 mg, teriparatide, denosumab, or bisphosphonates within the past 2 years; oral steroids (for > 3 weeks in the past 6 months); and use of fat malabsorption products or anticonvulsive therapy.

Main outcome measures. The primary outcome was leg extensor power assessed using a computer-interfaced bilateral Keiser pneumatic leg press. Secondary outcomes to measure physical function included: (1) the backward tandem walk test (an indicator of balance and postural control during movement1); (2) Short Physical Performance Battery (SPPB) testing, which includes a balance assessment (ability to stand with feet side-by-side, semi-tandem, and tandem for 10 seconds each), a timed 4-m walk, and a chair stand test (time to complete 5 repeated chair stands); (3) stair climbing (ie, time to climb 10 steps, as a measure of knee extensor strength and functional capacity); and (4) handgrip strength (using a dynamometer). Lean tissue mass was assessed by dual-energy X-ray absorptiometry (DEXA). Finally, other measures included serum total 25(OH)D levels measured at baseline, 4, 8, and 12 months, as well as 24-hour urine collection for urea-nitrogen and creatinine measurements.
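
Because the SPPB features prominently here, a brief sketch of its 0-12 scoring may be helpful; the cut points below are the commonly cited ones from Guralnik and colleagues16 and are our assumption rather than thresholds taken from this trial's protocol.

```python
# Minimal sketch of SPPB scoring (0-12). Cut points are the commonly cited
# Guralnik thresholds, assumed here, not taken from this trial's protocol.
def sppb_score(side_by_side_s, semi_tandem_s, tandem_s, walk_4m_s, chair5_s):
    # Balance (0-4): progress through stands, each held up to 10 s
    if side_by_side_s < 10:
        balance = 0
    elif semi_tandem_s < 10:
        balance = 1
    elif tandem_s < 3:
        balance = 2
    elif tandem_s < 10:
        balance = 3
    else:
        balance = 4
    # Gait speed, 4-m usual pace (0-4); None means unable to complete
    if walk_4m_s is None:
        gait = 0
    elif walk_4m_s > 8.70:
        gait = 1
    elif walk_4m_s > 6.20:
        gait = 2
    elif walk_4m_s > 4.81:
        gait = 3
    else:
        gait = 4
    # 5 repeated chair stands (0-4); None or > 60 s means unable
    if chair5_s is None or chair5_s > 60:
        chair = 0
    elif chair5_s >= 16.70:
        chair = 1
    elif chair5_s >= 13.70:
        chair = 2
    elif chair5_s >= 11.20:
        chair = 3
    else:
        chair = 4
    return balance + gait + chair

print(sppb_score(10, 10, 4.5, 5.9, 12.8))  # 3 + 3 + 3 = 9 (mild-moderate limitation)
```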

Main results. Of the 2289 individuals screened for the study, 100 met eligibility criteria and were randomized to receive either 800 IU of vitamin D3 supplementation daily (n = 49) or placebo (n = 51). Three patients (2 in the vitamin D group and 1 in the placebo group) were lost to follow-up. The mean age of all participants was 69.6 ± 6.9 years. The male:female ratio was 66:34 in the vitamin D group and 63:37 in the placebo group, and 75% versus 82% of participants, respectively, were Caucasian. Mean body mass index was 28.2 ± 7.0 kg/m2 and mean serum 25(OH)D was 20.2 ± 6.7 ng/mL. At the end of the study (12 months), 70% of participants given vitamin D supplementation had 25(OH)D levels ≥ 30 ng/mL and all had levels ≥ 20 ng/mL. In the placebo group, the serum 25(OH)D level was ≥ 20 ng/mL in 54% of participants and ≥ 30 ng/mL in 6%. The mean serum 25(OH)D level increased to 32.5 ± 5.1 ng/mL in the vitamin D-supplemented group, but no significant change was found in the placebo group (treatment × time, P < 0.001). Overall, serum 1,25(OH)2D3 levels did not differ between the 2 groups over the intervention period (time, P = 0.49; treatment × time, P = 0.27). Dietary intake of vitamin D, calcium, nitrogen, and protein did not differ or change over time between the 2 groups. The change in leg press power, function, and strength did not differ between the groups over 12 months (all treatment × time P values ≥ 0.60). A total of 27 falls were reported (14 in the vitamin D group versus 9 in the placebo group), of which 9 were associated with injuries. There was no significant change in lean body mass at the end of the study period in either group (treatment × time, P = 0.98).
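
The treatment × time tests reported above correspond to an interaction term in a longitudinal model. The following is a minimal sketch on synthetic data (not the trial data) using a random-intercept linear mixed model in statsmodels; the investigators' exact modeling choices may differ.

```python
# Minimal sketch with synthetic data (NOT the trial data): testing a
# treatment-by-time interaction with a random-intercept mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n, months = 100, [0, 4, 8, 12]                  # hypothetical visit schedule
subj = np.repeat(np.arange(n), len(months))
month = np.tile(months, n)
treated = np.repeat(rng.integers(0, 2, size=n), len(months))
baseline = rng.normal(550, 80, size=n)[subj]    # subject-level leg-press power (W)
leg_power = (baseline + 0.5 * month             # small shared drift over time
             + 0.0 * treated * month            # null treatment-by-time effect
             + rng.normal(0, 25, size=n * len(months)))
df = pd.DataFrame({"subject": subj, "month": month,
                   "treated": treated, "leg_power": leg_power})

fit = smf.mixedlm("leg_power ~ treated * month", df, groups=df["subject"]).fit()
print(fit.summary())  # the treated:month coefficient is the interaction test
```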

Conclusion. In community-dwelling older adults with vitamin D deficiency (≤ 20 ng/mL), 12-month daily supplementation with 800 IU of vitamin D3 resulted in sufficient increases in serum 25(OH)D levels, but did not improve lower-extremity power, strength, or lean mass.

Commentary

Vitamin D deficiency is common in older adults (prevalence of about 41% among US adults ≥ 65 years old, according to Forrest et al2) and is likely due to dietary deficiency, reduced sun exposure (lifestyle), and decreased intestinal calcium absorption. As such, vitamin D deficiency has historically been a topic of debate and interest in geriatric medicine, as it relates to muscle weakness, which in turn increases susceptibility to falls.3 Interestingly, vitamin D receptors are expressed in human skeletal muscle,4 and in one study, 3 months of vitamin D supplementation led to an increase in type II skeletal muscle fibers in older women.5 Similarly, results from a meta-analysis of 5 randomized controlled trials (RCTs)6 showed that vitamin D supplementation may reduce fall risk in older adults by 22% (corrected odds ratio, 0.78; 95% confidence interval, 0.64-0.92). In keeping with this general theme of vitamin D supplementation yielding beneficial effects on clinical outcomes, clinicians have long accepted and practiced routine vitamin D supplementation in caring for older adults.

In recent years, the role of vitamin D supplementation in primary care has become controversial,7 as reflected in a paradigm shift away from routine supplementation for fall and fracture prevention in clinical practice.8 In a recent meta-analysis of 33 RCTs in older community-dwelling adults, supplementation with vitamin D, with or without calcium, did not reduce hip fractures or the total number of fractures.9 Moreover, the United States Preventive Services Task Force (USPSTF) recently published updated recommendations on the use of vitamin D supplementation for primary prevention of fractures10 and prevention of falls11 in community-dwelling adults. In these updated recommendations, the USPSTF indicated that insufficient evidence exists to recommend vitamin D supplementation to prevent fractures in men and premenopausal women, and it recommends against vitamin D supplementation for prevention of falls. Finally, the USPSTF recommends against low-dose vitamin D (400 IU or less) supplementation for primary prevention of fractures in community-dwelling, postmenopausal women.10 Nevertheless, these statements are not applicable to individuals with a prior history of osteoporotic fractures, increased risk of falls, or a diagnosis of vitamin D deficiency or osteoporosis. Therefore, vitamin D supplementation for prevention of falls and fractures should be practiced with caution.

Vitamin D supplementation is no longer routinely recommended for fall and fracture prevention. However, if we believe that poor lower extremity muscle strength is a risk factor for falls,12 then the question of whether vitamin D has a beneficial role in improving lower extremity strength in older adults needs to be addressed. Results regarding the effect of vitamin D supplementation on muscle function have so far been mixed. For example, in a randomized, double-blinded, placebo-controlled trial of 160 postmenopausal women with low vitamin D level (< 20 ng/mL), vitamin D3 supplementation at 1000 IU/day for 9 months showed a significant increase in lower extremity muscle strength.13 However, in another randomized double-blinded, placebo-controlled trial of 130 men aged 65 to 90 years with low vitamin D level (< 30 ng/mL) and an SPPB score of ≤ 9 (mild-moderate limitation in mobility), daily supplementation with 4000 IU of vitamin D3 for 9 months did not result in improved SPPB score or gait speed.14 In the study reported by Shea et al, the authors showed that 800 IU of daily vitamin D supplementation (consistent with the Institute of Medicine [IOM] recommendations for older adults15) in community-dwelling older adults with vitamin D deficiency (< 20 ng/mL) did not improve lower extremity muscle strength. This finding is significant in that it adds further evidence to support the rationale against using vitamin D supplementation for the sole purpose of improving lower extremity muscle function in older adults with vitamin D deficiency.

Notable strengths of this study include its randomized, double-blind, placebo-controlled design testing the IOM-recommended dose of daily vitamin D supplementation for older adults. In addition, compared with some of the prior studies mentioned above, the study population included both men and women, although the final sample skewed male. Moreover, participants were followed for a sufficient amount of time (1 year), with excellent adherence (only 3 participants were lost to follow-up) and corresponding improvement in vitamin D levels. Finally, the use of the SPPB to assess physical function should be commended, as this assessment is a well-validated measure of lower extremity function whose scaled scores predict poor outcomes.16 However, limitations include the aforementioned predominance of male and Caucasian participants in both groups, as well as discrepancies between the measurement methods for serum vitamin D levels (ie, finger-stick cards versus clinical laboratory measurement) that may have underestimated actual serum 25(OH)D levels.

Applications for Clinical Practice

While the null findings from the study by Shea and colleagues are applicable to healthier community-dwelling older adults, they may not be generalizable to more frail older patients, given their increased risk for falls and high vulnerability to adverse outcomes. Thus, further studies that account for baseline sarcopenia, frailty, and other fall-risk factors (eg, polypharmacy) are needed to better evaluate the value of vitamin D supplementation in this most vulnerable population.

Caroline Park, MD, PhD, and Fred Ko, MD
Icahn School of Medicine at Mount Sinai, New York, NY

References

1. Husu P, Suni J, Pasanen M, Miilunpalo S. Health-related fitness tests as predictors of difficulties in long-distance walking among high-functioning older adults. Aging Clin Exp Res. 2007;19:444-450.

2. Forrest KYZ, Stuhldreher WL. Prevalence and correlates of vitamin D deficiency in US adults. Nutr Res. 2011;31:48-54.

3. Bischoff-Ferrari HA, Giovannucci E, Willett WC, et al. Estimation of optimal serum concentrations of 25-hydroxyvitamin D for multiple health outcomes. Am J Clin Nutr. 2006;84:1253.

4. Simpson RU, Thomas GA, Arnold AJ. Identification of 1,25-dihydroxyvitamin-D3 receptors and activities in muscle. J Biol Chem. 1985;260:8882-8891.

5. Sorensen OH, Lund BI, Saltin B, et al. Myopathy in bone loss of aging: improvement by treatment with 1-alpha-hydroxycholecalciferol and calcium. Clin Sci. 1979;56:157-161.

6. Bischoff-Ferrari HA, Dawson-Hughes B, Willett WC, et al. Effect of vitamin D on falls: a meta-analysis. JAMA. 2004;291:1999-2006.

7. Lewis JR, Sim M, Daly RM. The vitamin D and calcium controversy: an update. Curr Opin Rheumatol. 2019;31:91-97.

8. Schwenk T. No value for routine vitamin D supplementation. NEJM Journal Watch. December 26, 2018.

9. Zhao JG, Zeng XT, Wang J, Liu L. Association between calcium or vitamin D supplementation and fracture incidence in community-dwelling older adults: a systematic review and meta-analysis. JAMA. 2017;318:2466-2482.

10. Grossman DC, Curry SJ, Owens DK, et al. Vitamin D, calcium, or combined supplementation for the primary prevention of fractures in community-dwelling adults: US Preventive Services Task Force Recommendation Statement. JAMA. 2018;319:1592-1599.

11. Grossman DC, Curry SJ, Owens DK, et al. Interventions to prevent falls in community-dwelling older adults: US Preventive Services Task Force Recommendation Statement. JAMA. 2018;319:1696-1704.

12. Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988;319:1701-1707.

13. Cangussu LM, Nahas-Neto J, Orsatti CL, et al. Effect of vitamin D supplementation alone on muscle function in postmenopausal women: a randomized, double-blind, placebo-controlled clinical trial. Osteoporos Int. 2015;26:2413-2421.

14. Levis S, Gomez-Marin O. Vitamin D and physical function in sedentary older men. J Am Geriatr Soc. 2017;65:323-331.

15. Ross AC, Taylor CL, Yaktine AL, Del Valle HB, eds. Dietary Reference Intakes for Calcium and Vitamin D. Institute of Medicine (US) Committee to Review Dietary Reference Intakes for Vitamin D and Calcium. Washington, DC: National Academies Press; 2011.

16. Guralnik JM, Ferrucci L, Simonsick EM, et al. Lower-extremity function in persons over the age of 70 years as a predictor of subsequent disability. N Engl J Med. 1995;332:556-561.
