Slot System
Featured Buckets
Featured Buckets Admin

Overall Survival Gain With Adding Darolutamide to ADT and Docetaxel in Metastatic, Hormone-Sensitive Prostate Cancer

Article Type
Changed
Thu, 06/02/2022 - 08:19
Display Headline
Overall Survival Gain With Adding Darolutamide to ADT and Docetaxel in Metastatic, Hormone-Sensitive Prostate Cancer

Study Overview

Objective: To evaluate whether the addition of the potent androgen-receptor inhibitor (ARA) darolutamide to the standard doublet androgen-deprivation therapy (ADT) and docetaxel in metastatic, hormone-sensitive prostate cancer (mHSPC) would increase survival.

Design: A randomized, double-blind, placebo-controlled, multicenter, phase 3 study. The results reported in this publication are from the prespecified interim analysis.

Intervention: Patients with mHSPC were randomly assigned to receive either darolutamide 600 mg twice daily or placebo. All patients received standard ADT with 6 cycles of docetaxel 75 mg/m2 on day 1 every 21 days along with prednisone given within 6 weeks after randomization. Patients receiving luteinizing hormone–releasing hormone (LHRH) agonists as ADT were bridged with at least 4 weeks of first-generation antiandrogen therapy, which was discontinued before randomization. Treatments were continued until symptomatic disease progression, a change in neoplastic therapy, unacceptable toxicity, patient or physician decision, death, or nonadherence.

Setting and participants: Eligible patients included those newly diagnosed with mHSPC with metastases detected on contrast-enhanced computed tomography (CT) or magnetic resonance imaging (MRI) and bone scan. Patients were excluded if they had regional lymph node–only involvement or if they had received more than 12 weeks of ADT before randomization. Between November 2016 and June 2018, 1306 patients (651 in the darolutamide group and 655 in the placebo group) were randomized in a 1:1 manner to receive darolutamide 600 mg twice daily or placebo in addition to ADT and docetaxel. Randomization was stratified based on the TNM staging system (M1a—nonregional lymph node–only metastasis, M1b—bone metastasis with or without lymph node, or M1c—bone metastases) as well as baseline alkaline phosphatase levels.

Main outcome measures: The primary end point for the study was overall survival. Other meaningful secondary end points included time to castration resistance, time to pain progression, time to first symptomatic skeletal event, symptomatic skeletal event-free survival, time to subsequent systemic antineoplastic therapy, time to worsening of disease-related physical symptoms, initiation of opioid therapy for ≥7 days, and safety.

Results: The baseline and demographic characteristics were well balanced between the 2 groups. Median age was 67 years. Nearly 80% of patients had bone metastasis, and approximately 17% had visceral metastasis. At the data cutoff date for the primary analysis, the median duration of therapy was 41 months for darolutamide compared with 16.7 months in the placebo group; 45.9% in the darolutamide group and 19.1% in the placebo group were receiving the allotted trial therapy at the time of the analysis. Six cycles of docetaxel were completed in approximately 85% of patients in both arms. Median overall survival follow-up was 43.7 months (darolutamide) and 42.4 months (placebo). A significant improvement in overall survival was observed in the darolutamide group. The risk of death was 32.5% lower in the darolutamide cohort than in the placebo cohort (hazard ratio [HR], 0.68; 95% CI, 0.57-0.80; P < .001). The overall survival at 4 years was 62.7% (95% CI, 58.7-66.7) in the darolutamide arm and 50.4% (95% CI, 46.3-54.6) in the placebo arm. The overall survival results remained favorable across most subgroups.

Darolutamide was associated with improvement in all key secondary endpoints. Time to castration-resistance was significantly longer in the darolutamide group (HR, 0.36; 95% CI, 0.30-0.42; P < .001). Time to pain progression was also significantly longer in the darolutamide group (HR, 0.79; 95% CI, 0.66-0.95; P = .01). Time to first symptomatic skeletal events (HR, 0.71; 95% CI, 0.54-0.94; P = .02) and time to initiation of subsequent systemic therapy (HR, 0.39; 95% CI, 0.33-0.46; P < .001) were also found to be longer in the darolutamide group.

 

 

Safety: The risk of grade 3 or higher adverse events was similar across the 2 groups. Most common adverse events were known toxic effects of docetaxel therapy and were highest during the initial period when both groups received this therapy. These side effects progressively decreased after the initial period. The most common grade 3 or 4 adverse event was neutropenia, and its frequency was similar between the darolutamide and placebo groups (33.7% and 34.2%, respectively). The most frequently reported adverse events were alopecia, neutropenia, fatigue, and anemia and were similar between the groups. Adverse events of special significance, including fatigue, falls, fractures, and cardiovascular events, were also similar between the 2 groups. Adverse events causing deaths in each arm were low and similar (4.1% in the darolutamide group and 4.0% in the placebo group). The rates of discontinuation of darolutamide or placebo were similar (13.5% and 10.6%, respectively).

Conclusion: Among patients with mHSPC, overall survival was significantly longer among patients who received darolutamide plus ADT and docetaxel than among those who received ADT and docetaxel alone. This was observed despite a high percentage of patients in the placebo group receiving subsequent systemic therapy at the time of progression. The survival benefit of darolutamide was maintained across most subgroups. An improvement was also observed in the darolutamide arm in terms of key secondary end points. The adverse events were similar across the groups and were consistent with known safety profiles of ADT and docetaxel, and no new safety signals were identified in this trial.

Commentary

The results of the current study add to the body of literature supporting multi-agent systemic therapy in newly diagnosed mHSPC. Prior phase 3 trials of combination therapy using androgen-receptor pathway inhibitors, ADT, and docetaxel have shown conflicting results. The results from the previously reported PEACE-1 study showed improved overall survival among patients who received abiraterone with ADT and docetaxel as compared with those who received ADT and docetaxel alone.1 However, as noted by the authors, the subgroup of patients in the ENZAMET trial who received docetaxel, enzalutamide, and ADT did not appear to have a survival advantage compared with those who received ADT and docetaxel alone.2 The results from the current ARASENS trial provide compelling evidence in a population of prospectively randomized patients that combination therapy with darolutamide, docetaxel, and ADT improves overall survival in men with mHSPC. The survival advantage was maintained across subgroups analyzed in this study. Improvements were observed in regards to several key secondary end points with use of darolutamide. This benefit was maintained despite many patients receiving subsequent therapy at the time of progression. Importantly, there did not appear to be a significant increase in toxicity with triplet therapy. However, it is important to note that this cohort of patients appeared largely asymptomatic at the time of enrollment, with 70% of patients having an Eastern Cooperative Oncology Group performance status of 0.

Additionally, the average age in this study was 67 years, with only about 15% of the population being older than 75 years. In the reported subgroup analysis, those older than 75 years appeared to derive a similar benefit in overall survival, however. Whether triplet therapy should be universally adopted in all patients remains unclear. For example, there is a subset of patients with mHSPC with favorable- risk disease (ie, those with recurrent metastatic disease, node-only disease). In this population, the risk-benefit analysis is less clear, and whether these patients should receive this combination is not certain. Nevertheless, the results of this well-designed study are compelling and certainly represent a potential new standard treatment option for men with mHSPC. One of the strengths of this study was its large sample size that allowed for vigorous statistical analysis to evaluate the efficacy of darolutamide in combination with ADT and docetaxel.

Application for Clinical Practice

The ARASENS study provides convincing evidence that in men with mHSPC, the addition of darolutamide to docetaxel and ADT improves overall survival. This combination appeared to be well tolerated, with no evidence of increased toxicity noted. Certainly, this combination represents a potential new standard treatment option in this population; however, further understanding of which subgroups of men benefit from enhanced therapy is needed to aid in proper patient selection. 

—Santosh Kagathur, MD, and Daniel Isaac, DO, MS
Michigan State University, East Lansing, MI

References

1. Fizazi K, Carles Galceran J, Foulon S, et al. LBA5 A phase III trial with a 2x2 factorial design in men with de novo metastatic castration-sensitive prostate cancer: overall survival with abiraterone acetate plus prednisone in PEACE-1. Ann Oncol. 2021;32:Suppl 5:S1299. doi:10.1016/j.annonc.2021.08.2099

2. Davis ID, Martin AJ, Stockler MR, et al. Enzalutamide with standard first-line therapy in metastatic prostate cancer. N Engl J Med. 2019;381:121-131. doi:10.1056/NEJMoa1903835

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(3)
Publications
Topics
Page Number
108-110
Sections
Article PDF
Article PDF

Study Overview

Objective: To evaluate whether the addition of the potent androgen-receptor inhibitor (ARA) darolutamide to the standard doublet androgen-deprivation therapy (ADT) and docetaxel in metastatic, hormone-sensitive prostate cancer (mHSPC) would increase survival.

Design: A randomized, double-blind, placebo-controlled, multicenter, phase 3 study. The results reported in this publication are from the prespecified interim analysis.

Intervention: Patients with mHSPC were randomly assigned to receive either darolutamide 600 mg twice daily or placebo. All patients received standard ADT with 6 cycles of docetaxel 75 mg/m2 on day 1 every 21 days along with prednisone given within 6 weeks after randomization. Patients receiving luteinizing hormone–releasing hormone (LHRH) agonists as ADT were bridged with at least 4 weeks of first-generation antiandrogen therapy, which was discontinued before randomization. Treatments were continued until symptomatic disease progression, a change in neoplastic therapy, unacceptable toxicity, patient or physician decision, death, or nonadherence.

Setting and participants: Eligible patients included those newly diagnosed with mHSPC with metastases detected on contrast-enhanced computed tomography (CT) or magnetic resonance imaging (MRI) and bone scan. Patients were excluded if they had regional lymph node–only involvement or if they had received more than 12 weeks of ADT before randomization. Between November 2016 and June 2018, 1306 patients (651 in the darolutamide group and 655 in the placebo group) were randomized in a 1:1 manner to receive darolutamide 600 mg twice daily or placebo in addition to ADT and docetaxel. Randomization was stratified based on the TNM staging system (M1a—nonregional lymph node–only metastasis, M1b—bone metastasis with or without lymph node, or M1c—bone metastases) as well as baseline alkaline phosphatase levels.

Main outcome measures: The primary end point for the study was overall survival. Other meaningful secondary end points included time to castration resistance, time to pain progression, time to first symptomatic skeletal event, symptomatic skeletal event-free survival, time to subsequent systemic antineoplastic therapy, time to worsening of disease-related physical symptoms, initiation of opioid therapy for ≥7 days, and safety.

Results: The baseline and demographic characteristics were well balanced between the 2 groups. Median age was 67 years. Nearly 80% of patients had bone metastasis, and approximately 17% had visceral metastasis. At the data cutoff date for the primary analysis, the median duration of therapy was 41 months for darolutamide compared with 16.7 months in the placebo group; 45.9% in the darolutamide group and 19.1% in the placebo group were receiving the allotted trial therapy at the time of the analysis. Six cycles of docetaxel were completed in approximately 85% of patients in both arms. Median overall survival follow-up was 43.7 months (darolutamide) and 42.4 months (placebo). A significant improvement in overall survival was observed in the darolutamide group. The risk of death was 32.5% lower in the darolutamide cohort than in the placebo cohort (hazard ratio [HR], 0.68; 95% CI, 0.57-0.80; P < .001). The overall survival at 4 years was 62.7% (95% CI, 58.7-66.7) in the darolutamide arm and 50.4% (95% CI, 46.3-54.6) in the placebo arm. The overall survival results remained favorable across most subgroups.

Darolutamide was associated with improvement in all key secondary endpoints. Time to castration-resistance was significantly longer in the darolutamide group (HR, 0.36; 95% CI, 0.30-0.42; P < .001). Time to pain progression was also significantly longer in the darolutamide group (HR, 0.79; 95% CI, 0.66-0.95; P = .01). Time to first symptomatic skeletal events (HR, 0.71; 95% CI, 0.54-0.94; P = .02) and time to initiation of subsequent systemic therapy (HR, 0.39; 95% CI, 0.33-0.46; P < .001) were also found to be longer in the darolutamide group.

 

 

Safety: The risk of grade 3 or higher adverse events was similar across the 2 groups. Most common adverse events were known toxic effects of docetaxel therapy and were highest during the initial period when both groups received this therapy. These side effects progressively decreased after the initial period. The most common grade 3 or 4 adverse event was neutropenia, and its frequency was similar between the darolutamide and placebo groups (33.7% and 34.2%, respectively). The most frequently reported adverse events were alopecia, neutropenia, fatigue, and anemia and were similar between the groups. Adverse events of special significance, including fatigue, falls, fractures, and cardiovascular events, were also similar between the 2 groups. Adverse events causing deaths in each arm were low and similar (4.1% in the darolutamide group and 4.0% in the placebo group). The rates of discontinuation of darolutamide or placebo were similar (13.5% and 10.6%, respectively).

Conclusion: Among patients with mHSPC, overall survival was significantly longer among patients who received darolutamide plus ADT and docetaxel than among those who received ADT and docetaxel alone. This was observed despite a high percentage of patients in the placebo group receiving subsequent systemic therapy at the time of progression. The survival benefit of darolutamide was maintained across most subgroups. An improvement was also observed in the darolutamide arm in terms of key secondary end points. The adverse events were similar across the groups and were consistent with known safety profiles of ADT and docetaxel, and no new safety signals were identified in this trial.

Commentary

The results of the current study add to the body of literature supporting multi-agent systemic therapy in newly diagnosed mHSPC. Prior phase 3 trials of combination therapy using androgen-receptor pathway inhibitors, ADT, and docetaxel have shown conflicting results. The results from the previously reported PEACE-1 study showed improved overall survival among patients who received abiraterone with ADT and docetaxel as compared with those who received ADT and docetaxel alone.1 However, as noted by the authors, the subgroup of patients in the ENZAMET trial who received docetaxel, enzalutamide, and ADT did not appear to have a survival advantage compared with those who received ADT and docetaxel alone.2 The results from the current ARASENS trial provide compelling evidence in a population of prospectively randomized patients that combination therapy with darolutamide, docetaxel, and ADT improves overall survival in men with mHSPC. The survival advantage was maintained across subgroups analyzed in this study. Improvements were observed in regards to several key secondary end points with use of darolutamide. This benefit was maintained despite many patients receiving subsequent therapy at the time of progression. Importantly, there did not appear to be a significant increase in toxicity with triplet therapy. However, it is important to note that this cohort of patients appeared largely asymptomatic at the time of enrollment, with 70% of patients having an Eastern Cooperative Oncology Group performance status of 0.

Additionally, the average age in this study was 67 years, with only about 15% of the population being older than 75 years. In the reported subgroup analysis, those older than 75 years appeared to derive a similar benefit in overall survival, however. Whether triplet therapy should be universally adopted in all patients remains unclear. For example, there is a subset of patients with mHSPC with favorable- risk disease (ie, those with recurrent metastatic disease, node-only disease). In this population, the risk-benefit analysis is less clear, and whether these patients should receive this combination is not certain. Nevertheless, the results of this well-designed study are compelling and certainly represent a potential new standard treatment option for men with mHSPC. One of the strengths of this study was its large sample size that allowed for vigorous statistical analysis to evaluate the efficacy of darolutamide in combination with ADT and docetaxel.

Application for Clinical Practice

The ARASENS study provides convincing evidence that in men with mHSPC, the addition of darolutamide to docetaxel and ADT improves overall survival. This combination appeared to be well tolerated, with no evidence of increased toxicity noted. Certainly, this combination represents a potential new standard treatment option in this population; however, further understanding of which subgroups of men benefit from enhanced therapy is needed to aid in proper patient selection. 

—Santosh Kagathur, MD, and Daniel Isaac, DO, MS
Michigan State University, East Lansing, MI

Study Overview

Objective: To evaluate whether the addition of the potent androgen-receptor inhibitor (ARA) darolutamide to the standard doublet androgen-deprivation therapy (ADT) and docetaxel in metastatic, hormone-sensitive prostate cancer (mHSPC) would increase survival.

Design: A randomized, double-blind, placebo-controlled, multicenter, phase 3 study. The results reported in this publication are from the prespecified interim analysis.

Intervention: Patients with mHSPC were randomly assigned to receive either darolutamide 600 mg twice daily or placebo. All patients received standard ADT with 6 cycles of docetaxel 75 mg/m2 on day 1 every 21 days along with prednisone given within 6 weeks after randomization. Patients receiving luteinizing hormone–releasing hormone (LHRH) agonists as ADT were bridged with at least 4 weeks of first-generation antiandrogen therapy, which was discontinued before randomization. Treatments were continued until symptomatic disease progression, a change in neoplastic therapy, unacceptable toxicity, patient or physician decision, death, or nonadherence.

Setting and participants: Eligible patients included those newly diagnosed with mHSPC with metastases detected on contrast-enhanced computed tomography (CT) or magnetic resonance imaging (MRI) and bone scan. Patients were excluded if they had regional lymph node–only involvement or if they had received more than 12 weeks of ADT before randomization. Between November 2016 and June 2018, 1306 patients (651 in the darolutamide group and 655 in the placebo group) were randomized in a 1:1 manner to receive darolutamide 600 mg twice daily or placebo in addition to ADT and docetaxel. Randomization was stratified based on the TNM staging system (M1a—nonregional lymph node–only metastasis, M1b—bone metastasis with or without lymph node, or M1c—bone metastases) as well as baseline alkaline phosphatase levels.

Main outcome measures: The primary end point for the study was overall survival. Other meaningful secondary end points included time to castration resistance, time to pain progression, time to first symptomatic skeletal event, symptomatic skeletal event-free survival, time to subsequent systemic antineoplastic therapy, time to worsening of disease-related physical symptoms, initiation of opioid therapy for ≥7 days, and safety.

Results: The baseline and demographic characteristics were well balanced between the 2 groups. Median age was 67 years. Nearly 80% of patients had bone metastasis, and approximately 17% had visceral metastasis. At the data cutoff date for the primary analysis, the median duration of therapy was 41 months for darolutamide compared with 16.7 months in the placebo group; 45.9% in the darolutamide group and 19.1% in the placebo group were receiving the allotted trial therapy at the time of the analysis. Six cycles of docetaxel were completed in approximately 85% of patients in both arms. Median overall survival follow-up was 43.7 months (darolutamide) and 42.4 months (placebo). A significant improvement in overall survival was observed in the darolutamide group. The risk of death was 32.5% lower in the darolutamide cohort than in the placebo cohort (hazard ratio [HR], 0.68; 95% CI, 0.57-0.80; P < .001). The overall survival at 4 years was 62.7% (95% CI, 58.7-66.7) in the darolutamide arm and 50.4% (95% CI, 46.3-54.6) in the placebo arm. The overall survival results remained favorable across most subgroups.

Darolutamide was associated with improvement in all key secondary endpoints. Time to castration-resistance was significantly longer in the darolutamide group (HR, 0.36; 95% CI, 0.30-0.42; P < .001). Time to pain progression was also significantly longer in the darolutamide group (HR, 0.79; 95% CI, 0.66-0.95; P = .01). Time to first symptomatic skeletal events (HR, 0.71; 95% CI, 0.54-0.94; P = .02) and time to initiation of subsequent systemic therapy (HR, 0.39; 95% CI, 0.33-0.46; P < .001) were also found to be longer in the darolutamide group.

 

 

Safety: The risk of grade 3 or higher adverse events was similar across the 2 groups. Most common adverse events were known toxic effects of docetaxel therapy and were highest during the initial period when both groups received this therapy. These side effects progressively decreased after the initial period. The most common grade 3 or 4 adverse event was neutropenia, and its frequency was similar between the darolutamide and placebo groups (33.7% and 34.2%, respectively). The most frequently reported adverse events were alopecia, neutropenia, fatigue, and anemia and were similar between the groups. Adverse events of special significance, including fatigue, falls, fractures, and cardiovascular events, were also similar between the 2 groups. Adverse events causing deaths in each arm were low and similar (4.1% in the darolutamide group and 4.0% in the placebo group). The rates of discontinuation of darolutamide or placebo were similar (13.5% and 10.6%, respectively).

Conclusion: Among patients with mHSPC, overall survival was significantly longer among patients who received darolutamide plus ADT and docetaxel than among those who received ADT and docetaxel alone. This was observed despite a high percentage of patients in the placebo group receiving subsequent systemic therapy at the time of progression. The survival benefit of darolutamide was maintained across most subgroups. An improvement was also observed in the darolutamide arm in terms of key secondary end points. The adverse events were similar across the groups and were consistent with known safety profiles of ADT and docetaxel, and no new safety signals were identified in this trial.

Commentary

The results of the current study add to the body of literature supporting multi-agent systemic therapy in newly diagnosed mHSPC. Prior phase 3 trials of combination therapy using androgen-receptor pathway inhibitors, ADT, and docetaxel have shown conflicting results. The results from the previously reported PEACE-1 study showed improved overall survival among patients who received abiraterone with ADT and docetaxel as compared with those who received ADT and docetaxel alone.1 However, as noted by the authors, the subgroup of patients in the ENZAMET trial who received docetaxel, enzalutamide, and ADT did not appear to have a survival advantage compared with those who received ADT and docetaxel alone.2 The results from the current ARASENS trial provide compelling evidence in a population of prospectively randomized patients that combination therapy with darolutamide, docetaxel, and ADT improves overall survival in men with mHSPC. The survival advantage was maintained across subgroups analyzed in this study. Improvements were observed in regards to several key secondary end points with use of darolutamide. This benefit was maintained despite many patients receiving subsequent therapy at the time of progression. Importantly, there did not appear to be a significant increase in toxicity with triplet therapy. However, it is important to note that this cohort of patients appeared largely asymptomatic at the time of enrollment, with 70% of patients having an Eastern Cooperative Oncology Group performance status of 0.

Additionally, the average age in this study was 67 years, with only about 15% of the population being older than 75 years. In the reported subgroup analysis, those older than 75 years appeared to derive a similar benefit in overall survival, however. Whether triplet therapy should be universally adopted in all patients remains unclear. For example, there is a subset of patients with mHSPC with favorable- risk disease (ie, those with recurrent metastatic disease, node-only disease). In this population, the risk-benefit analysis is less clear, and whether these patients should receive this combination is not certain. Nevertheless, the results of this well-designed study are compelling and certainly represent a potential new standard treatment option for men with mHSPC. One of the strengths of this study was its large sample size that allowed for vigorous statistical analysis to evaluate the efficacy of darolutamide in combination with ADT and docetaxel.

Application for Clinical Practice

The ARASENS study provides convincing evidence that in men with mHSPC, the addition of darolutamide to docetaxel and ADT improves overall survival. This combination appeared to be well tolerated, with no evidence of increased toxicity noted. Certainly, this combination represents a potential new standard treatment option in this population; however, further understanding of which subgroups of men benefit from enhanced therapy is needed to aid in proper patient selection. 

—Santosh Kagathur, MD, and Daniel Isaac, DO, MS
Michigan State University, East Lansing, MI

References

1. Fizazi K, Carles Galceran J, Foulon S, et al. LBA5 A phase III trial with a 2x2 factorial design in men with de novo metastatic castration-sensitive prostate cancer: overall survival with abiraterone acetate plus prednisone in PEACE-1. Ann Oncol. 2021;32:Suppl 5:S1299. doi:10.1016/j.annonc.2021.08.2099

2. Davis ID, Martin AJ, Stockler MR, et al. Enzalutamide with standard first-line therapy in metastatic prostate cancer. N Engl J Med. 2019;381:121-131. doi:10.1056/NEJMoa1903835

References

1. Fizazi K, Carles Galceran J, Foulon S, et al. LBA5 A phase III trial with a 2x2 factorial design in men with de novo metastatic castration-sensitive prostate cancer: overall survival with abiraterone acetate plus prednisone in PEACE-1. Ann Oncol. 2021;32:Suppl 5:S1299. doi:10.1016/j.annonc.2021.08.2099

2. Davis ID, Martin AJ, Stockler MR, et al. Enzalutamide with standard first-line therapy in metastatic prostate cancer. N Engl J Med. 2019;381:121-131. doi:10.1056/NEJMoa1903835

Issue
Journal of Clinical Outcomes Management - 29(3)
Issue
Journal of Clinical Outcomes Management - 29(3)
Page Number
108-110
Page Number
108-110
Publications
Publications
Topics
Article Type
Display Headline
Overall Survival Gain With Adding Darolutamide to ADT and Docetaxel in Metastatic, Hormone-Sensitive Prostate Cancer
Display Headline
Overall Survival Gain With Adding Darolutamide to ADT and Docetaxel in Metastatic, Hormone-Sensitive Prostate Cancer
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Coronary CT Angiography Compared to Coronary Angiography or Standard of Care in Patients With Intermediate-Risk Stable Chest Pain

Article Type
Changed
Thu, 06/02/2022 - 08:20
Display Headline
Coronary CT Angiography Compared to Coronary Angiography or Standard of Care in Patients With Intermediate-Risk Stable Chest Pain

Study 1 Overview (SCOT-HEART Investigators)

Objective: To assess cardiovascular mortality and nonfatal myocardial infarction at 5 years in patients with stable chest pain referred to cardiology clinic for management with either standard care plus computed tomography angiography (CTA) or standard care alone.

Design: Multicenter, randomized, open-label prospective study.

Setting and participants: A total of 4146 patients with stable chest pain were randomized to standard care or standard care plus CTA at 12 centers across Scotland and were followed for 5 years.

Main outcome measures: The primary end point was a composite of death from coronary heart disease or nonfatal myocardial infarction. Main secondary end points were nonfatal myocardial infarction, nonfatal stroke, and frequency of invasive coronary angiography (ICA) and coronary revascularization with percutaneous coronary intervention or coronary artery bypass grafting.

Main results: The primary outcome including the composite of cardiovascular death or nonfatal myocardial infarction was lower in the CTA group than in the standard-care group at 2.3% (48 of 2073 patients) vs 3.9% (81 of 2073 patients), respectively (hazard ratio, 0.59; 95% CI, 0.41-0.84; P = .004). Although there was a higher rate of ICA and coronary revascularization in the CTA group than in the standard-care group in the first few months of follow-up, the overall rates were similar at 5 years, with ICA performed in 491 patients and 502 patients in the CTA vs standard-care groups, respectively (hazard ratio, 1.00; 95% CI, 0.88-1.13). Similarly, coronary revascularization was performed in 279 patients in the CTA group and in 267 patients in the standard-care group (hazard ratio, 1.07; 95% CI, 0.91-1.27). There were, however, more preventive therapies initiated in patients in the CTA group than in the standard-care group (odds ratio, 1.40; 95% CI, 1.19-1.65).

Conclusion: In patients with stable chest pain, the use of CTA in addition to standard care resulted in a significantly lower rate of death from coronary heart disease or nonfatal myocardial infarction at 5 years; the main contributor to this outcome was a reduced nonfatal myocardial infarction rate. There was no difference in the rate of coronary angiography or coronary revascularization between the 2 groups at 5 years.

 

 

Study 2 Overview (DISCHARGE Trial Group)

Objective: To compare the effectiveness of computed tomography (CT) with ICA as a diagnostic tool in patients with stable chest pain and intermediate pretest probability of coronary artery disease (CAD).

Design: Multicenter, randomized, assessor-blinded pragmatic prospective study.

Setting and participants: A total of 3667 patients with stable chest pain and intermediate pretest probability of CAD were enrolled at 26 centers and randomized into CT or ICA groups. Only 3561 patients were included in the modified intention-to-treat analysis, with 1808 patients and 1753 patients in the CT and ICA groups, respectively.

Main outcome measures: The primary outcome was a composite of cardiovascular death, nonfatal myocardial infarction, and nonfatal stroke over 3.5 years. The main secondary outcomes were major procedure-related complications and patient-reported angina pectoris during the last 4 weeks of follow up.

Main results: The primary outcome occurred in 38 of 1808 patients (2.1%) in the CT group and in 52 of 1753 patients (3.0%) in the ICA group (hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). The secondary outcomes showed that major procedure-related complications occurred in 9 patients (0.5%) in the CT group and in 33 patients (1.9%) in the ICA group (hazard ratio, 0.26; 95% CI, 0.13-0.55). Rates of patient-reported angina in the final 4 weeks of follow-up were 8.8% in the CT group and 7.5% in the ICA group (odds ratio, 1.17; 95% CI, 0.92-1.48).

Conclusion: Risk of major adverse cardiovascular events from the primary outcome were similar in both the CT and ICA groups among patients with stable chest pain and intermediate pretest probability of CAD. Patients referred for CT had a lower rate of coronary angiography leading to fewer major procedure-related complications in these patients than in those referred for ICA.

 

 

Commentary

Evaluation and treatment of obstructive atherosclerosis is an important part of clinical care in patients presenting with angina symptoms.1 Thus, the initial investigation for patients with suspected obstructive CAD includes ruling out acute coronary syndrome and assessing quality of life.1 The diagnostic test should be tailored to the pretest probability for the diagnosis of obstructive CAD.2

In the United States, stress testing traditionally has been used for the initial assessment in patients with suspected CAD,3 but recently CTA has been utilized more frequently for this purpose. Compared to a stress test, which often helps identify and assess ischemia, CTA can provide anatomical assessment, with higher sensitivity to identify CAD.4 Furthermore, it can distinguish nonobstructive plaques that can be challenging to identify with stress test alone.

Whether CTA is superior to stress testing as the initial assessment for CAD has been debated. The randomized PROMISE trial compared patients with stable angina who underwent functional stress testing or CTA as an initial strategy.5 They reported a similar outcome between the 2 groups at a median follow-up of 2 years. However, in the original SCOT-HEART trial (CT coronary angiography in patients with suspected angina due to coronary heart disease), which was published in the same year as the PROMISE trial, the patients who underwent initial assessment with CTA had a numerically lower composite end point of cardiac death and myocardial infarction at a median follow-up of 1.7 years (1.3% vs 2.0%, P = .053).6

Given this result, the SCOT-HEART investigators extended the follow-up to evaluate the composite end point of death from coronary heart disease or nonfatal myocardial infarction at 5 years.7 This trial enrolled patients who were initially referred to a cardiology clinic for evaluation of chest pain, and they were randomized to standard care plus CTA or standard care alone. At a median duration of 4.8 years, the primary outcome was lower in the CTA group (2.3%, 48 patients) than in the standard-care group (3.9%, 81 patients) (hazard ratio, 0.58; 95% CI, 0.41-0.84; P = .004). Both groups had similar rates of invasive coronary angiography and had similar coronary revascularization rates.

It is hypothesized that this lower rate of nonfatal myocardial infarction in patients with CTA plus standard care is associated with a higher rate of preventive therapies initiated in patients in the CTA-plus-standard-care group compared to standard care alone. However, the difference in the standard-care group should be noted when compared to the PROMISE trial. In the PROMISE trial, the comparator group had predominantly stress imaging (either nuclear stress test or echocardiography), while in the SCOT-HEART trial, the group had predominantly stress electrocardiogram (ECG), and only 10% of the patients underwent stress imaging. It is possible the difference seen in the rate of nonfatal myocardial infarction was due to suboptimal diagnosis of CAD with stress ECG, which has lower sensitivity compared to stress imaging.

The DISCHARGE trial investigated the effectiveness of CTA vs ICA as the initial diagnostic test in the management of patients with stable chest pain and an intermediate pretest probability of obstructive CAD.8 At 3.5 years of follow-up, the primary composite of cardiovascular death, myocardial infarction, or stroke was similar in both groups (2.1% vs 3.0; hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). Importantly, as fewer patients underwent ICA, the risk of procedure-related complication was lower in the CTA group than in the ICA group. However, it is important to note that only 25% of the patients diagnosed with obstructive CAD had greater than 50% vessel stenosis, which raises the question of whether an initial invasive strategy is appropriate for this population.

The strengths of these 2 studies include the large number of patients enrolled along with adequate follow-up, 5 years in the SCOT-HEART trial and 3.5 years in the DISCHARGE trial. The 2 studies overall suggest the usefulness of CTA for assessment of CAD. However, the control groups were very different in these 2 trials. In the SCOT-HEART study, the comparator group was primarily assessed by stress ECG, while in the DISCHARGE study, the comparator group was primary assessed by ICA. In the PROMISE trial, the composite end point of death, myocardial infarction, hospitalization for unstable angina, or major procedural complication was similar when the strategy of initial CTA was compared to functional testing with imaging (exercise ECG, nuclear stress testing, or echocardiography).5 Thus, clinical assessment is still needed when clinicians are selecting the appropriate diagnostic test for patients with suspected CAD. The most recent guidelines give similar recommendations for CTA compared to stress imaging.9 Whether further improvement in CTA acquisition or the addition of CT fractional flow reserve can further improve outcomes requires additional study.

Applications for Clinical Practice and System Implementation

In patients with stable chest pain and intermediate pretest probability of CAD, CTA is useful in diagnosis compared to stress ECG and in reducing utilization of low-yield ICA. Whether CTA is more useful compared to the other noninvasive stress imaging modalities in this population requires further study.

Practice Points

  • In patients with stable chest pain and intermediate pretest probability of CAD, CTA is useful compared to stress ECG.
  • Use of CTA can potentially reduce the use of low-yield coronary angiography.

–Thai Nguyen, MD, Albert Chan, MD, Taishi Hirai, MD
University of Missouri, Columbia, MO

References

1. Knuuti J, Wijns W, Saraste A, et al. 2019 ESC Guidelines for the diagnosis and management of chronic coronary syndromes. Eur Heart J. 2020;41(3):407-477. doi:10.1093/eurheartj/ehz425

2. Nakano S, Kohsaka S, Chikamori T et al. JCS 2022 guideline focused update on diagnosis and treatment in patients with stable coronary artery disease. Circ J. 2022;86(5):882-915. doi:10.1253/circj.CJ-21-1041.

3. Fihn SD, Gardin JM, Abrams J, et al. 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS Guideline for the diagnosis and management of patients with stable ischemic heart disease: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines, and the American College of Physicians, American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. J Am Coll Cardiol. 2012;60(24):e44-e164. doi:10.1016/j.jacc.2012.07.013

4. Arbab-Zadeh A, Di Carli MF, Cerci R, et al. Accuracy of computed tomographic angiography and single-photon emission computed tomography-acquired myocardial perfusion imaging for the diagnosis of coronary artery disease. Circ Cardiovasc Imaging. 2015;8(10):e003533. doi:10.1161/CIRCIMAGING

5. Douglas PS, Hoffmann U, Patel MR, et al. Outcomes of anatomical versus functional testing for coronary artery disease. N Engl J Med. 2015;372(14):1291-300. doi:10.1056/NEJMoa1415516

6. SCOT-HEART investigators. CT coronary angiography in patients with suspected angina due to coronary heart disease (SCOT-HEART): an open-label, parallel-group, multicentre trial. Lancet. 2015;385:2383-2391. doi:10.1016/S0140-6736(15)60291-4

7. SCOT-HEART Investigators, Newby DE, Adamson PD, et al. Coronary CT angiography and 5-year risk of myocardial infarction. N Engl J Med. 2018;379(10):924-933. doi:10.1056/NEJMoa1805971

8. DISCHARGE Trial Group, Maurovich-Horvat P, Bosserdt M, et al. CT or invasive coronary angiography in stable chest pain. N Engl J Med. 2022;386(17):1591-1602. doi:10.1056/NEJMoa2200963

9. Writing Committee Members, Lawton JS, Tamis-Holland JE, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(3)
Publications
Topics
Page Number
105-108
Sections
Article PDF
Article PDF

Study 1 Overview (SCOT-HEART Investigators)

Objective: To assess cardiovascular mortality and nonfatal myocardial infarction at 5 years in patients with stable chest pain referred to cardiology clinic for management with either standard care plus computed tomography angiography (CTA) or standard care alone.

Design: Multicenter, randomized, open-label prospective study.

Setting and participants: A total of 4146 patients with stable chest pain were randomized to standard care or standard care plus CTA at 12 centers across Scotland and were followed for 5 years.

Main outcome measures: The primary end point was a composite of death from coronary heart disease or nonfatal myocardial infarction. Main secondary end points were nonfatal myocardial infarction, nonfatal stroke, and frequency of invasive coronary angiography (ICA) and coronary revascularization with percutaneous coronary intervention or coronary artery bypass grafting.

Main results: The primary outcome including the composite of cardiovascular death or nonfatal myocardial infarction was lower in the CTA group than in the standard-care group at 2.3% (48 of 2073 patients) vs 3.9% (81 of 2073 patients), respectively (hazard ratio, 0.59; 95% CI, 0.41-0.84; P = .004). Although there was a higher rate of ICA and coronary revascularization in the CTA group than in the standard-care group in the first few months of follow-up, the overall rates were similar at 5 years, with ICA performed in 491 patients and 502 patients in the CTA vs standard-care groups, respectively (hazard ratio, 1.00; 95% CI, 0.88-1.13). Similarly, coronary revascularization was performed in 279 patients in the CTA group and in 267 patients in the standard-care group (hazard ratio, 1.07; 95% CI, 0.91-1.27). There were, however, more preventive therapies initiated in patients in the CTA group than in the standard-care group (odds ratio, 1.40; 95% CI, 1.19-1.65).

Conclusion: In patients with stable chest pain, the use of CTA in addition to standard care resulted in a significantly lower rate of death from coronary heart disease or nonfatal myocardial infarction at 5 years; the main contributor to this outcome was a reduced nonfatal myocardial infarction rate. There was no difference in the rate of coronary angiography or coronary revascularization between the 2 groups at 5 years.

 

 

Study 2 Overview (DISCHARGE Trial Group)

Objective: To compare the effectiveness of computed tomography (CT) with ICA as a diagnostic tool in patients with stable chest pain and intermediate pretest probability of coronary artery disease (CAD).

Design: Multicenter, randomized, assessor-blinded pragmatic prospective study.

Setting and participants: A total of 3667 patients with stable chest pain and intermediate pretest probability of CAD were enrolled at 26 centers and randomized into CT or ICA groups. Only 3561 patients were included in the modified intention-to-treat analysis, with 1808 patients and 1753 patients in the CT and ICA groups, respectively.

Main outcome measures: The primary outcome was a composite of cardiovascular death, nonfatal myocardial infarction, and nonfatal stroke over 3.5 years. The main secondary outcomes were major procedure-related complications and patient-reported angina pectoris during the last 4 weeks of follow up.

Main results: The primary outcome occurred in 38 of 1808 patients (2.1%) in the CT group and in 52 of 1753 patients (3.0%) in the ICA group (hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). The secondary outcomes showed that major procedure-related complications occurred in 9 patients (0.5%) in the CT group and in 33 patients (1.9%) in the ICA group (hazard ratio, 0.26; 95% CI, 0.13-0.55). Rates of patient-reported angina in the final 4 weeks of follow-up were 8.8% in the CT group and 7.5% in the ICA group (odds ratio, 1.17; 95% CI, 0.92-1.48).

Conclusion: Risk of major adverse cardiovascular events from the primary outcome were similar in both the CT and ICA groups among patients with stable chest pain and intermediate pretest probability of CAD. Patients referred for CT had a lower rate of coronary angiography leading to fewer major procedure-related complications in these patients than in those referred for ICA.

 

 

Commentary

Evaluation and treatment of obstructive atherosclerosis is an important part of clinical care in patients presenting with angina symptoms.1 Thus, the initial investigation for patients with suspected obstructive CAD includes ruling out acute coronary syndrome and assessing quality of life.1 The diagnostic test should be tailored to the pretest probability for the diagnosis of obstructive CAD.2

In the United States, stress testing traditionally has been used for the initial assessment in patients with suspected CAD,3 but recently CTA has been utilized more frequently for this purpose. Compared to a stress test, which often helps identify and assess ischemia, CTA can provide anatomical assessment, with higher sensitivity to identify CAD.4 Furthermore, it can distinguish nonobstructive plaques that can be challenging to identify with stress test alone.

Whether CTA is superior to stress testing as the initial assessment for CAD has been debated. The randomized PROMISE trial compared patients with stable angina who underwent functional stress testing or CTA as an initial strategy.5 They reported a similar outcome between the 2 groups at a median follow-up of 2 years. However, in the original SCOT-HEART trial (CT coronary angiography in patients with suspected angina due to coronary heart disease), which was published in the same year as the PROMISE trial, the patients who underwent initial assessment with CTA had a numerically lower composite end point of cardiac death and myocardial infarction at a median follow-up of 1.7 years (1.3% vs 2.0%, P = .053).6

Given this result, the SCOT-HEART investigators extended the follow-up to evaluate the composite end point of death from coronary heart disease or nonfatal myocardial infarction at 5 years.7 This trial enrolled patients who were initially referred to a cardiology clinic for evaluation of chest pain, and they were randomized to standard care plus CTA or standard care alone. At a median duration of 4.8 years, the primary outcome was lower in the CTA group (2.3%, 48 patients) than in the standard-care group (3.9%, 81 patients) (hazard ratio, 0.58; 95% CI, 0.41-0.84; P = .004). Both groups had similar rates of invasive coronary angiography and had similar coronary revascularization rates.

It is hypothesized that this lower rate of nonfatal myocardial infarction in patients with CTA plus standard care is associated with a higher rate of preventive therapies initiated in patients in the CTA-plus-standard-care group compared to standard care alone. However, the difference in the standard-care group should be noted when compared to the PROMISE trial. In the PROMISE trial, the comparator group had predominantly stress imaging (either nuclear stress test or echocardiography), while in the SCOT-HEART trial, the group had predominantly stress electrocardiogram (ECG), and only 10% of the patients underwent stress imaging. It is possible the difference seen in the rate of nonfatal myocardial infarction was due to suboptimal diagnosis of CAD with stress ECG, which has lower sensitivity compared to stress imaging.

The DISCHARGE trial investigated the effectiveness of CTA vs ICA as the initial diagnostic test in the management of patients with stable chest pain and an intermediate pretest probability of obstructive CAD.8 At 3.5 years of follow-up, the primary composite of cardiovascular death, myocardial infarction, or stroke was similar in both groups (2.1% vs 3.0; hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). Importantly, as fewer patients underwent ICA, the risk of procedure-related complication was lower in the CTA group than in the ICA group. However, it is important to note that only 25% of the patients diagnosed with obstructive CAD had greater than 50% vessel stenosis, which raises the question of whether an initial invasive strategy is appropriate for this population.

The strengths of these 2 studies include the large number of patients enrolled along with adequate follow-up, 5 years in the SCOT-HEART trial and 3.5 years in the DISCHARGE trial. The 2 studies overall suggest the usefulness of CTA for assessment of CAD. However, the control groups were very different in these 2 trials. In the SCOT-HEART study, the comparator group was primarily assessed by stress ECG, while in the DISCHARGE study, the comparator group was primary assessed by ICA. In the PROMISE trial, the composite end point of death, myocardial infarction, hospitalization for unstable angina, or major procedural complication was similar when the strategy of initial CTA was compared to functional testing with imaging (exercise ECG, nuclear stress testing, or echocardiography).5 Thus, clinical assessment is still needed when clinicians are selecting the appropriate diagnostic test for patients with suspected CAD. The most recent guidelines give similar recommendations for CTA compared to stress imaging.9 Whether further improvement in CTA acquisition or the addition of CT fractional flow reserve can further improve outcomes requires additional study.

Applications for Clinical Practice and System Implementation

In patients with stable chest pain and intermediate pretest probability of CAD, CTA is useful in diagnosis compared to stress ECG and in reducing utilization of low-yield ICA. Whether CTA is more useful compared to the other noninvasive stress imaging modalities in this population requires further study.

Practice Points

  • In patients with stable chest pain and intermediate pretest probability of CAD, CTA is useful compared to stress ECG.
  • Use of CTA can potentially reduce the use of low-yield coronary angiography.

–Thai Nguyen, MD, Albert Chan, MD, Taishi Hirai, MD
University of Missouri, Columbia, MO

Study 1 Overview (SCOT-HEART Investigators)

Objective: To assess cardiovascular mortality and nonfatal myocardial infarction at 5 years in patients with stable chest pain referred to cardiology clinic for management with either standard care plus computed tomography angiography (CTA) or standard care alone.

Design: Multicenter, randomized, open-label prospective study.

Setting and participants: A total of 4146 patients with stable chest pain were randomized to standard care or standard care plus CTA at 12 centers across Scotland and were followed for 5 years.

Main outcome measures: The primary end point was a composite of death from coronary heart disease or nonfatal myocardial infarction. Main secondary end points were nonfatal myocardial infarction, nonfatal stroke, and frequency of invasive coronary angiography (ICA) and coronary revascularization with percutaneous coronary intervention or coronary artery bypass grafting.

Main results: The primary outcome including the composite of cardiovascular death or nonfatal myocardial infarction was lower in the CTA group than in the standard-care group at 2.3% (48 of 2073 patients) vs 3.9% (81 of 2073 patients), respectively (hazard ratio, 0.59; 95% CI, 0.41-0.84; P = .004). Although there was a higher rate of ICA and coronary revascularization in the CTA group than in the standard-care group in the first few months of follow-up, the overall rates were similar at 5 years, with ICA performed in 491 patients and 502 patients in the CTA vs standard-care groups, respectively (hazard ratio, 1.00; 95% CI, 0.88-1.13). Similarly, coronary revascularization was performed in 279 patients in the CTA group and in 267 patients in the standard-care group (hazard ratio, 1.07; 95% CI, 0.91-1.27). There were, however, more preventive therapies initiated in patients in the CTA group than in the standard-care group (odds ratio, 1.40; 95% CI, 1.19-1.65).

Conclusion: In patients with stable chest pain, the use of CTA in addition to standard care resulted in a significantly lower rate of death from coronary heart disease or nonfatal myocardial infarction at 5 years; the main contributor to this outcome was a reduced nonfatal myocardial infarction rate. There was no difference in the rate of coronary angiography or coronary revascularization between the 2 groups at 5 years.

 

 

Study 2 Overview (DISCHARGE Trial Group)

Objective: To compare the effectiveness of computed tomography (CT) with ICA as a diagnostic tool in patients with stable chest pain and intermediate pretest probability of coronary artery disease (CAD).

Design: Multicenter, randomized, assessor-blinded pragmatic prospective study.

Setting and participants: A total of 3667 patients with stable chest pain and intermediate pretest probability of CAD were enrolled at 26 centers and randomized into CT or ICA groups. Only 3561 patients were included in the modified intention-to-treat analysis, with 1808 patients and 1753 patients in the CT and ICA groups, respectively.

Main outcome measures: The primary outcome was a composite of cardiovascular death, nonfatal myocardial infarction, and nonfatal stroke over 3.5 years. The main secondary outcomes were major procedure-related complications and patient-reported angina pectoris during the last 4 weeks of follow up.

Main results: The primary outcome occurred in 38 of 1808 patients (2.1%) in the CT group and in 52 of 1753 patients (3.0%) in the ICA group (hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). The secondary outcomes showed that major procedure-related complications occurred in 9 patients (0.5%) in the CT group and in 33 patients (1.9%) in the ICA group (hazard ratio, 0.26; 95% CI, 0.13-0.55). Rates of patient-reported angina in the final 4 weeks of follow-up were 8.8% in the CT group and 7.5% in the ICA group (odds ratio, 1.17; 95% CI, 0.92-1.48).

Conclusion: Risk of major adverse cardiovascular events from the primary outcome were similar in both the CT and ICA groups among patients with stable chest pain and intermediate pretest probability of CAD. Patients referred for CT had a lower rate of coronary angiography leading to fewer major procedure-related complications in these patients than in those referred for ICA.

 

 

Commentary

Evaluation and treatment of obstructive atherosclerosis is an important part of clinical care in patients presenting with angina symptoms.1 Thus, the initial investigation for patients with suspected obstructive CAD includes ruling out acute coronary syndrome and assessing quality of life.1 The diagnostic test should be tailored to the pretest probability for the diagnosis of obstructive CAD.2

In the United States, stress testing traditionally has been used for the initial assessment in patients with suspected CAD,3 but recently CTA has been utilized more frequently for this purpose. Compared to a stress test, which often helps identify and assess ischemia, CTA can provide anatomical assessment, with higher sensitivity to identify CAD.4 Furthermore, it can distinguish nonobstructive plaques that can be challenging to identify with stress test alone.

Whether CTA is superior to stress testing as the initial assessment for CAD has been debated. The randomized PROMISE trial compared patients with stable angina who underwent functional stress testing or CTA as an initial strategy.5 They reported a similar outcome between the 2 groups at a median follow-up of 2 years. However, in the original SCOT-HEART trial (CT coronary angiography in patients with suspected angina due to coronary heart disease), which was published in the same year as the PROMISE trial, the patients who underwent initial assessment with CTA had a numerically lower composite end point of cardiac death and myocardial infarction at a median follow-up of 1.7 years (1.3% vs 2.0%, P = .053).6

Given this result, the SCOT-HEART investigators extended the follow-up to evaluate the composite end point of death from coronary heart disease or nonfatal myocardial infarction at 5 years.7 This trial enrolled patients who were initially referred to a cardiology clinic for evaluation of chest pain, and they were randomized to standard care plus CTA or standard care alone. At a median duration of 4.8 years, the primary outcome was lower in the CTA group (2.3%, 48 patients) than in the standard-care group (3.9%, 81 patients) (hazard ratio, 0.58; 95% CI, 0.41-0.84; P = .004). Both groups had similar rates of invasive coronary angiography and had similar coronary revascularization rates.

It is hypothesized that this lower rate of nonfatal myocardial infarction in patients with CTA plus standard care is associated with a higher rate of preventive therapies initiated in patients in the CTA-plus-standard-care group compared to standard care alone. However, the difference in the standard-care group should be noted when compared to the PROMISE trial. In the PROMISE trial, the comparator group had predominantly stress imaging (either nuclear stress test or echocardiography), while in the SCOT-HEART trial, the group had predominantly stress electrocardiogram (ECG), and only 10% of the patients underwent stress imaging. It is possible the difference seen in the rate of nonfatal myocardial infarction was due to suboptimal diagnosis of CAD with stress ECG, which has lower sensitivity compared to stress imaging.

The DISCHARGE trial investigated the effectiveness of CTA vs ICA as the initial diagnostic test in the management of patients with stable chest pain and an intermediate pretest probability of obstructive CAD.8 At 3.5 years of follow-up, the primary composite of cardiovascular death, myocardial infarction, or stroke was similar in both groups (2.1% vs 3.0; hazard ratio, 0.70; 95% CI, 0.46-1.07; P = .10). Importantly, as fewer patients underwent ICA, the risk of procedure-related complication was lower in the CTA group than in the ICA group. However, it is important to note that only 25% of the patients diagnosed with obstructive CAD had greater than 50% vessel stenosis, which raises the question of whether an initial invasive strategy is appropriate for this population.

The strengths of these 2 studies include the large number of patients enrolled along with adequate follow-up, 5 years in the SCOT-HEART trial and 3.5 years in the DISCHARGE trial. The 2 studies overall suggest the usefulness of CTA for assessment of CAD. However, the control groups were very different in these 2 trials. In the SCOT-HEART study, the comparator group was primarily assessed by stress ECG, while in the DISCHARGE study, the comparator group was primary assessed by ICA. In the PROMISE trial, the composite end point of death, myocardial infarction, hospitalization for unstable angina, or major procedural complication was similar when the strategy of initial CTA was compared to functional testing with imaging (exercise ECG, nuclear stress testing, or echocardiography).5 Thus, clinical assessment is still needed when clinicians are selecting the appropriate diagnostic test for patients with suspected CAD. The most recent guidelines give similar recommendations for CTA compared to stress imaging.9 Whether further improvement in CTA acquisition or the addition of CT fractional flow reserve can further improve outcomes requires additional study.

Applications for Clinical Practice and System Implementation

In patients with stable chest pain and intermediate pretest probability of CAD, CTA is useful in diagnosis compared to stress ECG and in reducing utilization of low-yield ICA. Whether CTA is more useful compared to the other noninvasive stress imaging modalities in this population requires further study.

Practice Points

  • In patients with stable chest pain and intermediate pretest probability of CAD, CTA is useful compared to stress ECG.
  • Use of CTA can potentially reduce the use of low-yield coronary angiography.

–Thai Nguyen, MD, Albert Chan, MD, Taishi Hirai, MD
University of Missouri, Columbia, MO

References

1. Knuuti J, Wijns W, Saraste A, et al. 2019 ESC Guidelines for the diagnosis and management of chronic coronary syndromes. Eur Heart J. 2020;41(3):407-477. doi:10.1093/eurheartj/ehz425

2. Nakano S, Kohsaka S, Chikamori T et al. JCS 2022 guideline focused update on diagnosis and treatment in patients with stable coronary artery disease. Circ J. 2022;86(5):882-915. doi:10.1253/circj.CJ-21-1041.

3. Fihn SD, Gardin JM, Abrams J, et al. 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS Guideline for the diagnosis and management of patients with stable ischemic heart disease: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines, and the American College of Physicians, American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. J Am Coll Cardiol. 2012;60(24):e44-e164. doi:10.1016/j.jacc.2012.07.013

4. Arbab-Zadeh A, Di Carli MF, Cerci R, et al. Accuracy of computed tomographic angiography and single-photon emission computed tomography-acquired myocardial perfusion imaging for the diagnosis of coronary artery disease. Circ Cardiovasc Imaging. 2015;8(10):e003533. doi:10.1161/CIRCIMAGING

5. Douglas PS, Hoffmann U, Patel MR, et al. Outcomes of anatomical versus functional testing for coronary artery disease. N Engl J Med. 2015;372(14):1291-300. doi:10.1056/NEJMoa1415516

6. SCOT-HEART investigators. CT coronary angiography in patients with suspected angina due to coronary heart disease (SCOT-HEART): an open-label, parallel-group, multicentre trial. Lancet. 2015;385:2383-2391. doi:10.1016/S0140-6736(15)60291-4

7. SCOT-HEART Investigators, Newby DE, Adamson PD, et al. Coronary CT angiography and 5-year risk of myocardial infarction. N Engl J Med. 2018;379(10):924-933. doi:10.1056/NEJMoa1805971

8. DISCHARGE Trial Group, Maurovich-Horvat P, Bosserdt M, et al. CT or invasive coronary angiography in stable chest pain. N Engl J Med. 2022;386(17):1591-1602. doi:10.1056/NEJMoa2200963

9. Writing Committee Members, Lawton JS, Tamis-Holland JE, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

References

1. Knuuti J, Wijns W, Saraste A, et al. 2019 ESC Guidelines for the diagnosis and management of chronic coronary syndromes. Eur Heart J. 2020;41(3):407-477. doi:10.1093/eurheartj/ehz425

2. Nakano S, Kohsaka S, Chikamori T et al. JCS 2022 guideline focused update on diagnosis and treatment in patients with stable coronary artery disease. Circ J. 2022;86(5):882-915. doi:10.1253/circj.CJ-21-1041.

3. Fihn SD, Gardin JM, Abrams J, et al. 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS Guideline for the diagnosis and management of patients with stable ischemic heart disease: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines, and the American College of Physicians, American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. J Am Coll Cardiol. 2012;60(24):e44-e164. doi:10.1016/j.jacc.2012.07.013

4. Arbab-Zadeh A, Di Carli MF, Cerci R, et al. Accuracy of computed tomographic angiography and single-photon emission computed tomography-acquired myocardial perfusion imaging for the diagnosis of coronary artery disease. Circ Cardiovasc Imaging. 2015;8(10):e003533. doi:10.1161/CIRCIMAGING

5. Douglas PS, Hoffmann U, Patel MR, et al. Outcomes of anatomical versus functional testing for coronary artery disease. N Engl J Med. 2015;372(14):1291-300. doi:10.1056/NEJMoa1415516

6. SCOT-HEART investigators. CT coronary angiography in patients with suspected angina due to coronary heart disease (SCOT-HEART): an open-label, parallel-group, multicentre trial. Lancet. 2015;385:2383-2391. doi:10.1016/S0140-6736(15)60291-4

7. SCOT-HEART Investigators, Newby DE, Adamson PD, et al. Coronary CT angiography and 5-year risk of myocardial infarction. N Engl J Med. 2018;379(10):924-933. doi:10.1056/NEJMoa1805971

8. DISCHARGE Trial Group, Maurovich-Horvat P, Bosserdt M, et al. CT or invasive coronary angiography in stable chest pain. N Engl J Med. 2022;386(17):1591-1602. doi:10.1056/NEJMoa2200963

9. Writing Committee Members, Lawton JS, Tamis-Holland JE, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

Issue
Journal of Clinical Outcomes Management - 29(3)
Issue
Journal of Clinical Outcomes Management - 29(3)
Page Number
105-108
Page Number
105-108
Publications
Publications
Topics
Article Type
Display Headline
Coronary CT Angiography Compared to Coronary Angiography or Standard of Care in Patients With Intermediate-Risk Stable Chest Pain
Display Headline
Coronary CT Angiography Compared to Coronary Angiography or Standard of Care in Patients With Intermediate-Risk Stable Chest Pain
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Fall Injury Among Community-Dwelling Older Adults: Effect of a Multifactorial Intervention and a Home Hazard Removal Program

Article Type
Changed
Thu, 06/02/2022 - 08:20
Display Headline
Fall Injury Among Community-Dwelling Older Adults: Effect of a Multifactorial Intervention and a Home Hazard Removal Program

Study 1 Overview (Bhasin et al)

Objective: To examine the effect of a multifactorial intervention for fall prevention on fall injury in community-dwelling older adults.

Design: This was a pragmatic, cluster randomized trial conducted in 86 primary care practices across 10 health care systems.

Setting and participants: The primary care sites were selected based on the prespecified criteria of size, ability to implement the intervention, proximity to other practices, accessibility to electronic health records, and access to community-based exercise programs. The primary care practices were randomly assigned to intervention or control.

Eligibility criteria for participants at those practices included age 70 years or older, dwelling in the community, and having an increased risk of falls, as determined by a history of fall-related injury in the past year, 2 or more falls in the past year, or being afraid of falling because of problems with balance or walking. Exclusion criteria were inability to provide consent or lack of proxy consent for participants who were determined to have cognitive impairment based on screening, and inability to speak English or Spanish. A total of 2802 participants were enrolled in the intervention group, and 2649 participants were enrolled in the control group.

Intervention: The intervention contained 5 components: a standardized assessment of 7 modifiable risk factors for fall injuries; standardized protocol-driven recommendations for management of risk factors; an individualized care plan focused on 1 to 3 risk factors; implementation of care plans, including referrals to community-based programs; and follow-up care conducted by telephone or in person. The modifiable risk factors included impairment of strength, gait, or balance; use of medications related to falls; postural hypotension; problems with feet or footwear; visual impairment; osteoporosis or vitamin D deficiency; and home safety hazards. The intervention was delivered by nurses who had completed online training modules and face-to-face training sessions focused on the intervention and motivational interviewing along with continuing education, in partnership with participants and their primary care providers. In the control group, participants received enhanced usual care, including an informational pamphlet, and were encouraged to discuss fall prevention with their primary care provider, including the results of their screening evaluation.

Main outcome measures: The primary outcome of the study was the first serious fall injury in a time-to-event analysis, defined as a fall resulting in a fracture (other than thoracic or lumbar vertebral fracture), joint dislocation, cut requiring closure, head injury requiring hospitalization, sprain or strain, bruising or swelling, or other serious injury. The secondary outcome was first patient-reported fall injury, also in a time-to-event analysis, ascertained by telephone interviews conducted every 4 months. Other outcomes included hospital admissions, emergency department visits, and other health care utilization. Adjudication of fall events and injuries was conducted by a team blinded to treatment assignment and verified using administrative claims data, encounter data, or electronic health record review.

Main results: The intervention and control groups were similar in terms of sex and age: 62.5% vs 61.5% of participants were women, and mean (SD) age was 79.9 (5.7) years and 79.5 (5.8) years, respectively. Other demographic characteristics were similar between groups. For the primary outcome, the rate of first serious injury was 4.9 per 100 person-years in the intervention group and 5.3 per 100 person-years in the control group, with a hazard ratio of 0.92 (95% CI, 0.80-1.06; P = .25). For the secondary outcome of patient-reported fall injury, there were 25.6 events per 100 person-years in the intervention group and 28.6 in the control group, with a hazard ratio of 0.90 (95% CI, 0.83-0.99; P =0.004). Rates of hospitalization and other secondary outcomes were similar between groups.

Conclusion: The multifactorial STRIDE intervention did not reduce the rate of serious fall injury when compared to enhanced usual care. The intervention did result in lower rates of fall injury by patient report, but no other significant outcomes were seen.

 

 

Study 2 Overview (Stark et al)

Objective: To examine the effect of a behavioral home hazard removal intervention for fall prevention on risk of fall in community-dwelling older adults.

Design: This randomized clinical trial was conducted at a single site in St. Louis, Missouri. Participants were community-dwelling older adults who received services from the Area Agency on Aging (AAA). Inclusion criteria included age 65 years and older, having 1 or more falls in the previous 12 months or being worried about falling by self report, and currently receiving services from an AAA. Exclusion criteria included living in an institution or being severely cognitively impaired and unable to follow directions or report falls. Participants who met the criteria were contacted by phone and invited to participate. A total of 310 participants were enrolled in the study, with an equal number of participants assigned to the intervention and control groups.

Intervention: The intervention included hazard identification and removal after a comprehensive assessment of participants, their behaviors, and the environment; this assessment took place during the first visit, which lasted approximately 80 minutes. A home hazard removal plan was developed, and in the second session, which lasted approximately 40 minutes, remediation of hazards was carried out. A third session for home modification that lasted approximately 30 minutes was conducted, if needed. At 6 months after the intervention, a booster session to identify and remediate any new home hazards and address issues was conducted. Specific interventions, as identified by the assessment, included minor home repair such as grab bars, adaptive equipment, task modification, and education. Shared decision making that enabled older adults to control changes in their homes, self-management strategies to improve awareness, and motivational enhancement strategies to improve acceptance were employed. Scripted algorithms and checklists were used to deliver the intervention. For usual care, an annual assessment and referrals to community services, if needed, were conducted in the AAA.

Main outcome measures: The primary outcome of the study was the number of days to first fall in 12 months. Falls were defined as unintentional movements to the floor, ground, or object below knee level, and falls were recorded through a daily journal for 12 months. Participants were contacted by phone if they did not return the journal or reported a fall. Participants were interviewed to verify falls and determine whether a fall was injurious. Secondary outcomes included rate of falls per person per 12 months; daily activity performance measured using the Older Americans Resources and Services Activities of Daily Living scale; falls self-efficacy, which measures confidence performing daily activities without falling; and quality of life using the SF-36 at 12 months.

Main results: Most of the study participants were women (74%), and mean (SD) age was 75 (7.4) years. Study retention was similar between the intervention and control groups, with 82% completing the study in the intervention group compared with 81% in the control group. Fidelity to the intervention, as measured by a checklist by the interventionist, was 99%, and adherence to home modification, as measured by number of home modifications in use by self report, was high at 92% at 6 months and 91% at 12 months. For the primary outcome, fall hazard was not different between the intervention and control groups (hazard ratio, 0.9; 95% CI, 0.66-1.27). For the secondary outcomes, the rate of falling was lower in the intervention group compared with the control group, with a relative risk of 0.62 (95% CI, 0.40-0.95). There was no difference in other secondary outcomes of daily activity performance, falls self-efficacy, or quality of life.

Conclusion: Despite high adherence to home modifications and fidelity to the intervention, this home hazard removal program did not reduce the risk of falling when compared to usual care. It did reduce the rate of falls, although no other effects were observed.

 

 

Commentary

Observational studies have identified factors that contribute to falls,1 and over the past 30 years a number of intervention trials designed to reduce the risk of falling have been conducted. A recent Cochrane review, published prior to the Bhasin et al and Stark et al trials, looked at the effect of multifactorial interventions for fall prevention across 62 trials that included 19,935 older adults living in the community. The review concluded that multifactorial interventions may reduce the rate of falls, but this conclusion was based on low-quality evidence and there was significant heterogeneity across the studies.2

The STRIDE randomized trial represents the latest effort to address the evidence gap around fall prevention, with the STRIDE investigators hoping this would be the definitive trial that leads to practice change in fall prevention. Smaller trials that have demonstrated effectiveness were brought to scale in this large randomized trial that included 86 practices and more than 5000 participants. The investigators used risk of injurious falls as the primary outcome, as this outcome is considered the most clinically meaningful for the study population. The results, however, were disappointing: the multifactorial intervention in STRIDE did not result in a reduction of risk of injurious falls. Challenges in the implementation of this large trial may have contributed to its results; falls care managers, key to this multifactorial intervention, reported difficulties in navigating complex relationships with patients, families, study staff, and primary care practices during the study. Barriers reported included clinical space limitations, variable buy-in from providers, and turnover of practice staff and providers.3 Such implementation factors may have resulted in the divergent results between smaller clinical trials and this large-scale trial conducted across multiple settings.

The second study, by Stark et al, examined a home modification program and its effect on risk of falls. A prior Cochrane review examining the effect of home safety assessment and modification indicates that these strategies are effective in reducing the rate of falls as well as the risk of falling.4 The results of the current trial showed a reduction in the rate of falls but not in the risk of falling; however, this study did not examine outcomes of serious injurious falls, which may be more clinically meaningful. The Stark et al study adds to the existing literature showing that home modification may have an impact on fall rates. One noteworthy aspect of the Stark et al trial is the high adherence rate to home modification in a community-based approach; perhaps the investigators’ approach can be translated to real-world use.

Applications for Clinical Practice and System Implementation

The role of exercise programs in reducing fall rates is well established,5 but neither of these studies focused on exercise interventions. STRIDE offered community-based exercise program referral, but there is variability in such programs and study staff reported challenges in matching participants with appropriate exercise programs.3 Further studies that examine combinations of multifactorial falls risk reduction, exercise, and home safety, with careful consideration of implementation challenges to assure fidelity and adherence to the intervention, are needed to ascertain the best strategy for fall prevention for older adults at risk.

Given the results of these trials, it is difficult to recommend one falls prevention intervention over another. Clinicians should continue to identify falls risk factors using standardized assessments and determine which factors are modifiable.

Practice Points

  • Incorporating assessments of falls risk in primary care is feasible, and such assessments can identify important risk factors.
  • Clinicians and health systems should identify avenues, such as developing programmatic approaches, to providing home safety assessment and intervention, exercise options, medication review, and modification of other risk factors.
  • Ensuring delivery of these elements reliably through programmatic approaches with adequate follow-up is key to preventing falls in this population.

—William W. Hung, MD, MPH

References

1. Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988; 319:1701-1707. doi:10.1056/NEJM198812293192604

2. Hopewell S, Adedire O, Copsey BJ, et al. Multifactorial and multiple component interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2018;7(7):CD012221. doi:0.1002/14651858.CD012221.pub2

3. Reckrey JM, Gazarian P, Reuben DB, et al. Barriers to implementation of STRIDE, a national study to prevent fall-related injuries. J Am Geriatr Soc. 2021;69(5):1334-1342. doi:10.1111/jgs.17056

4. Gillespie LD, Robertson MC, Gillespie WJ, et al. Interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2012;2012(9):CD007146. doi:10.1002/14651858.CD007146.pub3

5. Sherrington C, Fairhall NJ, Wallbank GK, et al. Exercise for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2019;1(1):CD012424. doi:10.1002/14651858.CD012424.pub2

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(3)
Publications
Topics
Page Number
102-105
Sections
Article PDF
Article PDF

Study 1 Overview (Bhasin et al)

Objective: To examine the effect of a multifactorial intervention for fall prevention on fall injury in community-dwelling older adults.

Design: This was a pragmatic, cluster randomized trial conducted in 86 primary care practices across 10 health care systems.

Setting and participants: The primary care sites were selected based on the prespecified criteria of size, ability to implement the intervention, proximity to other practices, accessibility to electronic health records, and access to community-based exercise programs. The primary care practices were randomly assigned to intervention or control.

Eligibility criteria for participants at those practices included age 70 years or older, dwelling in the community, and having an increased risk of falls, as determined by a history of fall-related injury in the past year, 2 or more falls in the past year, or being afraid of falling because of problems with balance or walking. Exclusion criteria were inability to provide consent or lack of proxy consent for participants who were determined to have cognitive impairment based on screening, and inability to speak English or Spanish. A total of 2802 participants were enrolled in the intervention group, and 2649 participants were enrolled in the control group.

Intervention: The intervention contained 5 components: a standardized assessment of 7 modifiable risk factors for fall injuries; standardized protocol-driven recommendations for management of risk factors; an individualized care plan focused on 1 to 3 risk factors; implementation of care plans, including referrals to community-based programs; and follow-up care conducted by telephone or in person. The modifiable risk factors included impairment of strength, gait, or balance; use of medications related to falls; postural hypotension; problems with feet or footwear; visual impairment; osteoporosis or vitamin D deficiency; and home safety hazards. The intervention was delivered by nurses who had completed online training modules and face-to-face training sessions focused on the intervention and motivational interviewing along with continuing education, in partnership with participants and their primary care providers. In the control group, participants received enhanced usual care, including an informational pamphlet, and were encouraged to discuss fall prevention with their primary care provider, including the results of their screening evaluation.

Main outcome measures: The primary outcome of the study was the first serious fall injury in a time-to-event analysis, defined as a fall resulting in a fracture (other than thoracic or lumbar vertebral fracture), joint dislocation, cut requiring closure, head injury requiring hospitalization, sprain or strain, bruising or swelling, or other serious injury. The secondary outcome was first patient-reported fall injury, also in a time-to-event analysis, ascertained by telephone interviews conducted every 4 months. Other outcomes included hospital admissions, emergency department visits, and other health care utilization. Adjudication of fall events and injuries was conducted by a team blinded to treatment assignment and verified using administrative claims data, encounter data, or electronic health record review.

Main results: The intervention and control groups were similar in terms of sex and age: 62.5% vs 61.5% of participants were women, and mean (SD) age was 79.9 (5.7) years and 79.5 (5.8) years, respectively. Other demographic characteristics were similar between groups. For the primary outcome, the rate of first serious injury was 4.9 per 100 person-years in the intervention group and 5.3 per 100 person-years in the control group, with a hazard ratio of 0.92 (95% CI, 0.80-1.06; P = .25). For the secondary outcome of patient-reported fall injury, there were 25.6 events per 100 person-years in the intervention group and 28.6 in the control group, with a hazard ratio of 0.90 (95% CI, 0.83-0.99; P =0.004). Rates of hospitalization and other secondary outcomes were similar between groups.

Conclusion: The multifactorial STRIDE intervention did not reduce the rate of serious fall injury when compared to enhanced usual care. The intervention did result in lower rates of fall injury by patient report, but no other significant outcomes were seen.

 

 

Study 2 Overview (Stark et al)

Objective: To examine the effect of a behavioral home hazard removal intervention for fall prevention on risk of fall in community-dwelling older adults.

Design: This randomized clinical trial was conducted at a single site in St. Louis, Missouri. Participants were community-dwelling older adults who received services from the Area Agency on Aging (AAA). Inclusion criteria included age 65 years and older, having 1 or more falls in the previous 12 months or being worried about falling by self report, and currently receiving services from an AAA. Exclusion criteria included living in an institution or being severely cognitively impaired and unable to follow directions or report falls. Participants who met the criteria were contacted by phone and invited to participate. A total of 310 participants were enrolled in the study, with an equal number of participants assigned to the intervention and control groups.

Intervention: The intervention included hazard identification and removal after a comprehensive assessment of participants, their behaviors, and the environment; this assessment took place during the first visit, which lasted approximately 80 minutes. A home hazard removal plan was developed, and in the second session, which lasted approximately 40 minutes, remediation of hazards was carried out. A third session for home modification that lasted approximately 30 minutes was conducted, if needed. At 6 months after the intervention, a booster session to identify and remediate any new home hazards and address issues was conducted. Specific interventions, as identified by the assessment, included minor home repair such as grab bars, adaptive equipment, task modification, and education. Shared decision making that enabled older adults to control changes in their homes, self-management strategies to improve awareness, and motivational enhancement strategies to improve acceptance were employed. Scripted algorithms and checklists were used to deliver the intervention. For usual care, an annual assessment and referrals to community services, if needed, were conducted in the AAA.

Main outcome measures: The primary outcome of the study was the number of days to first fall in 12 months. Falls were defined as unintentional movements to the floor, ground, or object below knee level, and falls were recorded through a daily journal for 12 months. Participants were contacted by phone if they did not return the journal or reported a fall. Participants were interviewed to verify falls and determine whether a fall was injurious. Secondary outcomes included rate of falls per person per 12 months; daily activity performance measured using the Older Americans Resources and Services Activities of Daily Living scale; falls self-efficacy, which measures confidence performing daily activities without falling; and quality of life using the SF-36 at 12 months.

Main results: Most of the study participants were women (74%), and mean (SD) age was 75 (7.4) years. Study retention was similar between the intervention and control groups, with 82% completing the study in the intervention group compared with 81% in the control group. Fidelity to the intervention, as measured by a checklist by the interventionist, was 99%, and adherence to home modification, as measured by number of home modifications in use by self report, was high at 92% at 6 months and 91% at 12 months. For the primary outcome, fall hazard was not different between the intervention and control groups (hazard ratio, 0.9; 95% CI, 0.66-1.27). For the secondary outcomes, the rate of falling was lower in the intervention group compared with the control group, with a relative risk of 0.62 (95% CI, 0.40-0.95). There was no difference in other secondary outcomes of daily activity performance, falls self-efficacy, or quality of life.

Conclusion: Despite high adherence to home modifications and fidelity to the intervention, this home hazard removal program did not reduce the risk of falling when compared to usual care. It did reduce the rate of falls, although no other effects were observed.

 

 

Commentary

Observational studies have identified factors that contribute to falls,1 and over the past 30 years a number of intervention trials designed to reduce the risk of falling have been conducted. A recent Cochrane review, published prior to the Bhasin et al and Stark et al trials, looked at the effect of multifactorial interventions for fall prevention across 62 trials that included 19,935 older adults living in the community. The review concluded that multifactorial interventions may reduce the rate of falls, but this conclusion was based on low-quality evidence and there was significant heterogeneity across the studies.2

The STRIDE randomized trial represents the latest effort to address the evidence gap around fall prevention, with the STRIDE investigators hoping this would be the definitive trial that leads to practice change in fall prevention. Smaller trials that have demonstrated effectiveness were brought to scale in this large randomized trial that included 86 practices and more than 5000 participants. The investigators used risk of injurious falls as the primary outcome, as this outcome is considered the most clinically meaningful for the study population. The results, however, were disappointing: the multifactorial intervention in STRIDE did not result in a reduction of risk of injurious falls. Challenges in the implementation of this large trial may have contributed to its results; falls care managers, key to this multifactorial intervention, reported difficulties in navigating complex relationships with patients, families, study staff, and primary care practices during the study. Barriers reported included clinical space limitations, variable buy-in from providers, and turnover of practice staff and providers.3 Such implementation factors may have resulted in the divergent results between smaller clinical trials and this large-scale trial conducted across multiple settings.

The second study, by Stark et al, examined a home modification program and its effect on risk of falls. A prior Cochrane review examining the effect of home safety assessment and modification indicates that these strategies are effective in reducing the rate of falls as well as the risk of falling.4 The results of the current trial showed a reduction in the rate of falls but not in the risk of falling; however, this study did not examine outcomes of serious injurious falls, which may be more clinically meaningful. The Stark et al study adds to the existing literature showing that home modification may have an impact on fall rates. One noteworthy aspect of the Stark et al trial is the high adherence rate to home modification in a community-based approach; perhaps the investigators’ approach can be translated to real-world use.

Applications for Clinical Practice and System Implementation

The role of exercise programs in reducing fall rates is well established,5 but neither of these studies focused on exercise interventions. STRIDE offered community-based exercise program referral, but there is variability in such programs and study staff reported challenges in matching participants with appropriate exercise programs.3 Further studies that examine combinations of multifactorial falls risk reduction, exercise, and home safety, with careful consideration of implementation challenges to assure fidelity and adherence to the intervention, are needed to ascertain the best strategy for fall prevention for older adults at risk.

Given the results of these trials, it is difficult to recommend one falls prevention intervention over another. Clinicians should continue to identify falls risk factors using standardized assessments and determine which factors are modifiable.

Practice Points

  • Incorporating assessments of falls risk in primary care is feasible, and such assessments can identify important risk factors.
  • Clinicians and health systems should identify avenues, such as developing programmatic approaches, to providing home safety assessment and intervention, exercise options, medication review, and modification of other risk factors.
  • Ensuring delivery of these elements reliably through programmatic approaches with adequate follow-up is key to preventing falls in this population.

—William W. Hung, MD, MPH

Study 1 Overview (Bhasin et al)

Objective: To examine the effect of a multifactorial intervention for fall prevention on fall injury in community-dwelling older adults.

Design: This was a pragmatic, cluster randomized trial conducted in 86 primary care practices across 10 health care systems.

Setting and participants: The primary care sites were selected based on the prespecified criteria of size, ability to implement the intervention, proximity to other practices, accessibility to electronic health records, and access to community-based exercise programs. The primary care practices were randomly assigned to intervention or control.

Eligibility criteria for participants at those practices included age 70 years or older, dwelling in the community, and having an increased risk of falls, as determined by a history of fall-related injury in the past year, 2 or more falls in the past year, or being afraid of falling because of problems with balance or walking. Exclusion criteria were inability to provide consent or lack of proxy consent for participants who were determined to have cognitive impairment based on screening, and inability to speak English or Spanish. A total of 2802 participants were enrolled in the intervention group, and 2649 participants were enrolled in the control group.

Intervention: The intervention contained 5 components: a standardized assessment of 7 modifiable risk factors for fall injuries; standardized protocol-driven recommendations for management of risk factors; an individualized care plan focused on 1 to 3 risk factors; implementation of care plans, including referrals to community-based programs; and follow-up care conducted by telephone or in person. The modifiable risk factors included impairment of strength, gait, or balance; use of medications related to falls; postural hypotension; problems with feet or footwear; visual impairment; osteoporosis or vitamin D deficiency; and home safety hazards. The intervention was delivered by nurses who had completed online training modules and face-to-face training sessions focused on the intervention and motivational interviewing along with continuing education, in partnership with participants and their primary care providers. In the control group, participants received enhanced usual care, including an informational pamphlet, and were encouraged to discuss fall prevention with their primary care provider, including the results of their screening evaluation.

Main outcome measures: The primary outcome of the study was the first serious fall injury in a time-to-event analysis, defined as a fall resulting in a fracture (other than thoracic or lumbar vertebral fracture), joint dislocation, cut requiring closure, head injury requiring hospitalization, sprain or strain, bruising or swelling, or other serious injury. The secondary outcome was first patient-reported fall injury, also in a time-to-event analysis, ascertained by telephone interviews conducted every 4 months. Other outcomes included hospital admissions, emergency department visits, and other health care utilization. Adjudication of fall events and injuries was conducted by a team blinded to treatment assignment and verified using administrative claims data, encounter data, or electronic health record review.

Main results: The intervention and control groups were similar in terms of sex and age: 62.5% vs 61.5% of participants were women, and mean (SD) age was 79.9 (5.7) years and 79.5 (5.8) years, respectively. Other demographic characteristics were similar between groups. For the primary outcome, the rate of first serious injury was 4.9 per 100 person-years in the intervention group and 5.3 per 100 person-years in the control group, with a hazard ratio of 0.92 (95% CI, 0.80-1.06; P = .25). For the secondary outcome of patient-reported fall injury, there were 25.6 events per 100 person-years in the intervention group and 28.6 in the control group, with a hazard ratio of 0.90 (95% CI, 0.83-0.99; P =0.004). Rates of hospitalization and other secondary outcomes were similar between groups.

Conclusion: The multifactorial STRIDE intervention did not reduce the rate of serious fall injury when compared to enhanced usual care. The intervention did result in lower rates of fall injury by patient report, but no other significant outcomes were seen.

 

 

Study 2 Overview (Stark et al)

Objective: To examine the effect of a behavioral home hazard removal intervention for fall prevention on risk of fall in community-dwelling older adults.

Design: This randomized clinical trial was conducted at a single site in St. Louis, Missouri. Participants were community-dwelling older adults who received services from the Area Agency on Aging (AAA). Inclusion criteria included age 65 years and older, having 1 or more falls in the previous 12 months or being worried about falling by self report, and currently receiving services from an AAA. Exclusion criteria included living in an institution or being severely cognitively impaired and unable to follow directions or report falls. Participants who met the criteria were contacted by phone and invited to participate. A total of 310 participants were enrolled in the study, with an equal number of participants assigned to the intervention and control groups.

Intervention: The intervention included hazard identification and removal after a comprehensive assessment of participants, their behaviors, and the environment; this assessment took place during the first visit, which lasted approximately 80 minutes. A home hazard removal plan was developed, and in the second session, which lasted approximately 40 minutes, remediation of hazards was carried out. A third session for home modification that lasted approximately 30 minutes was conducted, if needed. At 6 months after the intervention, a booster session to identify and remediate any new home hazards and address issues was conducted. Specific interventions, as identified by the assessment, included minor home repair such as grab bars, adaptive equipment, task modification, and education. Shared decision making that enabled older adults to control changes in their homes, self-management strategies to improve awareness, and motivational enhancement strategies to improve acceptance were employed. Scripted algorithms and checklists were used to deliver the intervention. For usual care, an annual assessment and referrals to community services, if needed, were conducted in the AAA.

Main outcome measures: The primary outcome of the study was the number of days to first fall in 12 months. Falls were defined as unintentional movements to the floor, ground, or object below knee level, and falls were recorded through a daily journal for 12 months. Participants were contacted by phone if they did not return the journal or reported a fall. Participants were interviewed to verify falls and determine whether a fall was injurious. Secondary outcomes included rate of falls per person per 12 months; daily activity performance measured using the Older Americans Resources and Services Activities of Daily Living scale; falls self-efficacy, which measures confidence performing daily activities without falling; and quality of life using the SF-36 at 12 months.

Main results: Most of the study participants were women (74%), and mean (SD) age was 75 (7.4) years. Study retention was similar between the intervention and control groups, with 82% completing the study in the intervention group compared with 81% in the control group. Fidelity to the intervention, as measured by a checklist by the interventionist, was 99%, and adherence to home modification, as measured by number of home modifications in use by self report, was high at 92% at 6 months and 91% at 12 months. For the primary outcome, fall hazard was not different between the intervention and control groups (hazard ratio, 0.9; 95% CI, 0.66-1.27). For the secondary outcomes, the rate of falling was lower in the intervention group compared with the control group, with a relative risk of 0.62 (95% CI, 0.40-0.95). There was no difference in other secondary outcomes of daily activity performance, falls self-efficacy, or quality of life.

Conclusion: Despite high adherence to home modifications and fidelity to the intervention, this home hazard removal program did not reduce the risk of falling when compared to usual care. It did reduce the rate of falls, although no other effects were observed.

 

 

Commentary

Observational studies have identified factors that contribute to falls,1 and over the past 30 years a number of intervention trials designed to reduce the risk of falling have been conducted. A recent Cochrane review, published prior to the Bhasin et al and Stark et al trials, looked at the effect of multifactorial interventions for fall prevention across 62 trials that included 19,935 older adults living in the community. The review concluded that multifactorial interventions may reduce the rate of falls, but this conclusion was based on low-quality evidence and there was significant heterogeneity across the studies.2

The STRIDE randomized trial represents the latest effort to address the evidence gap around fall prevention, with the STRIDE investigators hoping this would be the definitive trial that leads to practice change in fall prevention. Smaller trials that have demonstrated effectiveness were brought to scale in this large randomized trial that included 86 practices and more than 5000 participants. The investigators used risk of injurious falls as the primary outcome, as this outcome is considered the most clinically meaningful for the study population. The results, however, were disappointing: the multifactorial intervention in STRIDE did not result in a reduction of risk of injurious falls. Challenges in the implementation of this large trial may have contributed to its results; falls care managers, key to this multifactorial intervention, reported difficulties in navigating complex relationships with patients, families, study staff, and primary care practices during the study. Barriers reported included clinical space limitations, variable buy-in from providers, and turnover of practice staff and providers.3 Such implementation factors may have resulted in the divergent results between smaller clinical trials and this large-scale trial conducted across multiple settings.

The second study, by Stark et al, examined a home modification program and its effect on risk of falls. A prior Cochrane review examining the effect of home safety assessment and modification indicates that these strategies are effective in reducing the rate of falls as well as the risk of falling.4 The results of the current trial showed a reduction in the rate of falls but not in the risk of falling; however, this study did not examine outcomes of serious injurious falls, which may be more clinically meaningful. The Stark et al study adds to the existing literature showing that home modification may have an impact on fall rates. One noteworthy aspect of the Stark et al trial is the high adherence rate to home modification in a community-based approach; perhaps the investigators’ approach can be translated to real-world use.

Applications for Clinical Practice and System Implementation

The role of exercise programs in reducing fall rates is well established,5 but neither of these studies focused on exercise interventions. STRIDE offered community-based exercise program referral, but there is variability in such programs and study staff reported challenges in matching participants with appropriate exercise programs.3 Further studies that examine combinations of multifactorial falls risk reduction, exercise, and home safety, with careful consideration of implementation challenges to assure fidelity and adherence to the intervention, are needed to ascertain the best strategy for fall prevention for older adults at risk.

Given the results of these trials, it is difficult to recommend one falls prevention intervention over another. Clinicians should continue to identify falls risk factors using standardized assessments and determine which factors are modifiable.

Practice Points

  • Incorporating assessments of falls risk in primary care is feasible, and such assessments can identify important risk factors.
  • Clinicians and health systems should identify avenues, such as developing programmatic approaches, to providing home safety assessment and intervention, exercise options, medication review, and modification of other risk factors.
  • Ensuring delivery of these elements reliably through programmatic approaches with adequate follow-up is key to preventing falls in this population.

—William W. Hung, MD, MPH

References

1. Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988; 319:1701-1707. doi:10.1056/NEJM198812293192604

2. Hopewell S, Adedire O, Copsey BJ, et al. Multifactorial and multiple component interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2018;7(7):CD012221. doi:0.1002/14651858.CD012221.pub2

3. Reckrey JM, Gazarian P, Reuben DB, et al. Barriers to implementation of STRIDE, a national study to prevent fall-related injuries. J Am Geriatr Soc. 2021;69(5):1334-1342. doi:10.1111/jgs.17056

4. Gillespie LD, Robertson MC, Gillespie WJ, et al. Interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2012;2012(9):CD007146. doi:10.1002/14651858.CD007146.pub3

5. Sherrington C, Fairhall NJ, Wallbank GK, et al. Exercise for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2019;1(1):CD012424. doi:10.1002/14651858.CD012424.pub2

References

1. Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988; 319:1701-1707. doi:10.1056/NEJM198812293192604

2. Hopewell S, Adedire O, Copsey BJ, et al. Multifactorial and multiple component interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2018;7(7):CD012221. doi:0.1002/14651858.CD012221.pub2

3. Reckrey JM, Gazarian P, Reuben DB, et al. Barriers to implementation of STRIDE, a national study to prevent fall-related injuries. J Am Geriatr Soc. 2021;69(5):1334-1342. doi:10.1111/jgs.17056

4. Gillespie LD, Robertson MC, Gillespie WJ, et al. Interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2012;2012(9):CD007146. doi:10.1002/14651858.CD007146.pub3

5. Sherrington C, Fairhall NJ, Wallbank GK, et al. Exercise for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2019;1(1):CD012424. doi:10.1002/14651858.CD012424.pub2

Issue
Journal of Clinical Outcomes Management - 29(3)
Issue
Journal of Clinical Outcomes Management - 29(3)
Page Number
102-105
Page Number
102-105
Publications
Publications
Topics
Article Type
Display Headline
Fall Injury Among Community-Dwelling Older Adults: Effect of a Multifactorial Intervention and a Home Hazard Removal Program
Display Headline
Fall Injury Among Community-Dwelling Older Adults: Effect of a Multifactorial Intervention and a Home Hazard Removal Program
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Using a Real-Time Prediction Algorithm to Improve Sleep in the Hospital

Article Type
Changed
Fri, 03/25/2022 - 11:41
Display Headline
Using a Real-Time Prediction Algorithm to Improve Sleep in the Hospital

Study Overview

Objective: This study evaluated whether a clinical-decision-support (CDS) tool that utilizes a real-time algorithm incorporating patient vital sign data can identify hospitalized patients who can forgo overnight vital sign checks and thus reduce delirium incidence.

Design: This was a parallel randomized clinical trial of adult inpatients admitted to the general medical service of a tertiary care academic medical center in the United States. The trial intervention consisted of a CDS notification in the electronic health record (EHR) that informed the physician if a patient had a high likelihood of nighttime vital signs within the reference ranges based on a logistic regression model of real-time patient data input. This notification provided the physician an opportunity to discontinue nighttime vital sign checks, dismiss the notification for 1 hour, or dismiss the notification until the next day.

Setting and participants: This clinical trial was conducted at the University of California, San Francisco Medical Center from March 11 to November 24, 2019. Participants included physicians who served on the primary team (eg, attending, resident) of 1699 patients on the general medical service who were outside of the intensive care unit (ICU). The hospital encounters were randomized (allocation ratio of 1:1) to sleep promotion vitals CDS (SPV CDS) intervention or usual care.

Main outcome and measures: The primary outcome was delirium as determined by bedside nurse assessment using the Nursing Delirium Screening Scale (Nu-DESC) recorded once per nursing shift. The Nu-DESC is a standardized delirium screening tool that defines delirium with a score ≥2. Secondary outcomes included sleep opportunity (ie, EHR-based sleep metrics that reflected the maximum time between iatrogenic interruptions, such as nighttime vital sign checks) and patient satisfaction (ie, patient satisfaction measured by standardized Hospital Consumer Assessment of Healthcare Providers and Systems [HCAHPS] survey). Potential balancing outcomes were assessed to ensure that reduced vital sign checks were not causing harms; these included ICU transfers, rapid response calls, and code blue alarms. All analyses were conducted on the basis of intention-to-treat.

Main results: A total of 3025 inpatient encounters were screened and 1930 encounters were randomized (966 SPV CDS intervention; 964 usual care). The randomized encounters consisted of 1699 patients; demographic factors between the 2 trial arms were similar. Specifically, the intervention arm included 566 men (59%) and mean (SD) age was 53 (15) years. The incidence of delirium was similar between the intervention and usual care arms: 108 (11%) vs 123 (13%) (P = .32). Compared to the usual care arm, the intervention arm had a higher mean (SD) number of sleep opportunity hours per night (4.95 [1.45] vs 4.57 [1.30], P < .001) and fewer nighttime vital sign checks (0.97 [0.95] vs 1.41 [0.86], P < .001). The post-discharge HCAHPS survey measuring patient satisfaction was completed by only 5% of patients (53 intervention, 49 usual care), and survey results were similar between the 2 arms (P = .86). In addition, safety outcomes including ICU transfers (49 [5%] vs 47 [5%], P = .92), rapid response calls (68 [7%] vs 55 [6%], P = .27), and code blue alarms (2 [0.2%] vs 9 [0.9%], P = .07) were similar between the study arms.

Conclusion: In this randomized clinical trial, a CDS tool utilizing a real-time prediction algorithm embedded in EHR did not reduce the incidence of delirium in hospitalized patients. However, this SPV CDS intervention helped physicians identify clinically stable patients who can forgo routine nighttime vital sign checks and facilitated greater opportunity for patients to sleep. These findings suggest that augmenting physician judgment using a real-time prediction algorithm can help to improve sleep opportunity without an accompanying increased risk of clinical decompensation during acute care.

 

 

Commentary

High-quality sleep is fundamental to health and well-being. Sleep deprivation and disorders are associated with many adverse health outcomes, including increased risks for obesity, diabetes, hypertension, myocardial infarction, and depression.1 In hospitalized patients who are acutely ill, restorative sleep is critical to facilitating recovery. However, poor sleep is exceedingly common in hospitalized patients and is associated with deleterious outcomes, such as high blood pressure, hyperglycemia, and delirium.2,3 Moreover, some of these adverse sleep-induced cardiometabolic outcomes, as well as sleep disruption itself, may persist after hospital discharge.4 Factors that precipitate interrupted sleep during hospitalization include iatrogenic causes such as frequent vital sign checks, nighttime procedures or early morning blood draws, and environmental factors such as loud ambient noise.3 Thus, a potential intervention to improve sleep quality in the hospital is to reduce nighttime interruptions such as frequent vital sign checks.

In the current study, Najafi and colleagues conducted a randomized trial to evaluate whether a CDS tool embedded in EHR, powered by a real-time prediction algorithm of patient data, can be utilized to identify patients in whom vital sign checks can be safely discontinued at nighttime. The authors found a modest but statistically significant reduction in the number of nighttime vital sign checks in patients who underwent the SPV CDS intervention, and a corresponding higher sleep opportunity per night in those who received the intervention. Importantly, this reduction in nighttime vital sign checks did not cause a higher risk of clinical decompensation as measured by ICU transfers, rapid response calls, or code blue alarms. Thus, the results demonstrated the feasibility of using a real-time, patient data-driven CDS tool to augment physician judgment in managing sleep disruption, an important hospital-associated stressor and a common hazard of hospitalization in older patients.

Delirium is a common clinical problem in hospitalized older patients that is associated with prolonged hospitalization, functional and cognitive decline, institutionalization, death, and increased health care costs.5 Despite a potential benefit of SPV CDS intervention in reducing vital sign checks and increasing sleep opportunity, this intervention did not reduce the incidence of delirium in hospitalized patients. This finding is not surprising given that delirium has a multifactorial etiology (eg, metabolic derangements, infections, medication side effects and drug toxicity, hospital environment). A small modification in nighttime vital sign checks and sleep opportunity may have limited impact on optimizing sleep quality and does not address other risk factors for delirium. As such, a multicomponent nonpharmacologic approach that includes sleep enhancement, early mobilization, feeding assistance, fluid repletion, infection prevention, and other interventions should guide delirium prevention in the hospital setting. The SPV CDS intervention may play a role in the delivery of a multifaceted, nonpharmacologic delirium prevention intervention in high-risk individuals.

Sleep disruption is one of the multiple hazards of hospitalization frequently experience by hospitalized older patients. Other hazards, or hospital-associated stressors, include mobility restriction (eg, physical restraints such as urinary catheters and intravenous lines, bed elevation and rails), malnourishment and dehydration (eg, frequent use of no-food-by-mouth order, lack of easy access to hydration), and pain (eg, poor pain control). Extended exposures to these stressors may lead to a maladaptive state called allostatic overload that transiently increases vulnerability to post-hospitalization adverse events, including emergency department use, hospital readmission, or death (ie, post-hospital syndrome).6 Thus, the optimization of sleep during hospitalization in vulnerable patients may have benefits that extend beyond delirium prevention. It is perceivable that a CDS tool embedded in EHR, powered by a real-time prediction algorithm of patient data, may be applied to reduce some of these hazards of hospitalization in addition to improving sleep opportunity.

Applications for Clinical Practice

Findings from the current study indicate that a CDS tool embedded in EHR that utilizes a real-time prediction algorithm of patient data may help to safely improve sleep opportunity in hospitalized patients. The participants in the current study were relatively young (53 [15] years). Given that age is a risk factor for delirium, the effects of this intervention on delirium prevention in the most susceptible population (ie, those over the age of 65) remain unknown and further investigation is warranted. Additional studies are needed to determine whether this approach yields similar results in geriatric patients and improves clinical outcomes.

—Fred Ko, MD

References

1. Institute of Medicine (US) Committee on Sleep Medicine and Research. Sleep Disorders and Sleep Deprivation: An Unmet Public Health Problem. Colten HR, Altevogt BM, editors. National Academies Press (US); 2006.

2. Pilkington S. Causes and consequences of sleep deprivation in hospitalised patients. Nurs Stand. 2013;27(49):350-342. doi:10.7748/ns2013.08.27.49.35.e7649

3. Stewart NH, Arora VM. Sleep in hospitalized older adults. Sleep Med Clin. 2018;13(1):127-135. doi:10.1016/j.jsmc.2017.09.012

4. Altman MT, Knauert MP, Pisani MA. Sleep disturbance after hospitalization and critical illness: a systematic review. Ann Am Thorac Soc. 2017;14(9):1457-1468. doi:10.1513/AnnalsATS.201702-148SR

5. Oh ES, Fong TG, Hshieh TT, Inouye SK. Delirium in older persons: advances in diagnosis and treatment. JAMA. 2017;318(12):1161-1174. doi:10.1001/jama.2017.12067

6. Goldwater DS, Dharmarajan K, McEwan BS, Krumholz HM. Is posthospital syndrome a result of hospitalization-induced allostatic overload? J Hosp Med. 2018;13(5). doi:10.12788/jhm.2986

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(2)
Publications
Topics
Page Number
54-56
Sections
Article PDF
Article PDF

Study Overview

Objective: This study evaluated whether a clinical-decision-support (CDS) tool that utilizes a real-time algorithm incorporating patient vital sign data can identify hospitalized patients who can forgo overnight vital sign checks and thus reduce delirium incidence.

Design: This was a parallel randomized clinical trial of adult inpatients admitted to the general medical service of a tertiary care academic medical center in the United States. The trial intervention consisted of a CDS notification in the electronic health record (EHR) that informed the physician if a patient had a high likelihood of nighttime vital signs within the reference ranges based on a logistic regression model of real-time patient data input. This notification provided the physician an opportunity to discontinue nighttime vital sign checks, dismiss the notification for 1 hour, or dismiss the notification until the next day.

Setting and participants: This clinical trial was conducted at the University of California, San Francisco Medical Center from March 11 to November 24, 2019. Participants included physicians who served on the primary team (eg, attending, resident) of 1699 patients on the general medical service who were outside of the intensive care unit (ICU). The hospital encounters were randomized (allocation ratio of 1:1) to sleep promotion vitals CDS (SPV CDS) intervention or usual care.

Main outcome and measures: The primary outcome was delirium as determined by bedside nurse assessment using the Nursing Delirium Screening Scale (Nu-DESC) recorded once per nursing shift. The Nu-DESC is a standardized delirium screening tool that defines delirium with a score ≥2. Secondary outcomes included sleep opportunity (ie, EHR-based sleep metrics that reflected the maximum time between iatrogenic interruptions, such as nighttime vital sign checks) and patient satisfaction (ie, patient satisfaction measured by standardized Hospital Consumer Assessment of Healthcare Providers and Systems [HCAHPS] survey). Potential balancing outcomes were assessed to ensure that reduced vital sign checks were not causing harms; these included ICU transfers, rapid response calls, and code blue alarms. All analyses were conducted on the basis of intention-to-treat.

Main results: A total of 3025 inpatient encounters were screened and 1930 encounters were randomized (966 SPV CDS intervention; 964 usual care). The randomized encounters consisted of 1699 patients; demographic factors between the 2 trial arms were similar. Specifically, the intervention arm included 566 men (59%) and mean (SD) age was 53 (15) years. The incidence of delirium was similar between the intervention and usual care arms: 108 (11%) vs 123 (13%) (P = .32). Compared to the usual care arm, the intervention arm had a higher mean (SD) number of sleep opportunity hours per night (4.95 [1.45] vs 4.57 [1.30], P < .001) and fewer nighttime vital sign checks (0.97 [0.95] vs 1.41 [0.86], P < .001). The post-discharge HCAHPS survey measuring patient satisfaction was completed by only 5% of patients (53 intervention, 49 usual care), and survey results were similar between the 2 arms (P = .86). In addition, safety outcomes including ICU transfers (49 [5%] vs 47 [5%], P = .92), rapid response calls (68 [7%] vs 55 [6%], P = .27), and code blue alarms (2 [0.2%] vs 9 [0.9%], P = .07) were similar between the study arms.

Conclusion: In this randomized clinical trial, a CDS tool utilizing a real-time prediction algorithm embedded in EHR did not reduce the incidence of delirium in hospitalized patients. However, this SPV CDS intervention helped physicians identify clinically stable patients who can forgo routine nighttime vital sign checks and facilitated greater opportunity for patients to sleep. These findings suggest that augmenting physician judgment using a real-time prediction algorithm can help to improve sleep opportunity without an accompanying increased risk of clinical decompensation during acute care.

 

 

Commentary

High-quality sleep is fundamental to health and well-being. Sleep deprivation and disorders are associated with many adverse health outcomes, including increased risks for obesity, diabetes, hypertension, myocardial infarction, and depression.1 In hospitalized patients who are acutely ill, restorative sleep is critical to facilitating recovery. However, poor sleep is exceedingly common in hospitalized patients and is associated with deleterious outcomes, such as high blood pressure, hyperglycemia, and delirium.2,3 Moreover, some of these adverse sleep-induced cardiometabolic outcomes, as well as sleep disruption itself, may persist after hospital discharge.4 Factors that precipitate interrupted sleep during hospitalization include iatrogenic causes such as frequent vital sign checks, nighttime procedures or early morning blood draws, and environmental factors such as loud ambient noise.3 Thus, a potential intervention to improve sleep quality in the hospital is to reduce nighttime interruptions such as frequent vital sign checks.

In the current study, Najafi and colleagues conducted a randomized trial to evaluate whether a CDS tool embedded in EHR, powered by a real-time prediction algorithm of patient data, can be utilized to identify patients in whom vital sign checks can be safely discontinued at nighttime. The authors found a modest but statistically significant reduction in the number of nighttime vital sign checks in patients who underwent the SPV CDS intervention, and a corresponding higher sleep opportunity per night in those who received the intervention. Importantly, this reduction in nighttime vital sign checks did not cause a higher risk of clinical decompensation as measured by ICU transfers, rapid response calls, or code blue alarms. Thus, the results demonstrated the feasibility of using a real-time, patient data-driven CDS tool to augment physician judgment in managing sleep disruption, an important hospital-associated stressor and a common hazard of hospitalization in older patients.

Delirium is a common clinical problem in hospitalized older patients that is associated with prolonged hospitalization, functional and cognitive decline, institutionalization, death, and increased health care costs.5 Despite a potential benefit of SPV CDS intervention in reducing vital sign checks and increasing sleep opportunity, this intervention did not reduce the incidence of delirium in hospitalized patients. This finding is not surprising given that delirium has a multifactorial etiology (eg, metabolic derangements, infections, medication side effects and drug toxicity, hospital environment). A small modification in nighttime vital sign checks and sleep opportunity may have limited impact on optimizing sleep quality and does not address other risk factors for delirium. As such, a multicomponent nonpharmacologic approach that includes sleep enhancement, early mobilization, feeding assistance, fluid repletion, infection prevention, and other interventions should guide delirium prevention in the hospital setting. The SPV CDS intervention may play a role in the delivery of a multifaceted, nonpharmacologic delirium prevention intervention in high-risk individuals.

Sleep disruption is one of the multiple hazards of hospitalization frequently experience by hospitalized older patients. Other hazards, or hospital-associated stressors, include mobility restriction (eg, physical restraints such as urinary catheters and intravenous lines, bed elevation and rails), malnourishment and dehydration (eg, frequent use of no-food-by-mouth order, lack of easy access to hydration), and pain (eg, poor pain control). Extended exposures to these stressors may lead to a maladaptive state called allostatic overload that transiently increases vulnerability to post-hospitalization adverse events, including emergency department use, hospital readmission, or death (ie, post-hospital syndrome).6 Thus, the optimization of sleep during hospitalization in vulnerable patients may have benefits that extend beyond delirium prevention. It is perceivable that a CDS tool embedded in EHR, powered by a real-time prediction algorithm of patient data, may be applied to reduce some of these hazards of hospitalization in addition to improving sleep opportunity.

Applications for Clinical Practice

Findings from the current study indicate that a CDS tool embedded in EHR that utilizes a real-time prediction algorithm of patient data may help to safely improve sleep opportunity in hospitalized patients. The participants in the current study were relatively young (53 [15] years). Given that age is a risk factor for delirium, the effects of this intervention on delirium prevention in the most susceptible population (ie, those over the age of 65) remain unknown and further investigation is warranted. Additional studies are needed to determine whether this approach yields similar results in geriatric patients and improves clinical outcomes.

—Fred Ko, MD

Study Overview

Objective: This study evaluated whether a clinical-decision-support (CDS) tool that utilizes a real-time algorithm incorporating patient vital sign data can identify hospitalized patients who can forgo overnight vital sign checks and thus reduce delirium incidence.

Design: This was a parallel randomized clinical trial of adult inpatients admitted to the general medical service of a tertiary care academic medical center in the United States. The trial intervention consisted of a CDS notification in the electronic health record (EHR) that informed the physician if a patient had a high likelihood of nighttime vital signs within the reference ranges based on a logistic regression model of real-time patient data input. This notification provided the physician an opportunity to discontinue nighttime vital sign checks, dismiss the notification for 1 hour, or dismiss the notification until the next day.

Setting and participants: This clinical trial was conducted at the University of California, San Francisco Medical Center from March 11 to November 24, 2019. Participants included physicians who served on the primary team (eg, attending, resident) of 1699 patients on the general medical service who were outside of the intensive care unit (ICU). The hospital encounters were randomized (allocation ratio of 1:1) to sleep promotion vitals CDS (SPV CDS) intervention or usual care.

Main outcome and measures: The primary outcome was delirium as determined by bedside nurse assessment using the Nursing Delirium Screening Scale (Nu-DESC) recorded once per nursing shift. The Nu-DESC is a standardized delirium screening tool that defines delirium with a score ≥2. Secondary outcomes included sleep opportunity (ie, EHR-based sleep metrics that reflected the maximum time between iatrogenic interruptions, such as nighttime vital sign checks) and patient satisfaction (ie, patient satisfaction measured by standardized Hospital Consumer Assessment of Healthcare Providers and Systems [HCAHPS] survey). Potential balancing outcomes were assessed to ensure that reduced vital sign checks were not causing harms; these included ICU transfers, rapid response calls, and code blue alarms. All analyses were conducted on the basis of intention-to-treat.

Main results: A total of 3025 inpatient encounters were screened and 1930 encounters were randomized (966 SPV CDS intervention; 964 usual care). The randomized encounters consisted of 1699 patients; demographic factors between the 2 trial arms were similar. Specifically, the intervention arm included 566 men (59%) and mean (SD) age was 53 (15) years. The incidence of delirium was similar between the intervention and usual care arms: 108 (11%) vs 123 (13%) (P = .32). Compared to the usual care arm, the intervention arm had a higher mean (SD) number of sleep opportunity hours per night (4.95 [1.45] vs 4.57 [1.30], P < .001) and fewer nighttime vital sign checks (0.97 [0.95] vs 1.41 [0.86], P < .001). The post-discharge HCAHPS survey measuring patient satisfaction was completed by only 5% of patients (53 intervention, 49 usual care), and survey results were similar between the 2 arms (P = .86). In addition, safety outcomes including ICU transfers (49 [5%] vs 47 [5%], P = .92), rapid response calls (68 [7%] vs 55 [6%], P = .27), and code blue alarms (2 [0.2%] vs 9 [0.9%], P = .07) were similar between the study arms.

Conclusion: In this randomized clinical trial, a CDS tool utilizing a real-time prediction algorithm embedded in EHR did not reduce the incidence of delirium in hospitalized patients. However, this SPV CDS intervention helped physicians identify clinically stable patients who can forgo routine nighttime vital sign checks and facilitated greater opportunity for patients to sleep. These findings suggest that augmenting physician judgment using a real-time prediction algorithm can help to improve sleep opportunity without an accompanying increased risk of clinical decompensation during acute care.

 

 

Commentary

High-quality sleep is fundamental to health and well-being. Sleep deprivation and disorders are associated with many adverse health outcomes, including increased risks for obesity, diabetes, hypertension, myocardial infarction, and depression.1 In hospitalized patients who are acutely ill, restorative sleep is critical to facilitating recovery. However, poor sleep is exceedingly common in hospitalized patients and is associated with deleterious outcomes, such as high blood pressure, hyperglycemia, and delirium.2,3 Moreover, some of these adverse sleep-induced cardiometabolic outcomes, as well as sleep disruption itself, may persist after hospital discharge.4 Factors that precipitate interrupted sleep during hospitalization include iatrogenic causes such as frequent vital sign checks, nighttime procedures or early morning blood draws, and environmental factors such as loud ambient noise.3 Thus, a potential intervention to improve sleep quality in the hospital is to reduce nighttime interruptions such as frequent vital sign checks.

In the current study, Najafi and colleagues conducted a randomized trial to evaluate whether a CDS tool embedded in EHR, powered by a real-time prediction algorithm of patient data, can be utilized to identify patients in whom vital sign checks can be safely discontinued at nighttime. The authors found a modest but statistically significant reduction in the number of nighttime vital sign checks in patients who underwent the SPV CDS intervention, and a corresponding higher sleep opportunity per night in those who received the intervention. Importantly, this reduction in nighttime vital sign checks did not cause a higher risk of clinical decompensation as measured by ICU transfers, rapid response calls, or code blue alarms. Thus, the results demonstrated the feasibility of using a real-time, patient data-driven CDS tool to augment physician judgment in managing sleep disruption, an important hospital-associated stressor and a common hazard of hospitalization in older patients.

Delirium is a common clinical problem in hospitalized older patients that is associated with prolonged hospitalization, functional and cognitive decline, institutionalization, death, and increased health care costs.5 Despite a potential benefit of SPV CDS intervention in reducing vital sign checks and increasing sleep opportunity, this intervention did not reduce the incidence of delirium in hospitalized patients. This finding is not surprising given that delirium has a multifactorial etiology (eg, metabolic derangements, infections, medication side effects and drug toxicity, hospital environment). A small modification in nighttime vital sign checks and sleep opportunity may have limited impact on optimizing sleep quality and does not address other risk factors for delirium. As such, a multicomponent nonpharmacologic approach that includes sleep enhancement, early mobilization, feeding assistance, fluid repletion, infection prevention, and other interventions should guide delirium prevention in the hospital setting. The SPV CDS intervention may play a role in the delivery of a multifaceted, nonpharmacologic delirium prevention intervention in high-risk individuals.

Sleep disruption is one of the multiple hazards of hospitalization frequently experience by hospitalized older patients. Other hazards, or hospital-associated stressors, include mobility restriction (eg, physical restraints such as urinary catheters and intravenous lines, bed elevation and rails), malnourishment and dehydration (eg, frequent use of no-food-by-mouth order, lack of easy access to hydration), and pain (eg, poor pain control). Extended exposures to these stressors may lead to a maladaptive state called allostatic overload that transiently increases vulnerability to post-hospitalization adverse events, including emergency department use, hospital readmission, or death (ie, post-hospital syndrome).6 Thus, the optimization of sleep during hospitalization in vulnerable patients may have benefits that extend beyond delirium prevention. It is perceivable that a CDS tool embedded in EHR, powered by a real-time prediction algorithm of patient data, may be applied to reduce some of these hazards of hospitalization in addition to improving sleep opportunity.

Applications for Clinical Practice

Findings from the current study indicate that a CDS tool embedded in EHR that utilizes a real-time prediction algorithm of patient data may help to safely improve sleep opportunity in hospitalized patients. The participants in the current study were relatively young (53 [15] years). Given that age is a risk factor for delirium, the effects of this intervention on delirium prevention in the most susceptible population (ie, those over the age of 65) remain unknown and further investigation is warranted. Additional studies are needed to determine whether this approach yields similar results in geriatric patients and improves clinical outcomes.

—Fred Ko, MD

References

1. Institute of Medicine (US) Committee on Sleep Medicine and Research. Sleep Disorders and Sleep Deprivation: An Unmet Public Health Problem. Colten HR, Altevogt BM, editors. National Academies Press (US); 2006.

2. Pilkington S. Causes and consequences of sleep deprivation in hospitalised patients. Nurs Stand. 2013;27(49):350-342. doi:10.7748/ns2013.08.27.49.35.e7649

3. Stewart NH, Arora VM. Sleep in hospitalized older adults. Sleep Med Clin. 2018;13(1):127-135. doi:10.1016/j.jsmc.2017.09.012

4. Altman MT, Knauert MP, Pisani MA. Sleep disturbance after hospitalization and critical illness: a systematic review. Ann Am Thorac Soc. 2017;14(9):1457-1468. doi:10.1513/AnnalsATS.201702-148SR

5. Oh ES, Fong TG, Hshieh TT, Inouye SK. Delirium in older persons: advances in diagnosis and treatment. JAMA. 2017;318(12):1161-1174. doi:10.1001/jama.2017.12067

6. Goldwater DS, Dharmarajan K, McEwan BS, Krumholz HM. Is posthospital syndrome a result of hospitalization-induced allostatic overload? J Hosp Med. 2018;13(5). doi:10.12788/jhm.2986

References

1. Institute of Medicine (US) Committee on Sleep Medicine and Research. Sleep Disorders and Sleep Deprivation: An Unmet Public Health Problem. Colten HR, Altevogt BM, editors. National Academies Press (US); 2006.

2. Pilkington S. Causes and consequences of sleep deprivation in hospitalised patients. Nurs Stand. 2013;27(49):350-342. doi:10.7748/ns2013.08.27.49.35.e7649

3. Stewart NH, Arora VM. Sleep in hospitalized older adults. Sleep Med Clin. 2018;13(1):127-135. doi:10.1016/j.jsmc.2017.09.012

4. Altman MT, Knauert MP, Pisani MA. Sleep disturbance after hospitalization and critical illness: a systematic review. Ann Am Thorac Soc. 2017;14(9):1457-1468. doi:10.1513/AnnalsATS.201702-148SR

5. Oh ES, Fong TG, Hshieh TT, Inouye SK. Delirium in older persons: advances in diagnosis and treatment. JAMA. 2017;318(12):1161-1174. doi:10.1001/jama.2017.12067

6. Goldwater DS, Dharmarajan K, McEwan BS, Krumholz HM. Is posthospital syndrome a result of hospitalization-induced allostatic overload? J Hosp Med. 2018;13(5). doi:10.12788/jhm.2986

Issue
Journal of Clinical Outcomes Management - 29(2)
Issue
Journal of Clinical Outcomes Management - 29(2)
Page Number
54-56
Page Number
54-56
Publications
Publications
Topics
Article Type
Display Headline
Using a Real-Time Prediction Algorithm to Improve Sleep in the Hospital
Display Headline
Using a Real-Time Prediction Algorithm to Improve Sleep in the Hospital
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Early Hospital Discharge Following PCI for Patients With STEMI

Article Type
Changed
Fri, 03/25/2022 - 11:40
Display Headline
Early Hospital Discharge Following PCI for Patients With STEMI

Study Overview

Objective: To assess the safety and efficacy of early hospital discharge (EHD) for selected low-risk patients with ST-segment elevation myocardial infarction (STEMI) after primary percutaneous coronary intervention (PCI).

Design: Single-center retrospective analysis of prospectively collected data.

Setting and participants: An EHD group comprised of 600 patients who were discharged at <48 hours between April 2020 and June 2021 was compared to a control group of 700 patients who met EHD criteria but were discharged at >48 hour between October 2018 and June 2021. Patients were selected into the EHD group based on the following criteria, in accordance with recommendations from the European Society of Cardiology, and all patients had close follow-up with a combination of structured telephone follow-up at 48 hours post discharge and virtual visits at 2, 6, and 8 weeks and at 3 months:

  • Left ventricular ejection fraction ≥40%
  • Successful primary PCI (that achieved thrombolysis in myocardial infarction flow grade 3)
  • Absence of severe nonculprit disease requiring further inpatient revascularization
  • Absence of ischemic symptoms post PCI
  • Absence of heart failure or hemodynamic instability
  • Absence of significant arrhythmia (ventricular fibrillation, ventricular tachycardia, or atrial fibrillation or atrial flutter requiring prolonged stay)
  • Mobility with suitable social circumstances for discharge

Main outcome measures: The outcomes measured were length of hospitalization and a composite primary endpoint of cardiovascular mortality and major adverse cardiovascular event (MACE) rates, defined as a composite of all-cause mortality, recurrent MI, and target lesion revascularization.

Main results: The median length of stay of hospitalization in the EHD group was 24.6 hours compared to 56.1 hours in the >48-hour historical control group. On median follow-up of 271 days, the EHD group demonstrated 0% cardiovascular mortality and a MACE rate of 1.2%. This was shown to be noninferior compared to the >48-hour historical control group, which had mortality of 0.7% and a MACE rate of 1.9%.

Conclusion: Selected low-risk STEMI patients can be safely discharged early with appropriate follow-up after primary PCI.

Commentary

Patients with STEMI have a higher risk of postprocedural adverse events such as MI, arrhythmia, or acute heart failure compared to patients with stable ischemic heart disease, and thus are monitored after primary PCI. Although patients were traditionally monitored for 5 to 7 days a few decades ago,1 with improvements in PCI techniques, devices, and pharmacotherapy as well as in door-to-balloon time, the in-hospital complication rates for patients with STEMI have been decreasing, leading to earlier discharge. Currently in the United States, patients are most commonly monitored for 48 to 72 hours post PCI.2 The current guidelines support this practice, recommending early discharge within 48 to 72 hours in selected low-risk patients if adequate follow-up and rehabilitation are arranged.3

Given the COVID-19 pandemic and decreased hospital bed availability, Rathod et al took one step further on the question of whether low-risk STEMI patients with primary PCI can be discharged safely within 48 hours with adequate follow-up. They found that at a median follow-up of 271 days, EHD patients had 2 COVID-related deaths, with 0% cardiovascular mortality and a MACE rate of 1.2%, including deaths, MI, and ischemic revascularization. The median time to discharge was 25 hours. This was noninferior to the >48-hour historical control group, which had mortality of 0.7% (P = 0.349) and a MACE rate of 1.9% (P = .674). The results remained similar after propensity matching for mortality (0.34% vs 0.69%, P = .410) or MACE (1.2% vs 1.9%, P = .342).

This is the first prospective study to systematically assess the safety and feasibility of discharge of low-risk STEMI patients with primary PCI within 48 hours. This study is unique in that it involved the use of telemedicine, including a virtual platform to collect data such as heart rate, blood pressure, and blood glucose, and virtual visits to facilitate follow-up and reduce clinic travel, cost, and potential COVID-19 exposure. The investigators’ protocol included virtual follow-up by cardiology advanced practitioners at 2, 6, and 8 weeks and by an interventional cardiologist at 12 weeks. This protocol led to an increase in patient satisfaction. The study’s main limitation is that it is a single-center trial with a smaller sample size. Further studies are necessary to confirm the safety and feasibility of this approach. In addition, further refinement of the patient selection criteria for EHD should be considered.

Applications for Clinical Practice

In low-risk STEMI patients after primary PCI, discharge within 48 hours may be considered if close follow-up is arranged. However, further studies are necessary to confirm this finding.

—Thai Nguyen, MD, Albert Chan, MD, and Taishi Hirai MD

References

1. Grines CL, Marsalese DL, Brodie B, et al. Safety and cost-effectiveness of early discharge after primary angioplasty in low risk patients with acute myocardial infarction. PAMI-II Investigators. Primary Angioplasty in Myocardial Infarction. J Am Coll Cardiol. 1998;31:967-72. doi:10.1016/s0735-1097(98)00031-x

2. Seto AH, Shroff A, Abu-Fadel M, et al. Length of stay following percutaneous coronary intervention: An expert consensus document update from the society for cardiovascular angiography and interventions. Catheter Cardiovasc Interv. 2018;92:717-731. doi:10.1002/ccd.27637

3. Ibanez B, James S, Agewall S, et al. 2017 ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation. Eur Heart J. 2018;39:119-177. doi:10.1093/eurheartj/ehx393

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(2)
Publications
Topics
Page Number
52-53
Sections
Article PDF
Article PDF

Study Overview

Objective: To assess the safety and efficacy of early hospital discharge (EHD) for selected low-risk patients with ST-segment elevation myocardial infarction (STEMI) after primary percutaneous coronary intervention (PCI).

Design: Single-center retrospective analysis of prospectively collected data.

Setting and participants: An EHD group comprised of 600 patients who were discharged at <48 hours between April 2020 and June 2021 was compared to a control group of 700 patients who met EHD criteria but were discharged at >48 hour between October 2018 and June 2021. Patients were selected into the EHD group based on the following criteria, in accordance with recommendations from the European Society of Cardiology, and all patients had close follow-up with a combination of structured telephone follow-up at 48 hours post discharge and virtual visits at 2, 6, and 8 weeks and at 3 months:

  • Left ventricular ejection fraction ≥40%
  • Successful primary PCI (that achieved thrombolysis in myocardial infarction flow grade 3)
  • Absence of severe nonculprit disease requiring further inpatient revascularization
  • Absence of ischemic symptoms post PCI
  • Absence of heart failure or hemodynamic instability
  • Absence of significant arrhythmia (ventricular fibrillation, ventricular tachycardia, or atrial fibrillation or atrial flutter requiring prolonged stay)
  • Mobility with suitable social circumstances for discharge

Main outcome measures: The outcomes measured were length of hospitalization and a composite primary endpoint of cardiovascular mortality and major adverse cardiovascular event (MACE) rates, defined as a composite of all-cause mortality, recurrent MI, and target lesion revascularization.

Main results: The median length of stay of hospitalization in the EHD group was 24.6 hours compared to 56.1 hours in the >48-hour historical control group. On median follow-up of 271 days, the EHD group demonstrated 0% cardiovascular mortality and a MACE rate of 1.2%. This was shown to be noninferior compared to the >48-hour historical control group, which had mortality of 0.7% and a MACE rate of 1.9%.

Conclusion: Selected low-risk STEMI patients can be safely discharged early with appropriate follow-up after primary PCI.

Commentary

Patients with STEMI have a higher risk of postprocedural adverse events such as MI, arrhythmia, or acute heart failure compared to patients with stable ischemic heart disease, and thus are monitored after primary PCI. Although patients were traditionally monitored for 5 to 7 days a few decades ago,1 with improvements in PCI techniques, devices, and pharmacotherapy as well as in door-to-balloon time, the in-hospital complication rates for patients with STEMI have been decreasing, leading to earlier discharge. Currently in the United States, patients are most commonly monitored for 48 to 72 hours post PCI.2 The current guidelines support this practice, recommending early discharge within 48 to 72 hours in selected low-risk patients if adequate follow-up and rehabilitation are arranged.3

Given the COVID-19 pandemic and decreased hospital bed availability, Rathod et al took one step further on the question of whether low-risk STEMI patients with primary PCI can be discharged safely within 48 hours with adequate follow-up. They found that at a median follow-up of 271 days, EHD patients had 2 COVID-related deaths, with 0% cardiovascular mortality and a MACE rate of 1.2%, including deaths, MI, and ischemic revascularization. The median time to discharge was 25 hours. This was noninferior to the >48-hour historical control group, which had mortality of 0.7% (P = 0.349) and a MACE rate of 1.9% (P = .674). The results remained similar after propensity matching for mortality (0.34% vs 0.69%, P = .410) or MACE (1.2% vs 1.9%, P = .342).

This is the first prospective study to systematically assess the safety and feasibility of discharge of low-risk STEMI patients with primary PCI within 48 hours. This study is unique in that it involved the use of telemedicine, including a virtual platform to collect data such as heart rate, blood pressure, and blood glucose, and virtual visits to facilitate follow-up and reduce clinic travel, cost, and potential COVID-19 exposure. The investigators’ protocol included virtual follow-up by cardiology advanced practitioners at 2, 6, and 8 weeks and by an interventional cardiologist at 12 weeks. This protocol led to an increase in patient satisfaction. The study’s main limitation is that it is a single-center trial with a smaller sample size. Further studies are necessary to confirm the safety and feasibility of this approach. In addition, further refinement of the patient selection criteria for EHD should be considered.

Applications for Clinical Practice

In low-risk STEMI patients after primary PCI, discharge within 48 hours may be considered if close follow-up is arranged. However, further studies are necessary to confirm this finding.

—Thai Nguyen, MD, Albert Chan, MD, and Taishi Hirai MD

Study Overview

Objective: To assess the safety and efficacy of early hospital discharge (EHD) for selected low-risk patients with ST-segment elevation myocardial infarction (STEMI) after primary percutaneous coronary intervention (PCI).

Design: Single-center retrospective analysis of prospectively collected data.

Setting and participants: An EHD group comprised of 600 patients who were discharged at <48 hours between April 2020 and June 2021 was compared to a control group of 700 patients who met EHD criteria but were discharged at >48 hour between October 2018 and June 2021. Patients were selected into the EHD group based on the following criteria, in accordance with recommendations from the European Society of Cardiology, and all patients had close follow-up with a combination of structured telephone follow-up at 48 hours post discharge and virtual visits at 2, 6, and 8 weeks and at 3 months:

  • Left ventricular ejection fraction ≥40%
  • Successful primary PCI (that achieved thrombolysis in myocardial infarction flow grade 3)
  • Absence of severe nonculprit disease requiring further inpatient revascularization
  • Absence of ischemic symptoms post PCI
  • Absence of heart failure or hemodynamic instability
  • Absence of significant arrhythmia (ventricular fibrillation, ventricular tachycardia, or atrial fibrillation or atrial flutter requiring prolonged stay)
  • Mobility with suitable social circumstances for discharge

Main outcome measures: The outcomes measured were length of hospitalization and a composite primary endpoint of cardiovascular mortality and major adverse cardiovascular event (MACE) rates, defined as a composite of all-cause mortality, recurrent MI, and target lesion revascularization.

Main results: The median length of stay of hospitalization in the EHD group was 24.6 hours compared to 56.1 hours in the >48-hour historical control group. On median follow-up of 271 days, the EHD group demonstrated 0% cardiovascular mortality and a MACE rate of 1.2%. This was shown to be noninferior compared to the >48-hour historical control group, which had mortality of 0.7% and a MACE rate of 1.9%.

Conclusion: Selected low-risk STEMI patients can be safely discharged early with appropriate follow-up after primary PCI.

Commentary

Patients with STEMI have a higher risk of postprocedural adverse events such as MI, arrhythmia, or acute heart failure compared to patients with stable ischemic heart disease, and thus are monitored after primary PCI. Although patients were traditionally monitored for 5 to 7 days a few decades ago,1 with improvements in PCI techniques, devices, and pharmacotherapy as well as in door-to-balloon time, the in-hospital complication rates for patients with STEMI have been decreasing, leading to earlier discharge. Currently in the United States, patients are most commonly monitored for 48 to 72 hours post PCI.2 The current guidelines support this practice, recommending early discharge within 48 to 72 hours in selected low-risk patients if adequate follow-up and rehabilitation are arranged.3

Given the COVID-19 pandemic and decreased hospital bed availability, Rathod et al took one step further on the question of whether low-risk STEMI patients with primary PCI can be discharged safely within 48 hours with adequate follow-up. They found that at a median follow-up of 271 days, EHD patients had 2 COVID-related deaths, with 0% cardiovascular mortality and a MACE rate of 1.2%, including deaths, MI, and ischemic revascularization. The median time to discharge was 25 hours. This was noninferior to the >48-hour historical control group, which had mortality of 0.7% (P = 0.349) and a MACE rate of 1.9% (P = .674). The results remained similar after propensity matching for mortality (0.34% vs 0.69%, P = .410) or MACE (1.2% vs 1.9%, P = .342).

This is the first prospective study to systematically assess the safety and feasibility of discharge of low-risk STEMI patients with primary PCI within 48 hours. This study is unique in that it involved the use of telemedicine, including a virtual platform to collect data such as heart rate, blood pressure, and blood glucose, and virtual visits to facilitate follow-up and reduce clinic travel, cost, and potential COVID-19 exposure. The investigators’ protocol included virtual follow-up by cardiology advanced practitioners at 2, 6, and 8 weeks and by an interventional cardiologist at 12 weeks. This protocol led to an increase in patient satisfaction. The study’s main limitation is that it is a single-center trial with a smaller sample size. Further studies are necessary to confirm the safety and feasibility of this approach. In addition, further refinement of the patient selection criteria for EHD should be considered.

Applications for Clinical Practice

In low-risk STEMI patients after primary PCI, discharge within 48 hours may be considered if close follow-up is arranged. However, further studies are necessary to confirm this finding.

—Thai Nguyen, MD, Albert Chan, MD, and Taishi Hirai MD

References

1. Grines CL, Marsalese DL, Brodie B, et al. Safety and cost-effectiveness of early discharge after primary angioplasty in low risk patients with acute myocardial infarction. PAMI-II Investigators. Primary Angioplasty in Myocardial Infarction. J Am Coll Cardiol. 1998;31:967-72. doi:10.1016/s0735-1097(98)00031-x

2. Seto AH, Shroff A, Abu-Fadel M, et al. Length of stay following percutaneous coronary intervention: An expert consensus document update from the society for cardiovascular angiography and interventions. Catheter Cardiovasc Interv. 2018;92:717-731. doi:10.1002/ccd.27637

3. Ibanez B, James S, Agewall S, et al. 2017 ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation. Eur Heart J. 2018;39:119-177. doi:10.1093/eurheartj/ehx393

References

1. Grines CL, Marsalese DL, Brodie B, et al. Safety and cost-effectiveness of early discharge after primary angioplasty in low risk patients with acute myocardial infarction. PAMI-II Investigators. Primary Angioplasty in Myocardial Infarction. J Am Coll Cardiol. 1998;31:967-72. doi:10.1016/s0735-1097(98)00031-x

2. Seto AH, Shroff A, Abu-Fadel M, et al. Length of stay following percutaneous coronary intervention: An expert consensus document update from the society for cardiovascular angiography and interventions. Catheter Cardiovasc Interv. 2018;92:717-731. doi:10.1002/ccd.27637

3. Ibanez B, James S, Agewall S, et al. 2017 ESC Guidelines for the management of acute myocardial infarction in patients presenting with ST-segment elevation. Eur Heart J. 2018;39:119-177. doi:10.1093/eurheartj/ehx393

Issue
Journal of Clinical Outcomes Management - 29(2)
Issue
Journal of Clinical Outcomes Management - 29(2)
Page Number
52-53
Page Number
52-53
Publications
Publications
Topics
Article Type
Display Headline
Early Hospital Discharge Following PCI for Patients With STEMI
Display Headline
Early Hospital Discharge Following PCI for Patients With STEMI
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Intervention in Acute Hospital Unit Reduces Delirium Incidence for Older Adults, Has No Effect on Length of Stay, Other Complications

Article Type
Changed
Wed, 02/09/2022 - 16:23
Display Headline
Intervention in Acute Hospital Unit Reduces Delirium Incidence for Older Adults, Has No Effect on Length of Stay, Other Complications

Study Overview

Objective: To examine the effect of the intervention “Eat Walk Engage,” a program that is designed to more consistently deliver age-friendly principles of care to older individuals in acute medical and surgical wards.

Design: This cluster randomized trial to examine the effect of an intervention in acute medical and surgical wards on older adults was conducted in 8 acute medical and surgical wards in 4 public hospitals in Australia from 2016 to 2017. To be eligible to participate in this trial, wards had to have the following: a patient population with 50% of patients aged 65 years and older; perceived alignment with hospital priorities; and nurse manager agreement to participation. Randomization was stratified by hospital, resulting in 4 wards with the intervention (a general medicine ward, an orthopedic ward, a general surgery ward, and a respiratory medicine ward) and 4 control wards (2 general medicine wards, a respiratory medicine ward, and a general surgery ward). Participants were consecutive inpatients aged 65 years or older who were admitted to the ward for at least 3 consecutive days during the study time period. Exclusion criteria included terminal or critical illness, severe cognitive impairment without a surrogate decision-maker, non-English speaking, or previously enrolled in the trial. Of a total of 453 patients who were eligible from the intervention wards, 188 were excluded and 6 died, yielding 259 participants in the intervention group. There were 413 patients eligible from the control wards, with 139 excluded and 3 deaths, yielding 271 participants in the control group.

Intervention: The intervention, called “Eat Walk Engage,” was developed to target older adults at risk for hospital-associated complications of delirium, functional decline, pressure injuries, falls, and incontinence, and aimed to improve care practices, environment, and culture to support age-friendly principles. This ward-based program delivered a structured improvement intervention through a site facilitator who is a nurse or allied health professional. The site facilitator identified opportunities for improvement using structured assessments of context, patient-experience interviews, and audits of care processes, and engaged an interdisciplinary working group from the intervention wards to participate in an hour-per-month meeting to develop plans for iterative improvements. Each site developed their own intervention plan; examples of interventions include shifting priorities to enable staff to increase the proportion of patients sitting in a chair for meals; designating the patient lounge as a walking destination to increase the proportion of time patients spend mobile; and using orientation boards and small groups to engage older patients in meaningful activities.

Main outcome measures: Study outcome measures included hospital-associated complications for older people, which is a composite of hospital-associated delirium, hospital-associated disability, hospital-associated incontinence, and fall or pressure injury during hospitalization. Delirium was assessed using the 3-minute diagnostic interview for Confusion Assessment Method (3D-CAM); hospital-associated disability was defined as new disability at discharge compared to 2 weeks prior to hospitalization. The primary outcome was defined as incidence of any complications and hospital length of stay. Secondary outcomes included incidence of individual complications, hospital discharge to facility, mortality at 6 months, and readmission for any cause at 6 months.

Main results: Patient characteristics for the intervention and control groups, respectively, were: 47% women with a mean age of 75.9 years (SD, 7.3), and 53% women with a mean age of 78.0 years (SD, 8.2). For the primary outcome, 46.4% of participants in the intervention group experienced any hospital complications compared with 51.8% in the control group (odds ratio [OR], 1.07; 95% CI, 0.71-1.61). The incidence of delirium was lower in the intervention group as compared with the control group (15.9% vs 31.4%; OR, 0.53; 95% CI, 0.31-0.90), while there were no other differences in the incidence rates of other complications. There was also no difference in hospital length of stay; median length of stay in the intervention group was 6 days (interquartile range [IQR], 4-9 days) compared with 7 days in the control group (IQR, 5-10), with an estimated mean difference in length of stay of 0.16 days (95% CI, –0.43 to 0.78 days). There was also no significant difference in mortality or all-cause readmission at 6 months.

Conclusion: The intervention “Eat Walk Engage” did not reduce hospital-associated complications overall or hospital length of stay, but it did reduce the incidence of hospital-associated delirium.

 

 

Commentary

Older adults, often with reduced physiologic reserve, when admitted to the hospital with an acute illness may be vulnerable to potential hazards of hospitalization, such as complications from prolonged periods of immobility, pressure injury, and delirium.1 Models of care in the inpatient setting to reduce these hazards, including the Acute Care for the Elderly model and the Mobile Acute Care for the Elderly Team model, have been examined in clinical trials.2,3 Specifically, models of care to prevent and treat delirium have been developed and tested over the past decade.4 The effect of these models in improving function, reducing complications, and reducing delirium incidence has been well documented. The present study adds to the literature by testing a model that utilizes implementation science methods to take into account real-world settings. In contrast with prior models-of-care studies, the implementation of the intervention at each ward was not prescriptive, but rather was developed in each ward in an iterative manner with stakeholder input. The advantage of this approach is that engagement of stakeholders at each intervention ward obtains buy-in from staff, mobilizing staff in a way that a prescriptive model of care may not; this ultimately may lead to longer-lasting change. The iterative approach also allows for the intervention to be adapted to conditions and settings over time. Other studies have taken this approach of using implementation science to drive change.5 Although the intervention in the present study failed to improve the primary outcome, it did reduce the incidence of delirium, which is a significant outcome and one that may confer considerable benefits to older adults under the model’s care.

A limitation of the intervention’s nonprescriptive approach is that, because of the variation of the interventions across sites, it is difficult to discern what elements drove the clinical outcomes. In addition, it would be challenging to consider what aspects of the intervention did not work should refinement or changes be needed. How one may measure fidelity to the intervention or how well a site implements the intervention and its relationship with clinical outcomes will need to be examined further.

Application for Clinical Practice

Clinicians look to effective models of care to improve clinical outcomes for older adults in the hospital. The intervention described in this study offers a real-world approach that may need less upfront investment than other recently studied models, such as the Acute Care for the Elderly model, which requires structural and staffing enhancements. Clinicians and health system leaders may consider implementing this model to improve the care delivered to older adults in the hospital as it may help reduce the incidence of delirium among the older adults they serve.

–William W. Hung, MD, MPH

Disclosures: None.

 

References

1. Creditor MC. Hazards of hospitalization of the elderly. Ann Intern Med. 1993;118(3):219-223. doi:10.7326/0003-4819-118-3-199302010-00011

2. Fox MT, Persaud M, Maimets I, et al. Effectiveness of acute geriatric unit care using acute care for elders components: a systematic review and meta-analysis. J Am Geriatr Soc. 2012;60(12):2237-2245. doi:10.1111/jgs.12028

3. Hung WW, Ross JS, Farber J, Siu AL. Evaluation of the Mobile Acute Care of the Elderly (MACE) service. JAMA Intern Med. 2013;173(11):990-996. doi:10.1001/jamainternmed.2013.478

4. Hshieh TT, Yang T, Gartaganis SL, Yue J, Inouye SK. Hospital Elder Life Program: systematic review and meta-analysis of effectiveness. Am J Geriatr Psychiatry. 2018;26(10):1015-1033. doi:10.1016/j.jagp.2018.06.007

5. Naughton C, Cummins H, de Foubert M, et al. Implementation of the Frailty Care Bundle (FCB) to promote mobilisation, nutrition and cognitive engagement in older people in acute care settings: protocol for an implementation science study. [version 1; peer review: 1 approved]. HRB Open Res. 2022;5:3. doi:10.12688/hrbopenres.134731

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(1)
Publications
Topics
Page Number
1-3
Sections
Article PDF
Article PDF

Study Overview

Objective: To examine the effect of the intervention “Eat Walk Engage,” a program that is designed to more consistently deliver age-friendly principles of care to older individuals in acute medical and surgical wards.

Design: This cluster randomized trial to examine the effect of an intervention in acute medical and surgical wards on older adults was conducted in 8 acute medical and surgical wards in 4 public hospitals in Australia from 2016 to 2017. To be eligible to participate in this trial, wards had to have the following: a patient population with 50% of patients aged 65 years and older; perceived alignment with hospital priorities; and nurse manager agreement to participation. Randomization was stratified by hospital, resulting in 4 wards with the intervention (a general medicine ward, an orthopedic ward, a general surgery ward, and a respiratory medicine ward) and 4 control wards (2 general medicine wards, a respiratory medicine ward, and a general surgery ward). Participants were consecutive inpatients aged 65 years or older who were admitted to the ward for at least 3 consecutive days during the study time period. Exclusion criteria included terminal or critical illness, severe cognitive impairment without a surrogate decision-maker, non-English speaking, or previously enrolled in the trial. Of a total of 453 patients who were eligible from the intervention wards, 188 were excluded and 6 died, yielding 259 participants in the intervention group. There were 413 patients eligible from the control wards, with 139 excluded and 3 deaths, yielding 271 participants in the control group.

Intervention: The intervention, called “Eat Walk Engage,” was developed to target older adults at risk for hospital-associated complications of delirium, functional decline, pressure injuries, falls, and incontinence, and aimed to improve care practices, environment, and culture to support age-friendly principles. This ward-based program delivered a structured improvement intervention through a site facilitator who is a nurse or allied health professional. The site facilitator identified opportunities for improvement using structured assessments of context, patient-experience interviews, and audits of care processes, and engaged an interdisciplinary working group from the intervention wards to participate in an hour-per-month meeting to develop plans for iterative improvements. Each site developed their own intervention plan; examples of interventions include shifting priorities to enable staff to increase the proportion of patients sitting in a chair for meals; designating the patient lounge as a walking destination to increase the proportion of time patients spend mobile; and using orientation boards and small groups to engage older patients in meaningful activities.

Main outcome measures: Study outcome measures included hospital-associated complications for older people, which is a composite of hospital-associated delirium, hospital-associated disability, hospital-associated incontinence, and fall or pressure injury during hospitalization. Delirium was assessed using the 3-minute diagnostic interview for Confusion Assessment Method (3D-CAM); hospital-associated disability was defined as new disability at discharge compared to 2 weeks prior to hospitalization. The primary outcome was defined as incidence of any complications and hospital length of stay. Secondary outcomes included incidence of individual complications, hospital discharge to facility, mortality at 6 months, and readmission for any cause at 6 months.

Main results: Patient characteristics for the intervention and control groups, respectively, were: 47% women with a mean age of 75.9 years (SD, 7.3), and 53% women with a mean age of 78.0 years (SD, 8.2). For the primary outcome, 46.4% of participants in the intervention group experienced any hospital complications compared with 51.8% in the control group (odds ratio [OR], 1.07; 95% CI, 0.71-1.61). The incidence of delirium was lower in the intervention group as compared with the control group (15.9% vs 31.4%; OR, 0.53; 95% CI, 0.31-0.90), while there were no other differences in the incidence rates of other complications. There was also no difference in hospital length of stay; median length of stay in the intervention group was 6 days (interquartile range [IQR], 4-9 days) compared with 7 days in the control group (IQR, 5-10), with an estimated mean difference in length of stay of 0.16 days (95% CI, –0.43 to 0.78 days). There was also no significant difference in mortality or all-cause readmission at 6 months.

Conclusion: The intervention “Eat Walk Engage” did not reduce hospital-associated complications overall or hospital length of stay, but it did reduce the incidence of hospital-associated delirium.

 

 

Commentary

Older adults, often with reduced physiologic reserve, when admitted to the hospital with an acute illness may be vulnerable to potential hazards of hospitalization, such as complications from prolonged periods of immobility, pressure injury, and delirium.1 Models of care in the inpatient setting to reduce these hazards, including the Acute Care for the Elderly model and the Mobile Acute Care for the Elderly Team model, have been examined in clinical trials.2,3 Specifically, models of care to prevent and treat delirium have been developed and tested over the past decade.4 The effect of these models in improving function, reducing complications, and reducing delirium incidence has been well documented. The present study adds to the literature by testing a model that utilizes implementation science methods to take into account real-world settings. In contrast with prior models-of-care studies, the implementation of the intervention at each ward was not prescriptive, but rather was developed in each ward in an iterative manner with stakeholder input. The advantage of this approach is that engagement of stakeholders at each intervention ward obtains buy-in from staff, mobilizing staff in a way that a prescriptive model of care may not; this ultimately may lead to longer-lasting change. The iterative approach also allows for the intervention to be adapted to conditions and settings over time. Other studies have taken this approach of using implementation science to drive change.5 Although the intervention in the present study failed to improve the primary outcome, it did reduce the incidence of delirium, which is a significant outcome and one that may confer considerable benefits to older adults under the model’s care.

A limitation of the intervention’s nonprescriptive approach is that, because of the variation of the interventions across sites, it is difficult to discern what elements drove the clinical outcomes. In addition, it would be challenging to consider what aspects of the intervention did not work should refinement or changes be needed. How one may measure fidelity to the intervention or how well a site implements the intervention and its relationship with clinical outcomes will need to be examined further.

Application for Clinical Practice

Clinicians look to effective models of care to improve clinical outcomes for older adults in the hospital. The intervention described in this study offers a real-world approach that may need less upfront investment than other recently studied models, such as the Acute Care for the Elderly model, which requires structural and staffing enhancements. Clinicians and health system leaders may consider implementing this model to improve the care delivered to older adults in the hospital as it may help reduce the incidence of delirium among the older adults they serve.

–William W. Hung, MD, MPH

Disclosures: None.

 

Study Overview

Objective: To examine the effect of the intervention “Eat Walk Engage,” a program that is designed to more consistently deliver age-friendly principles of care to older individuals in acute medical and surgical wards.

Design: This cluster randomized trial to examine the effect of an intervention in acute medical and surgical wards on older adults was conducted in 8 acute medical and surgical wards in 4 public hospitals in Australia from 2016 to 2017. To be eligible to participate in this trial, wards had to have the following: a patient population with 50% of patients aged 65 years and older; perceived alignment with hospital priorities; and nurse manager agreement to participation. Randomization was stratified by hospital, resulting in 4 wards with the intervention (a general medicine ward, an orthopedic ward, a general surgery ward, and a respiratory medicine ward) and 4 control wards (2 general medicine wards, a respiratory medicine ward, and a general surgery ward). Participants were consecutive inpatients aged 65 years or older who were admitted to the ward for at least 3 consecutive days during the study time period. Exclusion criteria included terminal or critical illness, severe cognitive impairment without a surrogate decision-maker, non-English speaking, or previously enrolled in the trial. Of a total of 453 patients who were eligible from the intervention wards, 188 were excluded and 6 died, yielding 259 participants in the intervention group. There were 413 patients eligible from the control wards, with 139 excluded and 3 deaths, yielding 271 participants in the control group.

Intervention: The intervention, called “Eat Walk Engage,” was developed to target older adults at risk for hospital-associated complications of delirium, functional decline, pressure injuries, falls, and incontinence, and aimed to improve care practices, environment, and culture to support age-friendly principles. This ward-based program delivered a structured improvement intervention through a site facilitator who is a nurse or allied health professional. The site facilitator identified opportunities for improvement using structured assessments of context, patient-experience interviews, and audits of care processes, and engaged an interdisciplinary working group from the intervention wards to participate in an hour-per-month meeting to develop plans for iterative improvements. Each site developed their own intervention plan; examples of interventions include shifting priorities to enable staff to increase the proportion of patients sitting in a chair for meals; designating the patient lounge as a walking destination to increase the proportion of time patients spend mobile; and using orientation boards and small groups to engage older patients in meaningful activities.

Main outcome measures: Study outcome measures included hospital-associated complications for older people, which is a composite of hospital-associated delirium, hospital-associated disability, hospital-associated incontinence, and fall or pressure injury during hospitalization. Delirium was assessed using the 3-minute diagnostic interview for Confusion Assessment Method (3D-CAM); hospital-associated disability was defined as new disability at discharge compared to 2 weeks prior to hospitalization. The primary outcome was defined as incidence of any complications and hospital length of stay. Secondary outcomes included incidence of individual complications, hospital discharge to facility, mortality at 6 months, and readmission for any cause at 6 months.

Main results: Patient characteristics for the intervention and control groups, respectively, were: 47% women with a mean age of 75.9 years (SD, 7.3), and 53% women with a mean age of 78.0 years (SD, 8.2). For the primary outcome, 46.4% of participants in the intervention group experienced any hospital complications compared with 51.8% in the control group (odds ratio [OR], 1.07; 95% CI, 0.71-1.61). The incidence of delirium was lower in the intervention group as compared with the control group (15.9% vs 31.4%; OR, 0.53; 95% CI, 0.31-0.90), while there were no other differences in the incidence rates of other complications. There was also no difference in hospital length of stay; median length of stay in the intervention group was 6 days (interquartile range [IQR], 4-9 days) compared with 7 days in the control group (IQR, 5-10), with an estimated mean difference in length of stay of 0.16 days (95% CI, –0.43 to 0.78 days). There was also no significant difference in mortality or all-cause readmission at 6 months.

Conclusion: The intervention “Eat Walk Engage” did not reduce hospital-associated complications overall or hospital length of stay, but it did reduce the incidence of hospital-associated delirium.

 

 

Commentary

Older adults, often with reduced physiologic reserve, when admitted to the hospital with an acute illness may be vulnerable to potential hazards of hospitalization, such as complications from prolonged periods of immobility, pressure injury, and delirium.1 Models of care in the inpatient setting to reduce these hazards, including the Acute Care for the Elderly model and the Mobile Acute Care for the Elderly Team model, have been examined in clinical trials.2,3 Specifically, models of care to prevent and treat delirium have been developed and tested over the past decade.4 The effect of these models in improving function, reducing complications, and reducing delirium incidence has been well documented. The present study adds to the literature by testing a model that utilizes implementation science methods to take into account real-world settings. In contrast with prior models-of-care studies, the implementation of the intervention at each ward was not prescriptive, but rather was developed in each ward in an iterative manner with stakeholder input. The advantage of this approach is that engagement of stakeholders at each intervention ward obtains buy-in from staff, mobilizing staff in a way that a prescriptive model of care may not; this ultimately may lead to longer-lasting change. The iterative approach also allows for the intervention to be adapted to conditions and settings over time. Other studies have taken this approach of using implementation science to drive change.5 Although the intervention in the present study failed to improve the primary outcome, it did reduce the incidence of delirium, which is a significant outcome and one that may confer considerable benefits to older adults under the model’s care.

A limitation of the intervention’s nonprescriptive approach is that, because of the variation of the interventions across sites, it is difficult to discern what elements drove the clinical outcomes. In addition, it would be challenging to consider what aspects of the intervention did not work should refinement or changes be needed. How one may measure fidelity to the intervention or how well a site implements the intervention and its relationship with clinical outcomes will need to be examined further.

Application for Clinical Practice

Clinicians look to effective models of care to improve clinical outcomes for older adults in the hospital. The intervention described in this study offers a real-world approach that may need less upfront investment than other recently studied models, such as the Acute Care for the Elderly model, which requires structural and staffing enhancements. Clinicians and health system leaders may consider implementing this model to improve the care delivered to older adults in the hospital as it may help reduce the incidence of delirium among the older adults they serve.

–William W. Hung, MD, MPH

Disclosures: None.

 

References

1. Creditor MC. Hazards of hospitalization of the elderly. Ann Intern Med. 1993;118(3):219-223. doi:10.7326/0003-4819-118-3-199302010-00011

2. Fox MT, Persaud M, Maimets I, et al. Effectiveness of acute geriatric unit care using acute care for elders components: a systematic review and meta-analysis. J Am Geriatr Soc. 2012;60(12):2237-2245. doi:10.1111/jgs.12028

3. Hung WW, Ross JS, Farber J, Siu AL. Evaluation of the Mobile Acute Care of the Elderly (MACE) service. JAMA Intern Med. 2013;173(11):990-996. doi:10.1001/jamainternmed.2013.478

4. Hshieh TT, Yang T, Gartaganis SL, Yue J, Inouye SK. Hospital Elder Life Program: systematic review and meta-analysis of effectiveness. Am J Geriatr Psychiatry. 2018;26(10):1015-1033. doi:10.1016/j.jagp.2018.06.007

5. Naughton C, Cummins H, de Foubert M, et al. Implementation of the Frailty Care Bundle (FCB) to promote mobilisation, nutrition and cognitive engagement in older people in acute care settings: protocol for an implementation science study. [version 1; peer review: 1 approved]. HRB Open Res. 2022;5:3. doi:10.12688/hrbopenres.134731

References

1. Creditor MC. Hazards of hospitalization of the elderly. Ann Intern Med. 1993;118(3):219-223. doi:10.7326/0003-4819-118-3-199302010-00011

2. Fox MT, Persaud M, Maimets I, et al. Effectiveness of acute geriatric unit care using acute care for elders components: a systematic review and meta-analysis. J Am Geriatr Soc. 2012;60(12):2237-2245. doi:10.1111/jgs.12028

3. Hung WW, Ross JS, Farber J, Siu AL. Evaluation of the Mobile Acute Care of the Elderly (MACE) service. JAMA Intern Med. 2013;173(11):990-996. doi:10.1001/jamainternmed.2013.478

4. Hshieh TT, Yang T, Gartaganis SL, Yue J, Inouye SK. Hospital Elder Life Program: systematic review and meta-analysis of effectiveness. Am J Geriatr Psychiatry. 2018;26(10):1015-1033. doi:10.1016/j.jagp.2018.06.007

5. Naughton C, Cummins H, de Foubert M, et al. Implementation of the Frailty Care Bundle (FCB) to promote mobilisation, nutrition and cognitive engagement in older people in acute care settings: protocol for an implementation science study. [version 1; peer review: 1 approved]. HRB Open Res. 2022;5:3. doi:10.12688/hrbopenres.134731

Issue
Journal of Clinical Outcomes Management - 29(1)
Issue
Journal of Clinical Outcomes Management - 29(1)
Page Number
1-3
Page Number
1-3
Publications
Publications
Topics
Article Type
Display Headline
Intervention in Acute Hospital Unit Reduces Delirium Incidence for Older Adults, Has No Effect on Length of Stay, Other Complications
Display Headline
Intervention in Acute Hospital Unit Reduces Delirium Incidence for Older Adults, Has No Effect on Length of Stay, Other Complications
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Comparison of Fractional Flow Reserve–Guided PCI and Coronary Bypass Surgery in 3-Vessel Disease

Article Type
Changed
Wed, 02/09/2022 - 16:24
Display Headline
Comparison of Fractional Flow Reserve–Guided PCI and Coronary Bypass Surgery in 3-Vessel Disease

Study Overview

Objective: To determine whether fractional flow reserve (FFR)–guided percutaneous coronary intervention (PCI) is noninferior to coronary-artery bypass grafting (CABG) in patients with 3-vessel coronary artery disease (CAD).

Design: Investigator-initiated, multicenter, international, randomized, controlled trial conducted at 48 sites.

Setting and participants: A total of 1500 patients with angiographically identified 3-vessel CAD not involving the left main coronary artery were randomly assigned to receive FFR-guided PCI with zotarolimus-eluting stents or CABG in a 1:1 ratio. Randomization was stratified according to trial site and diabetes status.

Main outcome measures: The primary end point was major adverse cardiac or cerebrovascular event, defined as death from any cause, myocardial infarction (MI), stroke, or repeat revascularization. The secondary end point was defined as a composite of death, MI, or stroke.

Results: At 1 year, the incidence of the composite primary end point was 10.6% for patients with FFR-guided PCI and 6.9% for patients with CABG (hazard ratio [HR], 1.5; 95% CI, 1.1-2.2; P = .35 for noninferiority), which was not consistent with noninferiority of FFR-guided PCI compared to CABG. The secondary end point occurred in 7.3% of patients in the FFR-guided PCI group compared with 5.2% in the CABG group (HR, 1.4; 95% CI, 0.9-2.1). Individual findings for the outcomes comprising the primary end point for the FFR-guided PCI group vs the CABG group were as follows: death, 1.6% vs 0.9%; MI, 5.2% vs 3.5%; stroke, 0.9% vs 1.1%; and repeat revascularization, 5.9% vs 3.9%. The CABG group had more extended hospital stays and higher incidences of major bleeding, arrhythmia, acute kidney injury, and rehospitalization within 30 days than the FFR-guided PCI group.

Conclusion: FFR-guided PCI was not found to be noninferior to CABG with respect to the incidence of a composite of death, MI, stroke, or repeat revascularization at 1 year.

Commentary

Revascularization for multivessel CAD can be performed by CABG or PCI. Previous studies have shown superior outcomes in patients with multivessel CAD who were treated with CABG compared to PCI.1-3 The Synergy between PCI with Taxus and Cardiac Surgery (SYNTAX) trial, which compared CABG to PCI in patients with multivessel disease or unprotected left main CAD, stratified the anatomic complexity based on SYNTAX score and found that patients with higher anatomic complexity with a high SYNTAX score derive larger benefit from CABG compared to PCI.4 Therefore, the current guidelines favor CABG over PCI in patients with severe 3-vessel disease, except for patients with a lower SYNTAX score (<22) without diabetes.5,6 However, except for a smaller size study,3 the previous trials that led to this recommendation used mostly first-generation drug-eluting stents and have not evaluated second-generation stents that have lower rates of in-stent restenosis and stent thrombosis. In addition, there have been significant improvements in PCI techniques since the study period, including the adoption of a radial approach and superior adjunct pharmacologic therapy. Furthermore, previous studies have not systematically investigated the use of FFR-guided PCI, which has been shown to be superior to angiography-guided PCI or medical treatment alone.7-9

 

 

In this context, Fearon and the FAME-3 trial investigators studied the use of FFR-guided PCI with second-generation zotarolimus drug-eluting stents compared to CABG in patients with 3-vessel CAD. They randomized patients with angiographically identified 3-vessel CAD in a 1:1 ratio to receive FFR-guided PCI or CABG at 48 sites internationally. Patients with left main CAD, recent ST-elevation MI, cardiogenic shock, and left-ventricular ejection fraction <30% were excluded. The study results (composite primary end point incidence of 10.6% for patients with FFR-guided PCI vs 6.9% in the CABG group [HR, 1.5; 95% CI, 1.1-2.2; P = 0.35 for noninferiority]) showed that FFR-guided PCI did not meet the noninferiority criterion.

Although the FAME-3 study is an important study, there are a few points to consider. First, 24% of the lesions had a FFR measured at >0.80. The benefit of FFR-guided PCI lies in the number of lesions that are safely deferred compared to angiography-guided PCI. The small number of deferred lesions could have limited the benefit of FFR guidance compared with angiography. Second, this study did not include all comers who had angiographic 3-vessel disease. Patients who had FFR assessment of moderate lesions at the time of diagnostic angiogram and were found to have FFR >0.80 or were deemed single- or 2-vessel disease were likely treated with PCI. Therefore, as the authors point out, the patients included in this study may have been skewed to a higher-risk population compared to previous studies.

Third, the study may not reflect contemporary interventional practice, as the use of intravascular ultrasound was very low (12%). Intravascular ultrasound–guided PCI has been associated with increased luminal gain and improved outcomes compared to angiography-guided PCI.10 Although 20% of the patients in each arm were found to have chronic total occlusions, the completeness of revascularization has not yet been reported. It is possible that the PCI arm had fewer complete revascularizations, which has been shown in previous observational studies to be associated with worse clinical outcomes.11,12

Although the current guidelines favor CABG over PCI in patients with multivessel disease, this recommendation is stratified by anatomic complexity.6 In fact, in the European guidelines, CABG and PCI are both class I recommendations for the treatment of 3-vessel disease with low SYNTAX score in patients without diabetes.5 Although the FAME-3 study failed to show noninferiority in the overall population, when stratified by the SYNTAX score, the major adverse cardiac event rate for the PCI group was numerically lower than that of the CABG group. The results from the FAME-3 study are overall in line with the previous studies and the current guidelines. Future studies are necessary to assess the outcomes of multivessel PCI compared to CABG using the most contemporary interventional practice and achieving complete revascularization in the PCI arm.

Applications for Clinical Practice

In patients with 3-vessel disease, FFR-guided PCI was not found to be noninferior to CABG; this finding is consistent with previous studies.

—Shubham Kanake, BS, Chirag Bavishi, MD, MPH, and Taishi Hirai, MD, University of Missouri, Columbia, MO

Disclosures: None.

References

1. Farkouh ME, Domanski M, Sleeper LA, et al; FREEDOM Trial Investigators. Strategies for multivessel revascularization in patients with diabetes. N Engl J Med. 2012;367(25):2375-2384. doi:10.1056/NEJMoa1211585

2. Serruys PW, Morice MC, Kappetein AP, et al; SYNTAX Investigators. Percutaneous coronary intervention versus coronary-artery bypass grafting for severe coronary artery disease. N Engl J Med. 2009;360(10):961-972. doi:10.1056/NEJMoa0804626

3. Park SJ, Ahn JM, Kim YH, et al; BEST Trial Investigators. Trial of everolimus-eluting stents or bypass surgery for coronary disease. N Engl J Med. 2015;372(13):1204-1212. doi:10.1056/NEJMoa1415447

4. Stone GW, Kappetein AP, Sabik JF, et al; EXCEL Trial Investigators. Five-year outcomes after PCI or CABG for left main coronary disease. N Engl J Med. 2019; 381(19):1820-1830. doi:10.1056/NEJMoa1909406

5. Neumann FJ, Sousa-Uva M, Ahlsson A, et al; ESC Scientific Document Group. 2018 ESC/EACTS guidelines on myocardial revascularization. Eur Heart J. 2019;40(2):87-165. doi:10.1093/eurheartj/ehy394

6. Writing Committee Members, Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

7. Tonino PAL, De Bruyne B, Pijls NHJ, et al; FAME Study Investigators. Fractional flow reserve versus angiography for guiding percutaneous coronary intervention. N Engl J Med. 2009;360(3):213-224. doi:10.1056/NEJMoa0807611

8. De Bruyne B, Fearon WF, Pijls NHJ, et al; FAME 2 Trial Investigators. Fractional flow reserve-guided PCI for stable coronary artery disease. N Engl J Med. 2014;371(13):1208-1217. doi:10.1056/NEJMoa1408758

9. Xaplanteris P, Fournier S, Pijls NHJ, et al; FAME 2 Investigators. Five-year outcomes with PCI guided by fractional flow reserve. N Engl J Med. 2018;379(3):250-259. doi:10.1056/NEJMoa1803538

10. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: The ULTIMATE trial. J Am Coll Cardiol. 2018;72:3126-3137. doi:10.1016/j.jacc.2018.09.013

11. Garcia S, Sandoval Y, Roukoz H, et al. Outcomes after complete versus incomplete revascularization of patients with multivessel coronary artery disease: a meta-analysis of 89,883 patients enrolled in randomized clinical trials and observational studies. J Am Coll Cardiol. 2013;62:1421-1431. doi:10.1016/j.jacc.2013.05.033

12. Farooq V, Serruys PW, Garcia-Garcia HM et al. The negative impact of incomplete angiographic revascularization on clinical outcomes and its association with total occlusions: the SYNTAX (Synergy Between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery) trial. J Am Coll Cardiol. 2013;61:282-294. doi: 10.1016/j.jacc.2012.10.017

Article PDF
Issue
Journal of Clinical Outcomes Management - 29(1)
Publications
Topics
Page Number
1-3
Sections
Article PDF
Article PDF

Study Overview

Objective: To determine whether fractional flow reserve (FFR)–guided percutaneous coronary intervention (PCI) is noninferior to coronary-artery bypass grafting (CABG) in patients with 3-vessel coronary artery disease (CAD).

Design: Investigator-initiated, multicenter, international, randomized, controlled trial conducted at 48 sites.

Setting and participants: A total of 1500 patients with angiographically identified 3-vessel CAD not involving the left main coronary artery were randomly assigned to receive FFR-guided PCI with zotarolimus-eluting stents or CABG in a 1:1 ratio. Randomization was stratified according to trial site and diabetes status.

Main outcome measures: The primary end point was major adverse cardiac or cerebrovascular event, defined as death from any cause, myocardial infarction (MI), stroke, or repeat revascularization. The secondary end point was defined as a composite of death, MI, or stroke.

Results: At 1 year, the incidence of the composite primary end point was 10.6% for patients with FFR-guided PCI and 6.9% for patients with CABG (hazard ratio [HR], 1.5; 95% CI, 1.1-2.2; P = .35 for noninferiority), which was not consistent with noninferiority of FFR-guided PCI compared to CABG. The secondary end point occurred in 7.3% of patients in the FFR-guided PCI group compared with 5.2% in the CABG group (HR, 1.4; 95% CI, 0.9-2.1). Individual findings for the outcomes comprising the primary end point for the FFR-guided PCI group vs the CABG group were as follows: death, 1.6% vs 0.9%; MI, 5.2% vs 3.5%; stroke, 0.9% vs 1.1%; and repeat revascularization, 5.9% vs 3.9%. The CABG group had more extended hospital stays and higher incidences of major bleeding, arrhythmia, acute kidney injury, and rehospitalization within 30 days than the FFR-guided PCI group.

Conclusion: FFR-guided PCI was not found to be noninferior to CABG with respect to the incidence of a composite of death, MI, stroke, or repeat revascularization at 1 year.

Commentary

Revascularization for multivessel CAD can be performed by CABG or PCI. Previous studies have shown superior outcomes in patients with multivessel CAD who were treated with CABG compared to PCI.1-3 The Synergy between PCI with Taxus and Cardiac Surgery (SYNTAX) trial, which compared CABG to PCI in patients with multivessel disease or unprotected left main CAD, stratified the anatomic complexity based on SYNTAX score and found that patients with higher anatomic complexity with a high SYNTAX score derive larger benefit from CABG compared to PCI.4 Therefore, the current guidelines favor CABG over PCI in patients with severe 3-vessel disease, except for patients with a lower SYNTAX score (<22) without diabetes.5,6 However, except for a smaller size study,3 the previous trials that led to this recommendation used mostly first-generation drug-eluting stents and have not evaluated second-generation stents that have lower rates of in-stent restenosis and stent thrombosis. In addition, there have been significant improvements in PCI techniques since the study period, including the adoption of a radial approach and superior adjunct pharmacologic therapy. Furthermore, previous studies have not systematically investigated the use of FFR-guided PCI, which has been shown to be superior to angiography-guided PCI or medical treatment alone.7-9

 

 

In this context, Fearon and the FAME-3 trial investigators studied the use of FFR-guided PCI with second-generation zotarolimus drug-eluting stents compared to CABG in patients with 3-vessel CAD. They randomized patients with angiographically identified 3-vessel CAD in a 1:1 ratio to receive FFR-guided PCI or CABG at 48 sites internationally. Patients with left main CAD, recent ST-elevation MI, cardiogenic shock, and left-ventricular ejection fraction <30% were excluded. The study results (composite primary end point incidence of 10.6% for patients with FFR-guided PCI vs 6.9% in the CABG group [HR, 1.5; 95% CI, 1.1-2.2; P = 0.35 for noninferiority]) showed that FFR-guided PCI did not meet the noninferiority criterion.

Although the FAME-3 study is an important study, there are a few points to consider. First, 24% of the lesions had a FFR measured at >0.80. The benefit of FFR-guided PCI lies in the number of lesions that are safely deferred compared to angiography-guided PCI. The small number of deferred lesions could have limited the benefit of FFR guidance compared with angiography. Second, this study did not include all comers who had angiographic 3-vessel disease. Patients who had FFR assessment of moderate lesions at the time of diagnostic angiogram and were found to have FFR >0.80 or were deemed single- or 2-vessel disease were likely treated with PCI. Therefore, as the authors point out, the patients included in this study may have been skewed to a higher-risk population compared to previous studies.

Third, the study may not reflect contemporary interventional practice, as the use of intravascular ultrasound was very low (12%). Intravascular ultrasound–guided PCI has been associated with increased luminal gain and improved outcomes compared to angiography-guided PCI.10 Although 20% of the patients in each arm were found to have chronic total occlusions, the completeness of revascularization has not yet been reported. It is possible that the PCI arm had fewer complete revascularizations, which has been shown in previous observational studies to be associated with worse clinical outcomes.11,12

Although the current guidelines favor CABG over PCI in patients with multivessel disease, this recommendation is stratified by anatomic complexity.6 In fact, in the European guidelines, CABG and PCI are both class I recommendations for the treatment of 3-vessel disease with low SYNTAX score in patients without diabetes.5 Although the FAME-3 study failed to show noninferiority in the overall population, when stratified by the SYNTAX score, the major adverse cardiac event rate for the PCI group was numerically lower than that of the CABG group. The results from the FAME-3 study are overall in line with the previous studies and the current guidelines. Future studies are necessary to assess the outcomes of multivessel PCI compared to CABG using the most contemporary interventional practice and achieving complete revascularization in the PCI arm.

Applications for Clinical Practice

In patients with 3-vessel disease, FFR-guided PCI was not found to be noninferior to CABG; this finding is consistent with previous studies.

—Shubham Kanake, BS, Chirag Bavishi, MD, MPH, and Taishi Hirai, MD, University of Missouri, Columbia, MO

Disclosures: None.

Study Overview

Objective: To determine whether fractional flow reserve (FFR)–guided percutaneous coronary intervention (PCI) is noninferior to coronary-artery bypass grafting (CABG) in patients with 3-vessel coronary artery disease (CAD).

Design: Investigator-initiated, multicenter, international, randomized, controlled trial conducted at 48 sites.

Setting and participants: A total of 1500 patients with angiographically identified 3-vessel CAD not involving the left main coronary artery were randomly assigned to receive FFR-guided PCI with zotarolimus-eluting stents or CABG in a 1:1 ratio. Randomization was stratified according to trial site and diabetes status.

Main outcome measures: The primary end point was major adverse cardiac or cerebrovascular event, defined as death from any cause, myocardial infarction (MI), stroke, or repeat revascularization. The secondary end point was defined as a composite of death, MI, or stroke.

Results: At 1 year, the incidence of the composite primary end point was 10.6% for patients with FFR-guided PCI and 6.9% for patients with CABG (hazard ratio [HR], 1.5; 95% CI, 1.1-2.2; P = .35 for noninferiority), which was not consistent with noninferiority of FFR-guided PCI compared to CABG. The secondary end point occurred in 7.3% of patients in the FFR-guided PCI group compared with 5.2% in the CABG group (HR, 1.4; 95% CI, 0.9-2.1). Individual findings for the outcomes comprising the primary end point for the FFR-guided PCI group vs the CABG group were as follows: death, 1.6% vs 0.9%; MI, 5.2% vs 3.5%; stroke, 0.9% vs 1.1%; and repeat revascularization, 5.9% vs 3.9%. The CABG group had more extended hospital stays and higher incidences of major bleeding, arrhythmia, acute kidney injury, and rehospitalization within 30 days than the FFR-guided PCI group.

Conclusion: FFR-guided PCI was not found to be noninferior to CABG with respect to the incidence of a composite of death, MI, stroke, or repeat revascularization at 1 year.

Commentary

Revascularization for multivessel CAD can be performed by CABG or PCI. Previous studies have shown superior outcomes in patients with multivessel CAD who were treated with CABG compared to PCI.1-3 The Synergy between PCI with Taxus and Cardiac Surgery (SYNTAX) trial, which compared CABG to PCI in patients with multivessel disease or unprotected left main CAD, stratified the anatomic complexity based on SYNTAX score and found that patients with higher anatomic complexity with a high SYNTAX score derive larger benefit from CABG compared to PCI.4 Therefore, the current guidelines favor CABG over PCI in patients with severe 3-vessel disease, except for patients with a lower SYNTAX score (<22) without diabetes.5,6 However, except for a smaller size study,3 the previous trials that led to this recommendation used mostly first-generation drug-eluting stents and have not evaluated second-generation stents that have lower rates of in-stent restenosis and stent thrombosis. In addition, there have been significant improvements in PCI techniques since the study period, including the adoption of a radial approach and superior adjunct pharmacologic therapy. Furthermore, previous studies have not systematically investigated the use of FFR-guided PCI, which has been shown to be superior to angiography-guided PCI or medical treatment alone.7-9

 

 

In this context, Fearon and the FAME-3 trial investigators studied the use of FFR-guided PCI with second-generation zotarolimus drug-eluting stents compared to CABG in patients with 3-vessel CAD. They randomized patients with angiographically identified 3-vessel CAD in a 1:1 ratio to receive FFR-guided PCI or CABG at 48 sites internationally. Patients with left main CAD, recent ST-elevation MI, cardiogenic shock, and left-ventricular ejection fraction <30% were excluded. The study results (composite primary end point incidence of 10.6% for patients with FFR-guided PCI vs 6.9% in the CABG group [HR, 1.5; 95% CI, 1.1-2.2; P = 0.35 for noninferiority]) showed that FFR-guided PCI did not meet the noninferiority criterion.

Although the FAME-3 study is an important study, there are a few points to consider. First, 24% of the lesions had a FFR measured at >0.80. The benefit of FFR-guided PCI lies in the number of lesions that are safely deferred compared to angiography-guided PCI. The small number of deferred lesions could have limited the benefit of FFR guidance compared with angiography. Second, this study did not include all comers who had angiographic 3-vessel disease. Patients who had FFR assessment of moderate lesions at the time of diagnostic angiogram and were found to have FFR >0.80 or were deemed single- or 2-vessel disease were likely treated with PCI. Therefore, as the authors point out, the patients included in this study may have been skewed to a higher-risk population compared to previous studies.

Third, the study may not reflect contemporary interventional practice, as the use of intravascular ultrasound was very low (12%). Intravascular ultrasound–guided PCI has been associated with increased luminal gain and improved outcomes compared to angiography-guided PCI.10 Although 20% of the patients in each arm were found to have chronic total occlusions, the completeness of revascularization has not yet been reported. It is possible that the PCI arm had fewer complete revascularizations, which has been shown in previous observational studies to be associated with worse clinical outcomes.11,12

Although the current guidelines favor CABG over PCI in patients with multivessel disease, this recommendation is stratified by anatomic complexity.6 In fact, in the European guidelines, CABG and PCI are both class I recommendations for the treatment of 3-vessel disease with low SYNTAX score in patients without diabetes.5 Although the FAME-3 study failed to show noninferiority in the overall population, when stratified by the SYNTAX score, the major adverse cardiac event rate for the PCI group was numerically lower than that of the CABG group. The results from the FAME-3 study are overall in line with the previous studies and the current guidelines. Future studies are necessary to assess the outcomes of multivessel PCI compared to CABG using the most contemporary interventional practice and achieving complete revascularization in the PCI arm.

Applications for Clinical Practice

In patients with 3-vessel disease, FFR-guided PCI was not found to be noninferior to CABG; this finding is consistent with previous studies.

—Shubham Kanake, BS, Chirag Bavishi, MD, MPH, and Taishi Hirai, MD, University of Missouri, Columbia, MO

Disclosures: None.

References

1. Farkouh ME, Domanski M, Sleeper LA, et al; FREEDOM Trial Investigators. Strategies for multivessel revascularization in patients with diabetes. N Engl J Med. 2012;367(25):2375-2384. doi:10.1056/NEJMoa1211585

2. Serruys PW, Morice MC, Kappetein AP, et al; SYNTAX Investigators. Percutaneous coronary intervention versus coronary-artery bypass grafting for severe coronary artery disease. N Engl J Med. 2009;360(10):961-972. doi:10.1056/NEJMoa0804626

3. Park SJ, Ahn JM, Kim YH, et al; BEST Trial Investigators. Trial of everolimus-eluting stents or bypass surgery for coronary disease. N Engl J Med. 2015;372(13):1204-1212. doi:10.1056/NEJMoa1415447

4. Stone GW, Kappetein AP, Sabik JF, et al; EXCEL Trial Investigators. Five-year outcomes after PCI or CABG for left main coronary disease. N Engl J Med. 2019; 381(19):1820-1830. doi:10.1056/NEJMoa1909406

5. Neumann FJ, Sousa-Uva M, Ahlsson A, et al; ESC Scientific Document Group. 2018 ESC/EACTS guidelines on myocardial revascularization. Eur Heart J. 2019;40(2):87-165. doi:10.1093/eurheartj/ehy394

6. Writing Committee Members, Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

7. Tonino PAL, De Bruyne B, Pijls NHJ, et al; FAME Study Investigators. Fractional flow reserve versus angiography for guiding percutaneous coronary intervention. N Engl J Med. 2009;360(3):213-224. doi:10.1056/NEJMoa0807611

8. De Bruyne B, Fearon WF, Pijls NHJ, et al; FAME 2 Trial Investigators. Fractional flow reserve-guided PCI for stable coronary artery disease. N Engl J Med. 2014;371(13):1208-1217. doi:10.1056/NEJMoa1408758

9. Xaplanteris P, Fournier S, Pijls NHJ, et al; FAME 2 Investigators. Five-year outcomes with PCI guided by fractional flow reserve. N Engl J Med. 2018;379(3):250-259. doi:10.1056/NEJMoa1803538

10. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: The ULTIMATE trial. J Am Coll Cardiol. 2018;72:3126-3137. doi:10.1016/j.jacc.2018.09.013

11. Garcia S, Sandoval Y, Roukoz H, et al. Outcomes after complete versus incomplete revascularization of patients with multivessel coronary artery disease: a meta-analysis of 89,883 patients enrolled in randomized clinical trials and observational studies. J Am Coll Cardiol. 2013;62:1421-1431. doi:10.1016/j.jacc.2013.05.033

12. Farooq V, Serruys PW, Garcia-Garcia HM et al. The negative impact of incomplete angiographic revascularization on clinical outcomes and its association with total occlusions: the SYNTAX (Synergy Between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery) trial. J Am Coll Cardiol. 2013;61:282-294. doi: 10.1016/j.jacc.2012.10.017

References

1. Farkouh ME, Domanski M, Sleeper LA, et al; FREEDOM Trial Investigators. Strategies for multivessel revascularization in patients with diabetes. N Engl J Med. 2012;367(25):2375-2384. doi:10.1056/NEJMoa1211585

2. Serruys PW, Morice MC, Kappetein AP, et al; SYNTAX Investigators. Percutaneous coronary intervention versus coronary-artery bypass grafting for severe coronary artery disease. N Engl J Med. 2009;360(10):961-972. doi:10.1056/NEJMoa0804626

3. Park SJ, Ahn JM, Kim YH, et al; BEST Trial Investigators. Trial of everolimus-eluting stents or bypass surgery for coronary disease. N Engl J Med. 2015;372(13):1204-1212. doi:10.1056/NEJMoa1415447

4. Stone GW, Kappetein AP, Sabik JF, et al; EXCEL Trial Investigators. Five-year outcomes after PCI or CABG for left main coronary disease. N Engl J Med. 2019; 381(19):1820-1830. doi:10.1056/NEJMoa1909406

5. Neumann FJ, Sousa-Uva M, Ahlsson A, et al; ESC Scientific Document Group. 2018 ESC/EACTS guidelines on myocardial revascularization. Eur Heart J. 2019;40(2):87-165. doi:10.1093/eurheartj/ehy394

6. Writing Committee Members, Lawton JS, Tamis-Holland JE, Bangalore S, et al. 2021 ACC/AHA/SCAI guideline for coronary artery revascularization: a report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2022;79(2):e21-e129. doi:10.1016/j.jacc.2021.09.006

7. Tonino PAL, De Bruyne B, Pijls NHJ, et al; FAME Study Investigators. Fractional flow reserve versus angiography for guiding percutaneous coronary intervention. N Engl J Med. 2009;360(3):213-224. doi:10.1056/NEJMoa0807611

8. De Bruyne B, Fearon WF, Pijls NHJ, et al; FAME 2 Trial Investigators. Fractional flow reserve-guided PCI for stable coronary artery disease. N Engl J Med. 2014;371(13):1208-1217. doi:10.1056/NEJMoa1408758

9. Xaplanteris P, Fournier S, Pijls NHJ, et al; FAME 2 Investigators. Five-year outcomes with PCI guided by fractional flow reserve. N Engl J Med. 2018;379(3):250-259. doi:10.1056/NEJMoa1803538

10. Zhang J, Gao X, Kan J, et al. Intravascular ultrasound versus angiography-guided drug-eluting stent implantation: The ULTIMATE trial. J Am Coll Cardiol. 2018;72:3126-3137. doi:10.1016/j.jacc.2018.09.013

11. Garcia S, Sandoval Y, Roukoz H, et al. Outcomes after complete versus incomplete revascularization of patients with multivessel coronary artery disease: a meta-analysis of 89,883 patients enrolled in randomized clinical trials and observational studies. J Am Coll Cardiol. 2013;62:1421-1431. doi:10.1016/j.jacc.2013.05.033

12. Farooq V, Serruys PW, Garcia-Garcia HM et al. The negative impact of incomplete angiographic revascularization on clinical outcomes and its association with total occlusions: the SYNTAX (Synergy Between Percutaneous Coronary Intervention with Taxus and Cardiac Surgery) trial. J Am Coll Cardiol. 2013;61:282-294. doi: 10.1016/j.jacc.2012.10.017

Issue
Journal of Clinical Outcomes Management - 29(1)
Issue
Journal of Clinical Outcomes Management - 29(1)
Page Number
1-3
Page Number
1-3
Publications
Publications
Topics
Article Type
Display Headline
Comparison of Fractional Flow Reserve–Guided PCI and Coronary Bypass Surgery in 3-Vessel Disease
Display Headline
Comparison of Fractional Flow Reserve–Guided PCI and Coronary Bypass Surgery in 3-Vessel Disease
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Evaluation of Intermittent Energy Restriction and Continuous Energy Restriction on Weight Loss and Blood Pressure Control in Overweight and Obese Patients With Hypertension

Article Type
Changed
Wed, 11/24/2021 - 13:57
Display Headline
Evaluation of Intermittent Energy Restriction and Continuous Energy Restriction on Weight Loss and Blood Pressure Control in Overweight and Obese Patients With Hypertension

Study Overview

Objective. To compare the effects of intermittent energy restriction (IER) with those of continuous energy restriction (CER) on blood pressure control and weight loss in overweight and obese patients with hypertension during a 6-month period.

Design. Randomized controlled trial.

Settings and participants. The trial was conducted at the Affiliated Hospital of Jiaxing University from June 1, 2020, to April 30, 2021. Chinese adults were recruited using advertisements and flyers posted in the hospital and local communities. Prior to participation in study activities, all participants gave informed consent prior to recruitment and were provided compensation in the form of a $38 voucher at 3 and 6 months for their time for participating in the study.

The main inclusion criteria were patients between the ages of 18 and 70 years, hypertension, and body mass index (BMI) ranging from 24 to 40 kg/m2. The exclusion criteria were systolic blood pressure (SBP) ≥ 180 mmHg or diastolic blood pressure (DBP) ≥ 120 mmHg, type 1 or 2 diabetes with a history of severe hypoglycemic episodes, pregnancy or breastfeeding, usage of glucagon-like peptide 1 receptor agonists, weight loss > 5 kg within the past 3 months or previous weight loss surgery, and inability to adhere to the dietary protocol.

Of the 294 participants screened for eligibility, 205 were randomized in a 1:1 ratio to the IER group (n = 102) or the CER group (n = 103), stratified by sex and BMI (as overweight or obese). All participants were required to have a stable medication regimen and weight in the 3 months prior to enrollment and not to use weight-loss drugs or vitamin supplements for the duration of the study. Researchers and participants were not blinded to the study group assignment.

Interventions. Participants randomly assigned to the IER group followed a 5:2 eating pattern: a very-low-energy diet of 500-600 kcal for 2 days of the week along with their usual diet for the other 5 days. The 2 days of calorie restriction could be consecutive or nonconsecutive, with a minimum of 0.8 g supplemental protein per kg of body weight per day, in accordance with the 2016 Dietary Guidelines for Chinese Residents. The CER group was advised to consume 1000 kcal/day for women and 1200 kcal/day for men on a 7-day energy restriction. That is, they were prescribed a daily 25% restriction based on the general principles of a Mediterranean-type diet (30% fat, 45-50% carbohydrate, and 20-25% protein).

Both groups received dietary education from a qualified dietitian and were recommended to maintain their current daily activity levels throughout the trial. Written dietary information brochures with portion advice and sample meal plans were provided to improve compliance in each group. All participants received a digital cooking scale to weigh foods to ensure accuracy of intake and were required to keep a food diary while following the recommended recipe on 2 days/week during calorie restriction to help with adherence. No food was provided. All participants were followed up by regular outpatient visits to both cardiologists and dietitians once a month. Diet checklists, activity schedules, and weight were reviewed to assess compliance with dietary advice at each visit.

 

 

Of note, participants were encouraged to measure and record their BP twice daily, and if 2 consecutive BP readings were < 110/70 mmHg and/or accompanied by hypotensive episodes with symptoms (dizziness, nausea, headache, and fatigue), they were asked to contact the investigators directly. Antihypertensive medication changes were then made in consultation with cardiologists. In addition, a medication management protocol (ie, doses of antidiabetic medications, including insulin and sulfonylurea) was designed to avoid hypoglycemia. Medication could be reduced in the CER group based on the basal dose at the endocrinologist’s discretion. In the IER group, insulin and sulfonylureas were discontinued on calorie restriction days only, and long-acting insulin was discontinued the night before the IER day. Insulin was not to be resumed until a full day’s caloric intake was achieved.

Measures and analysis. The primary outcomes of this study were changes in BP and weight (measured using an automatic digital sphygmomanometer and an electronic scale), and the secondary outcomes were changes in body composition (assessed by dual-energy x-ray absorptiometry scanning), as well as glycosylated hemoglobin A1c (HbA1c) levels and blood lipids after 6 months. All outcome measures were recorded at baseline and at each monthly visit. Incidence rates of hypoglycemia were based on blood glucose (defined as blood glucose < 70 mg/dL) and/or symptomatic hypoglycemia (symptoms of sweating, paleness, dizziness, and confusion). Two cardiologists who were blind to the patients’ diet condition measured and recorded all pertinent clinical parameters and adjudicated serious adverse events.

Data were compared using independent-samples t-tests or the Mann–Whitney U test for continuous variables, and Pearson’s χ2 test or Fisher’s exact test for categorial variables as appropriate. Repeated-measures ANOVA via a linear mixed model was employed to test the effects of diet, time, and their interaction. In subgroup analyses, differential effects of the intervention on the primary outcomes were evaluated with respect to patients’ level of education, domicile, and sex based on the statistical significance of the interaction term for the subgroup of interest in the multivariate model. Analyses were performed based on completers and on an intention-to-treat principle.

Main results. Among the 205 randomized participants, 118 were women and 87 were men; mean (SD) age was 50.5 (8.8) years; mean (SD) BMI was 28.7 (2.6); mean (SD) SBP was 143 (10) mmHg; and mean (SD) DBP was 91 (9) mmHg. At the end of the 6-month intervention, 173 (84.4%) completed the study (IER group: n = 88; CER group: n = 85). Both groups had similar dropout rates at 6 months (IER group: 14 participants [13.7%]; CER group: 18 participants [17.5%]; P = .83) and were well matched for baseline characteristics except for triglyceride levels.

In the completers analysis, both groups experienced significant reductions in weight (mean [SEM]), but there was no difference between treatment groups (−7.2 [0.6] kg in the IER group vs −7.1 [0.6] kg in the CER group; diet by time P = .72). Similarly, the change in SBP and DBP achieved was statistically significant over time, but there was also no difference between the dietary interventions (−8 [0.7] mmHg in the IER group vs −8 [0.6] mmHg in the CER group, diet by time P = .68; −6 [0.6] mmHg in the IER group vs −6 [0.5] mmHg in the CER group, diet by time P = .53]. Subgroup analyses of the association of the intervention with weight, SBP and DBP by sex, education, and domicile showed no significant between-group differences.

 

 

All measures of body composition decreased significantly at 6 months with both groups experiencing comparable reductions in total fat mass (−5.5 [0.6] kg in the IER group vs −4.8 [0.5] kg in the CER group, diet by time P = .08) and android fat mass (−1.1 [0.2] kg in the IER group vs −0.8 [0.2] kg in the CER group, diet by time P = .16). Of note, participants in the CER group lost significantly more total fat-free mass than did participants in the IER group (mean [SEM], −2.3 [0.2] kg vs −1.7 [0.2] kg; P = .03], and there was a trend toward a greater change in total fat mass in the IER group (P = .08). The secondary outcome of mean (SEM) HbA1c (−0.2% [0.1%]) and blood lipid levels (triglyceride level, −1.0 [0.3] mmol/L; total cholesterol level, −0.9 [0.2] mmol/L; low-density lipoprotein cholesterol level, −0.9 [0.2 mmol/L; high-density lipoprotein cholesterol level, 0.7 [0.3] mmol/L] improved with weight loss (P < .05), with no differences between groups (diet by time P > .05).

The intention-to-treat analysis demonstrated that IER and CER are equally effective for weight loss and blood pressure control: both groups experienced significant reductions in weight, SBP, and DBP, but with no difference between treatment groups – mean (SEM) weight change with IER was −7.0 (0.6) kg vs −6.8 (0.6) kg with CER; the mean (SEM) SBP with IER was −7 (0.7) mmHg vs −7 (0.6) mmHg with CER; and the mean (SEM) DBP with IER was −6 (0.5) mmHg vs −5 (0.5) mmHg with CER, (diet by time P = .62, .39, and .41, respectively). There were favorable improvements in body composition, HbA1c, and blood lipid levels, with no differences between groups.

Conclusion. A 2-day severe energy restriction with 5 days of habitual eating compared to 7 days of CER provides an acceptable alternative for BP control and weight loss in overweight and obese individuals with hypertension after 6 months. IER may offer a useful alternative strategy for this population, who find continuous weight-loss diets too difficult to maintain.

Commentary

Globally, obesity represents a major health challenge as it substantially increases the risk of diseases such as hypertension, type 2 diabetes, and coronary heart disease.1 Lifestyle modifications, including weight loss and increased physical activity, are recommended in major guidelines as a first-step intervention in the treatment of hypertensive patients.2 However, lifestyle and behavioral interventions aimed at reducing calorie intake through low-calorie dieting is challenging as it is dependent on individual motivation and adherence to a strict, continuous protocol. Further, CER strategies have limited effectiveness because complex and persistent hormonal, metabolic, and neurochemical adaptations defend against weight loss and promote weight regain.3-4 IER has drawn attention in the popular media as an alternative to CER due to its feasibility and even potential for higher rates of compliance.5

This study adds to the literature as it is the first randomized controlled trial (to the knowledge of the authors at the time of publication) to explore 2 forms of energy restriction – CER and IER – and their impact on weight loss, BP, body composition, HbA1c, and blood lipid levels in overweight and obese patients with high blood pressure. Results from this study showed that IER is as effective as, but not superior to, CER (in terms of the outcomes measures assessed). Specifically, findings highlighted that the 5:2 diet is an effective strategy and noninferior to that of daily calorie restriction for BP and weight control. In addition, both weight loss and BP reduction were greater in a subgroup of obese compared with overweight participants, which indicates that obese populations may benefit more from energy restriction. As the authors highlight, this study both aligns with and expands on current related literature.

 

 

This study has both strengths and limitations, especially with regard to the design and data analysis strategy. A key strength is the randomized controlled trial design which enables increased internal validity and decreases several sources of bias, including selection bias and confounding. In addition, it was also designed as a pragmatic trial, with the protocol reflecting efforts to replicate the real-world environment by not supplying meal replacements or food. Notably, only 9 patients could not comply with the protocol, indicating that acceptability of the diet protocol was high. However, as this was only a 6-month long study, further studies are needed to determine whether a 5:2 diet is sustainable (and effective) in the long-term compared with CER, which the authors highlight. The study was also adequately powered to detect clinically meaningful differences in weight loss and SBP, and appropriate analyses were performed on both the basis of completers and on an intention-to-treat principle. However, further studies are needed that are adequately powered to also detect clinically meaningful differences in the other measures, ie, body composition, HbA1c, and blood lipid levels. Importantly, generalizability of findings from this study is limited as the study population comprises only Chinese adults, predominately middle-aged, overweight, and had mildly to moderately elevated SBP and DBP, and excluded diabetic patients. Thus, findings are not necessarily applicable to individuals with highly elevated blood pressure or poorly controlled diabetes.

Applications for Clinical Practice

Results of this study demonstrated that IER is an effective alternative diet strategy for weight loss and blood pressure control in overweight and obese patients with hypertension and is comparable to CER. This is relevant for clinical practice as IER may be easier to maintain in this population compared to continuous weight-loss diets. Importantly, both types of calorie restriction require clinical oversight as medication changes and periodic monitoring of hypotensive and hypoglycemic episodes are needed. Clinicians should consider what is feasible and sustainable for their patients when recommending intermittent energy restriction.

Financial disclosures: None.

References

1. Blüher M. Obesity: global epidemiology and pathogenesis. Nat Rev Endocrinol. 2019;15(5):288-298. doi:10.1038/s41574-019-0176-8

2. Unger T, Borghi C, Charchar F, et al. 2020 International Society of Hypertension Global hypertension practice guidelines. J Hypertens. 2020;38(6):982-1004. doi:10.1097/HJH.0000000000002453 

3. Müller MJ, Enderle J, Bosy-Westphal A. Changes in Energy Expenditure with Weight Gain and Weight Loss in Humans. Curr Obes Rep. 2016;5(4):413-423. doi:10.1007/s13679-016-0237-4

4. Sainsbury A, Wood RE, Seimon RV, et al. Rationale for novel intermittent dieting strategies to attenuate adaptive responses to energy restriction. Obes Rev. 2018;19 Suppl 1:47–60. doi:10.1111/obr.12787

5. Davis CS, Clarke RE, Coulter SN, et al. Intermittent energy restriction and weight loss: a systematic review. Eur J Clin Nutr. 2016;70(3):292-299. doi:10.1038/ejcn.2015.195

Article PDF
Issue
Journal of Clinical Outcomes Management - 28(6)
Publications
Topics
Page Number
256-259
Sections
Article PDF
Article PDF

Study Overview

Objective. To compare the effects of intermittent energy restriction (IER) with those of continuous energy restriction (CER) on blood pressure control and weight loss in overweight and obese patients with hypertension during a 6-month period.

Design. Randomized controlled trial.

Settings and participants. The trial was conducted at the Affiliated Hospital of Jiaxing University from June 1, 2020, to April 30, 2021. Chinese adults were recruited using advertisements and flyers posted in the hospital and local communities. Prior to participation in study activities, all participants gave informed consent prior to recruitment and were provided compensation in the form of a $38 voucher at 3 and 6 months for their time for participating in the study.

The main inclusion criteria were patients between the ages of 18 and 70 years, hypertension, and body mass index (BMI) ranging from 24 to 40 kg/m2. The exclusion criteria were systolic blood pressure (SBP) ≥ 180 mmHg or diastolic blood pressure (DBP) ≥ 120 mmHg, type 1 or 2 diabetes with a history of severe hypoglycemic episodes, pregnancy or breastfeeding, usage of glucagon-like peptide 1 receptor agonists, weight loss > 5 kg within the past 3 months or previous weight loss surgery, and inability to adhere to the dietary protocol.

Of the 294 participants screened for eligibility, 205 were randomized in a 1:1 ratio to the IER group (n = 102) or the CER group (n = 103), stratified by sex and BMI (as overweight or obese). All participants were required to have a stable medication regimen and weight in the 3 months prior to enrollment and not to use weight-loss drugs or vitamin supplements for the duration of the study. Researchers and participants were not blinded to the study group assignment.

Interventions. Participants randomly assigned to the IER group followed a 5:2 eating pattern: a very-low-energy diet of 500-600 kcal for 2 days of the week along with their usual diet for the other 5 days. The 2 days of calorie restriction could be consecutive or nonconsecutive, with a minimum of 0.8 g supplemental protein per kg of body weight per day, in accordance with the 2016 Dietary Guidelines for Chinese Residents. The CER group was advised to consume 1000 kcal/day for women and 1200 kcal/day for men on a 7-day energy restriction. That is, they were prescribed a daily 25% restriction based on the general principles of a Mediterranean-type diet (30% fat, 45-50% carbohydrate, and 20-25% protein).

Both groups received dietary education from a qualified dietitian and were recommended to maintain their current daily activity levels throughout the trial. Written dietary information brochures with portion advice and sample meal plans were provided to improve compliance in each group. All participants received a digital cooking scale to weigh foods to ensure accuracy of intake and were required to keep a food diary while following the recommended recipe on 2 days/week during calorie restriction to help with adherence. No food was provided. All participants were followed up by regular outpatient visits to both cardiologists and dietitians once a month. Diet checklists, activity schedules, and weight were reviewed to assess compliance with dietary advice at each visit.

 

 

Of note, participants were encouraged to measure and record their BP twice daily, and if 2 consecutive BP readings were < 110/70 mmHg and/or accompanied by hypotensive episodes with symptoms (dizziness, nausea, headache, and fatigue), they were asked to contact the investigators directly. Antihypertensive medication changes were then made in consultation with cardiologists. In addition, a medication management protocol (ie, doses of antidiabetic medications, including insulin and sulfonylurea) was designed to avoid hypoglycemia. Medication could be reduced in the CER group based on the basal dose at the endocrinologist’s discretion. In the IER group, insulin and sulfonylureas were discontinued on calorie restriction days only, and long-acting insulin was discontinued the night before the IER day. Insulin was not to be resumed until a full day’s caloric intake was achieved.

Measures and analysis. The primary outcomes of this study were changes in BP and weight (measured using an automatic digital sphygmomanometer and an electronic scale), and the secondary outcomes were changes in body composition (assessed by dual-energy x-ray absorptiometry scanning), as well as glycosylated hemoglobin A1c (HbA1c) levels and blood lipids after 6 months. All outcome measures were recorded at baseline and at each monthly visit. Incidence rates of hypoglycemia were based on blood glucose (defined as blood glucose < 70 mg/dL) and/or symptomatic hypoglycemia (symptoms of sweating, paleness, dizziness, and confusion). Two cardiologists who were blind to the patients’ diet condition measured and recorded all pertinent clinical parameters and adjudicated serious adverse events.

Data were compared using independent-samples t-tests or the Mann–Whitney U test for continuous variables, and Pearson’s χ2 test or Fisher’s exact test for categorial variables as appropriate. Repeated-measures ANOVA via a linear mixed model was employed to test the effects of diet, time, and their interaction. In subgroup analyses, differential effects of the intervention on the primary outcomes were evaluated with respect to patients’ level of education, domicile, and sex based on the statistical significance of the interaction term for the subgroup of interest in the multivariate model. Analyses were performed based on completers and on an intention-to-treat principle.

Main results. Among the 205 randomized participants, 118 were women and 87 were men; mean (SD) age was 50.5 (8.8) years; mean (SD) BMI was 28.7 (2.6); mean (SD) SBP was 143 (10) mmHg; and mean (SD) DBP was 91 (9) mmHg. At the end of the 6-month intervention, 173 (84.4%) completed the study (IER group: n = 88; CER group: n = 85). Both groups had similar dropout rates at 6 months (IER group: 14 participants [13.7%]; CER group: 18 participants [17.5%]; P = .83) and were well matched for baseline characteristics except for triglyceride levels.

In the completers analysis, both groups experienced significant reductions in weight (mean [SEM]), but there was no difference between treatment groups (−7.2 [0.6] kg in the IER group vs −7.1 [0.6] kg in the CER group; diet by time P = .72). Similarly, the change in SBP and DBP achieved was statistically significant over time, but there was also no difference between the dietary interventions (−8 [0.7] mmHg in the IER group vs −8 [0.6] mmHg in the CER group, diet by time P = .68; −6 [0.6] mmHg in the IER group vs −6 [0.5] mmHg in the CER group, diet by time P = .53]. Subgroup analyses of the association of the intervention with weight, SBP and DBP by sex, education, and domicile showed no significant between-group differences.

 

 

All measures of body composition decreased significantly at 6 months with both groups experiencing comparable reductions in total fat mass (−5.5 [0.6] kg in the IER group vs −4.8 [0.5] kg in the CER group, diet by time P = .08) and android fat mass (−1.1 [0.2] kg in the IER group vs −0.8 [0.2] kg in the CER group, diet by time P = .16). Of note, participants in the CER group lost significantly more total fat-free mass than did participants in the IER group (mean [SEM], −2.3 [0.2] kg vs −1.7 [0.2] kg; P = .03], and there was a trend toward a greater change in total fat mass in the IER group (P = .08). The secondary outcome of mean (SEM) HbA1c (−0.2% [0.1%]) and blood lipid levels (triglyceride level, −1.0 [0.3] mmol/L; total cholesterol level, −0.9 [0.2] mmol/L; low-density lipoprotein cholesterol level, −0.9 [0.2 mmol/L; high-density lipoprotein cholesterol level, 0.7 [0.3] mmol/L] improved with weight loss (P < .05), with no differences between groups (diet by time P > .05).

The intention-to-treat analysis demonstrated that IER and CER are equally effective for weight loss and blood pressure control: both groups experienced significant reductions in weight, SBP, and DBP, but with no difference between treatment groups – mean (SEM) weight change with IER was −7.0 (0.6) kg vs −6.8 (0.6) kg with CER; the mean (SEM) SBP with IER was −7 (0.7) mmHg vs −7 (0.6) mmHg with CER; and the mean (SEM) DBP with IER was −6 (0.5) mmHg vs −5 (0.5) mmHg with CER, (diet by time P = .62, .39, and .41, respectively). There were favorable improvements in body composition, HbA1c, and blood lipid levels, with no differences between groups.

Conclusion. A 2-day severe energy restriction with 5 days of habitual eating compared to 7 days of CER provides an acceptable alternative for BP control and weight loss in overweight and obese individuals with hypertension after 6 months. IER may offer a useful alternative strategy for this population, who find continuous weight-loss diets too difficult to maintain.

Commentary

Globally, obesity represents a major health challenge as it substantially increases the risk of diseases such as hypertension, type 2 diabetes, and coronary heart disease.1 Lifestyle modifications, including weight loss and increased physical activity, are recommended in major guidelines as a first-step intervention in the treatment of hypertensive patients.2 However, lifestyle and behavioral interventions aimed at reducing calorie intake through low-calorie dieting is challenging as it is dependent on individual motivation and adherence to a strict, continuous protocol. Further, CER strategies have limited effectiveness because complex and persistent hormonal, metabolic, and neurochemical adaptations defend against weight loss and promote weight regain.3-4 IER has drawn attention in the popular media as an alternative to CER due to its feasibility and even potential for higher rates of compliance.5

This study adds to the literature as it is the first randomized controlled trial (to the knowledge of the authors at the time of publication) to explore 2 forms of energy restriction – CER and IER – and their impact on weight loss, BP, body composition, HbA1c, and blood lipid levels in overweight and obese patients with high blood pressure. Results from this study showed that IER is as effective as, but not superior to, CER (in terms of the outcomes measures assessed). Specifically, findings highlighted that the 5:2 diet is an effective strategy and noninferior to that of daily calorie restriction for BP and weight control. In addition, both weight loss and BP reduction were greater in a subgroup of obese compared with overweight participants, which indicates that obese populations may benefit more from energy restriction. As the authors highlight, this study both aligns with and expands on current related literature.

 

 

This study has both strengths and limitations, especially with regard to the design and data analysis strategy. A key strength is the randomized controlled trial design which enables increased internal validity and decreases several sources of bias, including selection bias and confounding. In addition, it was also designed as a pragmatic trial, with the protocol reflecting efforts to replicate the real-world environment by not supplying meal replacements or food. Notably, only 9 patients could not comply with the protocol, indicating that acceptability of the diet protocol was high. However, as this was only a 6-month long study, further studies are needed to determine whether a 5:2 diet is sustainable (and effective) in the long-term compared with CER, which the authors highlight. The study was also adequately powered to detect clinically meaningful differences in weight loss and SBP, and appropriate analyses were performed on both the basis of completers and on an intention-to-treat principle. However, further studies are needed that are adequately powered to also detect clinically meaningful differences in the other measures, ie, body composition, HbA1c, and blood lipid levels. Importantly, generalizability of findings from this study is limited as the study population comprises only Chinese adults, predominately middle-aged, overweight, and had mildly to moderately elevated SBP and DBP, and excluded diabetic patients. Thus, findings are not necessarily applicable to individuals with highly elevated blood pressure or poorly controlled diabetes.

Applications for Clinical Practice

Results of this study demonstrated that IER is an effective alternative diet strategy for weight loss and blood pressure control in overweight and obese patients with hypertension and is comparable to CER. This is relevant for clinical practice as IER may be easier to maintain in this population compared to continuous weight-loss diets. Importantly, both types of calorie restriction require clinical oversight as medication changes and periodic monitoring of hypotensive and hypoglycemic episodes are needed. Clinicians should consider what is feasible and sustainable for their patients when recommending intermittent energy restriction.

Financial disclosures: None.

Study Overview

Objective. To compare the effects of intermittent energy restriction (IER) with those of continuous energy restriction (CER) on blood pressure control and weight loss in overweight and obese patients with hypertension during a 6-month period.

Design. Randomized controlled trial.

Settings and participants. The trial was conducted at the Affiliated Hospital of Jiaxing University from June 1, 2020, to April 30, 2021. Chinese adults were recruited using advertisements and flyers posted in the hospital and local communities. Prior to participation in study activities, all participants gave informed consent prior to recruitment and were provided compensation in the form of a $38 voucher at 3 and 6 months for their time for participating in the study.

The main inclusion criteria were patients between the ages of 18 and 70 years, hypertension, and body mass index (BMI) ranging from 24 to 40 kg/m2. The exclusion criteria were systolic blood pressure (SBP) ≥ 180 mmHg or diastolic blood pressure (DBP) ≥ 120 mmHg, type 1 or 2 diabetes with a history of severe hypoglycemic episodes, pregnancy or breastfeeding, usage of glucagon-like peptide 1 receptor agonists, weight loss > 5 kg within the past 3 months or previous weight loss surgery, and inability to adhere to the dietary protocol.

Of the 294 participants screened for eligibility, 205 were randomized in a 1:1 ratio to the IER group (n = 102) or the CER group (n = 103), stratified by sex and BMI (as overweight or obese). All participants were required to have a stable medication regimen and weight in the 3 months prior to enrollment and not to use weight-loss drugs or vitamin supplements for the duration of the study. Researchers and participants were not blinded to the study group assignment.

Interventions. Participants randomly assigned to the IER group followed a 5:2 eating pattern: a very-low-energy diet of 500-600 kcal for 2 days of the week along with their usual diet for the other 5 days. The 2 days of calorie restriction could be consecutive or nonconsecutive, with a minimum of 0.8 g supplemental protein per kg of body weight per day, in accordance with the 2016 Dietary Guidelines for Chinese Residents. The CER group was advised to consume 1000 kcal/day for women and 1200 kcal/day for men on a 7-day energy restriction. That is, they were prescribed a daily 25% restriction based on the general principles of a Mediterranean-type diet (30% fat, 45-50% carbohydrate, and 20-25% protein).

Both groups received dietary education from a qualified dietitian and were recommended to maintain their current daily activity levels throughout the trial. Written dietary information brochures with portion advice and sample meal plans were provided to improve compliance in each group. All participants received a digital cooking scale to weigh foods to ensure accuracy of intake and were required to keep a food diary while following the recommended recipe on 2 days/week during calorie restriction to help with adherence. No food was provided. All participants were followed up by regular outpatient visits to both cardiologists and dietitians once a month. Diet checklists, activity schedules, and weight were reviewed to assess compliance with dietary advice at each visit.

 

 

Of note, participants were encouraged to measure and record their BP twice daily, and if 2 consecutive BP readings were < 110/70 mmHg and/or accompanied by hypotensive episodes with symptoms (dizziness, nausea, headache, and fatigue), they were asked to contact the investigators directly. Antihypertensive medication changes were then made in consultation with cardiologists. In addition, a medication management protocol (ie, doses of antidiabetic medications, including insulin and sulfonylurea) was designed to avoid hypoglycemia. Medication could be reduced in the CER group based on the basal dose at the endocrinologist’s discretion. In the IER group, insulin and sulfonylureas were discontinued on calorie restriction days only, and long-acting insulin was discontinued the night before the IER day. Insulin was not to be resumed until a full day’s caloric intake was achieved.

Measures and analysis. The primary outcomes of this study were changes in BP and weight (measured using an automatic digital sphygmomanometer and an electronic scale), and the secondary outcomes were changes in body composition (assessed by dual-energy x-ray absorptiometry scanning), as well as glycosylated hemoglobin A1c (HbA1c) levels and blood lipids after 6 months. All outcome measures were recorded at baseline and at each monthly visit. Incidence rates of hypoglycemia were based on blood glucose (defined as blood glucose < 70 mg/dL) and/or symptomatic hypoglycemia (symptoms of sweating, paleness, dizziness, and confusion). Two cardiologists who were blind to the patients’ diet condition measured and recorded all pertinent clinical parameters and adjudicated serious adverse events.

Data were compared using independent-samples t-tests or the Mann–Whitney U test for continuous variables, and Pearson’s χ2 test or Fisher’s exact test for categorial variables as appropriate. Repeated-measures ANOVA via a linear mixed model was employed to test the effects of diet, time, and their interaction. In subgroup analyses, differential effects of the intervention on the primary outcomes were evaluated with respect to patients’ level of education, domicile, and sex based on the statistical significance of the interaction term for the subgroup of interest in the multivariate model. Analyses were performed based on completers and on an intention-to-treat principle.

Main results. Among the 205 randomized participants, 118 were women and 87 were men; mean (SD) age was 50.5 (8.8) years; mean (SD) BMI was 28.7 (2.6); mean (SD) SBP was 143 (10) mmHg; and mean (SD) DBP was 91 (9) mmHg. At the end of the 6-month intervention, 173 (84.4%) completed the study (IER group: n = 88; CER group: n = 85). Both groups had similar dropout rates at 6 months (IER group: 14 participants [13.7%]; CER group: 18 participants [17.5%]; P = .83) and were well matched for baseline characteristics except for triglyceride levels.

In the completers analysis, both groups experienced significant reductions in weight (mean [SEM]), but there was no difference between treatment groups (−7.2 [0.6] kg in the IER group vs −7.1 [0.6] kg in the CER group; diet by time P = .72). Similarly, the change in SBP and DBP achieved was statistically significant over time, but there was also no difference between the dietary interventions (−8 [0.7] mmHg in the IER group vs −8 [0.6] mmHg in the CER group, diet by time P = .68; −6 [0.6] mmHg in the IER group vs −6 [0.5] mmHg in the CER group, diet by time P = .53]. Subgroup analyses of the association of the intervention with weight, SBP and DBP by sex, education, and domicile showed no significant between-group differences.

 

 

All measures of body composition decreased significantly at 6 months with both groups experiencing comparable reductions in total fat mass (−5.5 [0.6] kg in the IER group vs −4.8 [0.5] kg in the CER group, diet by time P = .08) and android fat mass (−1.1 [0.2] kg in the IER group vs −0.8 [0.2] kg in the CER group, diet by time P = .16). Of note, participants in the CER group lost significantly more total fat-free mass than did participants in the IER group (mean [SEM], −2.3 [0.2] kg vs −1.7 [0.2] kg; P = .03], and there was a trend toward a greater change in total fat mass in the IER group (P = .08). The secondary outcome of mean (SEM) HbA1c (−0.2% [0.1%]) and blood lipid levels (triglyceride level, −1.0 [0.3] mmol/L; total cholesterol level, −0.9 [0.2] mmol/L; low-density lipoprotein cholesterol level, −0.9 [0.2 mmol/L; high-density lipoprotein cholesterol level, 0.7 [0.3] mmol/L] improved with weight loss (P < .05), with no differences between groups (diet by time P > .05).

The intention-to-treat analysis demonstrated that IER and CER are equally effective for weight loss and blood pressure control: both groups experienced significant reductions in weight, SBP, and DBP, but with no difference between treatment groups – mean (SEM) weight change with IER was −7.0 (0.6) kg vs −6.8 (0.6) kg with CER; the mean (SEM) SBP with IER was −7 (0.7) mmHg vs −7 (0.6) mmHg with CER; and the mean (SEM) DBP with IER was −6 (0.5) mmHg vs −5 (0.5) mmHg with CER, (diet by time P = .62, .39, and .41, respectively). There were favorable improvements in body composition, HbA1c, and blood lipid levels, with no differences between groups.

Conclusion. A 2-day severe energy restriction with 5 days of habitual eating compared to 7 days of CER provides an acceptable alternative for BP control and weight loss in overweight and obese individuals with hypertension after 6 months. IER may offer a useful alternative strategy for this population, who find continuous weight-loss diets too difficult to maintain.

Commentary

Globally, obesity represents a major health challenge as it substantially increases the risk of diseases such as hypertension, type 2 diabetes, and coronary heart disease.1 Lifestyle modifications, including weight loss and increased physical activity, are recommended in major guidelines as a first-step intervention in the treatment of hypertensive patients.2 However, lifestyle and behavioral interventions aimed at reducing calorie intake through low-calorie dieting is challenging as it is dependent on individual motivation and adherence to a strict, continuous protocol. Further, CER strategies have limited effectiveness because complex and persistent hormonal, metabolic, and neurochemical adaptations defend against weight loss and promote weight regain.3-4 IER has drawn attention in the popular media as an alternative to CER due to its feasibility and even potential for higher rates of compliance.5

This study adds to the literature as it is the first randomized controlled trial (to the knowledge of the authors at the time of publication) to explore 2 forms of energy restriction – CER and IER – and their impact on weight loss, BP, body composition, HbA1c, and blood lipid levels in overweight and obese patients with high blood pressure. Results from this study showed that IER is as effective as, but not superior to, CER (in terms of the outcomes measures assessed). Specifically, findings highlighted that the 5:2 diet is an effective strategy and noninferior to that of daily calorie restriction for BP and weight control. In addition, both weight loss and BP reduction were greater in a subgroup of obese compared with overweight participants, which indicates that obese populations may benefit more from energy restriction. As the authors highlight, this study both aligns with and expands on current related literature.

 

 

This study has both strengths and limitations, especially with regard to the design and data analysis strategy. A key strength is the randomized controlled trial design which enables increased internal validity and decreases several sources of bias, including selection bias and confounding. In addition, it was also designed as a pragmatic trial, with the protocol reflecting efforts to replicate the real-world environment by not supplying meal replacements or food. Notably, only 9 patients could not comply with the protocol, indicating that acceptability of the diet protocol was high. However, as this was only a 6-month long study, further studies are needed to determine whether a 5:2 diet is sustainable (and effective) in the long-term compared with CER, which the authors highlight. The study was also adequately powered to detect clinically meaningful differences in weight loss and SBP, and appropriate analyses were performed on both the basis of completers and on an intention-to-treat principle. However, further studies are needed that are adequately powered to also detect clinically meaningful differences in the other measures, ie, body composition, HbA1c, and blood lipid levels. Importantly, generalizability of findings from this study is limited as the study population comprises only Chinese adults, predominately middle-aged, overweight, and had mildly to moderately elevated SBP and DBP, and excluded diabetic patients. Thus, findings are not necessarily applicable to individuals with highly elevated blood pressure or poorly controlled diabetes.

Applications for Clinical Practice

Results of this study demonstrated that IER is an effective alternative diet strategy for weight loss and blood pressure control in overweight and obese patients with hypertension and is comparable to CER. This is relevant for clinical practice as IER may be easier to maintain in this population compared to continuous weight-loss diets. Importantly, both types of calorie restriction require clinical oversight as medication changes and periodic monitoring of hypotensive and hypoglycemic episodes are needed. Clinicians should consider what is feasible and sustainable for their patients when recommending intermittent energy restriction.

Financial disclosures: None.

References

1. Blüher M. Obesity: global epidemiology and pathogenesis. Nat Rev Endocrinol. 2019;15(5):288-298. doi:10.1038/s41574-019-0176-8

2. Unger T, Borghi C, Charchar F, et al. 2020 International Society of Hypertension Global hypertension practice guidelines. J Hypertens. 2020;38(6):982-1004. doi:10.1097/HJH.0000000000002453 

3. Müller MJ, Enderle J, Bosy-Westphal A. Changes in Energy Expenditure with Weight Gain and Weight Loss in Humans. Curr Obes Rep. 2016;5(4):413-423. doi:10.1007/s13679-016-0237-4

4. Sainsbury A, Wood RE, Seimon RV, et al. Rationale for novel intermittent dieting strategies to attenuate adaptive responses to energy restriction. Obes Rev. 2018;19 Suppl 1:47–60. doi:10.1111/obr.12787

5. Davis CS, Clarke RE, Coulter SN, et al. Intermittent energy restriction and weight loss: a systematic review. Eur J Clin Nutr. 2016;70(3):292-299. doi:10.1038/ejcn.2015.195

References

1. Blüher M. Obesity: global epidemiology and pathogenesis. Nat Rev Endocrinol. 2019;15(5):288-298. doi:10.1038/s41574-019-0176-8

2. Unger T, Borghi C, Charchar F, et al. 2020 International Society of Hypertension Global hypertension practice guidelines. J Hypertens. 2020;38(6):982-1004. doi:10.1097/HJH.0000000000002453 

3. Müller MJ, Enderle J, Bosy-Westphal A. Changes in Energy Expenditure with Weight Gain and Weight Loss in Humans. Curr Obes Rep. 2016;5(4):413-423. doi:10.1007/s13679-016-0237-4

4. Sainsbury A, Wood RE, Seimon RV, et al. Rationale for novel intermittent dieting strategies to attenuate adaptive responses to energy restriction. Obes Rev. 2018;19 Suppl 1:47–60. doi:10.1111/obr.12787

5. Davis CS, Clarke RE, Coulter SN, et al. Intermittent energy restriction and weight loss: a systematic review. Eur J Clin Nutr. 2016;70(3):292-299. doi:10.1038/ejcn.2015.195

Issue
Journal of Clinical Outcomes Management - 28(6)
Issue
Journal of Clinical Outcomes Management - 28(6)
Page Number
256-259
Page Number
256-259
Publications
Publications
Topics
Article Type
Display Headline
Evaluation of Intermittent Energy Restriction and Continuous Energy Restriction on Weight Loss and Blood Pressure Control in Overweight and Obese Patients With Hypertension
Display Headline
Evaluation of Intermittent Energy Restriction and Continuous Energy Restriction on Weight Loss and Blood Pressure Control in Overweight and Obese Patients With Hypertension
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media

Preoperative Code Status Discussion in Older Adults: Are We Doing Enough?

Article Type
Changed
Wed, 11/24/2021 - 13:57
Display Headline
Preoperative Code Status Discussion in Older Adults: Are We Doing Enough?

Study Overview

Objective. The objective of this study was to evaluate orders and documentation describing perioperative management of code status in adults.

Design. A retrospective case series of all adult inpatients admitted to hospitals at 1 academic health system in the US.

Setting and participants. This retrospective case series was conducted at 5 hospitals within the University of Pennsylvania Health System. Cases included all adult inpatients admitted to hospitals between March 2017 and September 2018 who had a Do-Not-Resuscitate (DNR) order placed in their medical record during admission and subsequently underwent a surgical procedure that required anesthesia care.

Main outcome measures. Medical records of included cases were manually reviewed by the authors to verify whether a DNR order was in place at the time surgical intervention was discussed with a patient. Clinical notes and DNR orders of eligible cases were reviewed to identify documentation and outcome of goals of care discussions that were conducted within 48 hours prior to the surgical procedure. Collected data included patient demographics (age, sex, race); case characteristics (American Society of Anesthesiologists [ASA] physical status score, anesthesia type [general vs others such as regional], emergency status [emergent vs elective surgery], procedures by service [surgical including hip fracture repair, gastrostomy or jejunostomy, or exploratory laparotomy vs medical including endoscopy, bronchoscopy, or transesophageal echocardiogram]); and hospital policy for perioperative management of DNR orders (written policy encouraging discussion vs written policy plus additional initiatives, including procedure-specific DNR form). The primary outcome was the presence of a preoperative order or note documenting code status discussion or change. Data were analyzed using χ2 and Fisher exact tests and the threshold for statistical significance was P < .05.

Main results. Of the 27 665 inpatient procedures identified across 5 hospitals, 444 (1.6%) cases met the inclusion criteria. Patients from these cases aged 75 (SD 13) years (95% CI, 72-77 years); 247 (56%, 95% CI, 55%-57%) were women; and 300 (68%, 95% CI, 65%-71%) were White. A total of 426 patients (96%, 95% CI, 90%-100%) had an ASA physical status score of 3 or higher and 237 (53%, 95% CI, 51%-56%) received general anesthesia. The most common procedures performed were endoscopy (148 [33%]), hip fracture repair (43 [10%]), and gastrostomy or jejunostomy (28 [6%]). Reevaluation of code status was documented in 126 cases (28%, 95% CI, 25%-31%); code status orders were changed in 20 of 126 cases (16%, 95% CI, 7%-24%); and a note was filed without a corresponding order for 106 of 126 cases (84%, 95% CI, 75%-95%). In the majority of cases (109 of 126 [87%], 95% CI, 78%-95%) in which documented discussion occurred, DNR orders were suspended. Of 126 cases in which a discussion was documented, participants of these discussions included surgeons 10% of the time (13 cases, 95% CI, 8%-13%), members of the anesthesia team 51% of the time (64 cases, 95% CI, 49%-53%), and medicine or palliative care clinicians 39% of the time (49 cases, 95% CI, 37%-41%).

The rate of documented preoperative code status discussion was higher in patients with higher ASA physical status score (35% in patients with an ASA physical status score ≥ 4 [55 of 155] vs 25% in those with an ASA physical status score ≤ 3 [71 of 289]; P = .02). The rates of documented preoperative code status discussion were similar by anesthesia type (29% for general anesthesia [69 of 237 cases] vs 28% [57 of 207 cases] for other modalities; P = .70). The hospitals involved in this study all had a written policy encouraging rediscussion of code status before surgery. However, only 1 hospital reported added measures (eg, provision of a procedure-specific DNR form) to increase documentation of preoperative code status discussions. In this specific hospital, documentation of preoperative code status discussions was higher compared to other hospitals (67% [37 of 55 cases] vs 23% [89 of 389 cases]; P < .01).

Conclusion. In a retrospective case series conducted at 5 hospitals within 1 academic health system in the US, fewer than 1 in 5 patients with preexisting DNR orders had a documented discussion of code status prior to undergoing surgery. Additional strategies including the development of institutional protocols that facilitate perioperative management of advance directives, identification of local champions, and patient education, should be explored as means to improve preoperative code status reevaulation per guideline recommendations.

 

 

Commentary

It is not unusual that patients with a DNR order may require and undergo surgical interventions to treat reversible conditions, prevent progression of underlying disease, or mitigate distressing symptoms such as pain. For instance, intubation, mechanical ventilation, and administration of vasoactive drugs are resuscitative measures that may be needed to safely anesthetize and sedate a patient. As such, the American College of Surgeons1 has provided a statement on advance directives by patients with an existing DNR order to guide management. Specifically, the statement indicates that the best approach for these patients is a policy of “required reconsideration” of the existing DNR order. Required reconsideration means that “the patient or designated surrogate and the physicians who will be responsible for the patient’s care should, when possible, discuss the new intraoperative and perioperative risks associated with the surgical procedure, the patient’s treatment goals, and an approach for potentially life-threatening problems consistent with the patient’s values and preferences.” Moreover, the required reconsideration discussion needs to occur as early as it is practical once a decision is made to have surgery because the discussion “may result in the patient agreeing to suspend the DNR order during surgery and the perioperative period, retaining the original DNR order, or modifying the DNR order.” Given that surgical patients with DNR orders have significant comorbidities, many sustain postoperative complications, and nearly 1 in 4 die within 30 days of surgery, preoperative advance care planning (ACP) and code status discussions are particularly essential to delivering high quality surgical care.2

In the current study, Hadler et al3 conducted a retrospective analysis to evaluate orders and documentation describing perioperative management of code status in patients with existing DNR order at an academic health system in the US. The authors reported that fewer than 20% of patients with existing DNR orders had a documented discussion of code status prior to undergoing surgery. These findings add to the notion that compliance with such guidance on required reconsideration discussion is suboptimal in perioperative care in the US.4,5 A recently published study focused on patients aged more than 60 years undergoing high-risk oncologic or vascular surgeries similarly showed that the frequency of ACP discussions or advance directive documentations among older patients was low.6 This growing body of evidence is highly clinically relevant in that preoperative discussion on code status is highly relevant to the care of older adults, a population group that accounts for the majority of surgeries and is most vulnerable to poor surgical outcomes. Additionally, it highlights a disconnect between the shared recognition by surgeons and patients that ACP discussion is important in perioperative care and its low implementation rates.

Unsurprisingly, Hadler et al3 reported that added measures such as the provision of a procedure-specific DNR form led to an increase in the documentation of preoperative code status discussions in 1 of the hospitals studied. The authors suggested that strategies such as the development of institutional protocols aimed to facilitate perioperative advance directive discussions, identify local champions, and educate patients may be ways to improve preoperative code status reevaulation. The idea that institutional value and culture are key factors impacting surgeon behavior and may influence the practice of ACP discussion is not new. Thus, creative and adaptable strategies, resources, and trainings that are required by medical institutions and hospitals to support preoperative ACP discussions with patients undergoing surgeries need to be identified, validated, and implemented to optimize perioperative care in vulnerable patients.

Applications for Clinical Practice

The findings from the current study indicate that less than 20% of patients with preexisting DNR orders have a documented discussion of code status prior to undergoing surgery. Physicians and health care institutions need to identify barriers to, and implement strategies that, facilitate and optimize preoperative ACP discussions in order to provide patient-centered care in vulnerable surgical patients.

Financial disclosures: None.

References

1. American College of Surgeons Board of Regents. Statement on Advance Directives by Patients: “Do Not Resuscitate” in the Operating Room. American College of Surgeons. January 3, 2014. Accessed November 6, 2021. https://www.facs.org/about-acs/statements/19-advance-directives

2. Kazaure H, Roman S, Sosa JA. High mortality in surgical patients with do-not-resuscitate orders: analysis of 8256 patients. Arch Surg. 2011;146(8):922-928. doi:10.1001/archsurg.2011.69

3. Hadler RA, Fatuzzo M, Sahota G, Neuman MD. Perioperative Management of Do-Not-Resuscitate Orders at a Large Academic Health System. JAMA Surg. 2021;e214135. doi:10.1001/jamasurg.2021.4135

4. Coopmans VC, Gries CA. CRNA awareness and experience with perioperative DNR orders. AANA J. 2000;68(3):247-256.

5. Urman RD, Lilley EJ, Changala M, Lindvall C, Hepner DL, Bader AM. A Pilot Study to Evaluate Compliance with Guidelines for Preprocedural Reconsideration of Code Status Limitations. J Palliat Med. 2018;21(8):1152-1156. doi:10.1089/jpm.2017.0601

6. Kalbfell E, Kata A, Buffington AS, et al. Frequency of Preoperative Advance Care Planning for Older Adults Undergoing High-risk Surgery: A Secondary Analysis of a Randomized Clinical Trial. JAMA Surg. 2021;156(7):e211521. doi:10.1001/jamasurg.2021.1521

Article PDF
Issue
Journal of Clinical Outcomes Management - 28(6)
Publications
Topics
Page Number
253-255
Sections
Article PDF
Article PDF

Study Overview

Objective. The objective of this study was to evaluate orders and documentation describing perioperative management of code status in adults.

Design. A retrospective case series of all adult inpatients admitted to hospitals at 1 academic health system in the US.

Setting and participants. This retrospective case series was conducted at 5 hospitals within the University of Pennsylvania Health System. Cases included all adult inpatients admitted to hospitals between March 2017 and September 2018 who had a Do-Not-Resuscitate (DNR) order placed in their medical record during admission and subsequently underwent a surgical procedure that required anesthesia care.

Main outcome measures. Medical records of included cases were manually reviewed by the authors to verify whether a DNR order was in place at the time surgical intervention was discussed with a patient. Clinical notes and DNR orders of eligible cases were reviewed to identify documentation and outcome of goals of care discussions that were conducted within 48 hours prior to the surgical procedure. Collected data included patient demographics (age, sex, race); case characteristics (American Society of Anesthesiologists [ASA] physical status score, anesthesia type [general vs others such as regional], emergency status [emergent vs elective surgery], procedures by service [surgical including hip fracture repair, gastrostomy or jejunostomy, or exploratory laparotomy vs medical including endoscopy, bronchoscopy, or transesophageal echocardiogram]); and hospital policy for perioperative management of DNR orders (written policy encouraging discussion vs written policy plus additional initiatives, including procedure-specific DNR form). The primary outcome was the presence of a preoperative order or note documenting code status discussion or change. Data were analyzed using χ2 and Fisher exact tests and the threshold for statistical significance was P < .05.

Main results. Of the 27 665 inpatient procedures identified across 5 hospitals, 444 (1.6%) cases met the inclusion criteria. Patients from these cases aged 75 (SD 13) years (95% CI, 72-77 years); 247 (56%, 95% CI, 55%-57%) were women; and 300 (68%, 95% CI, 65%-71%) were White. A total of 426 patients (96%, 95% CI, 90%-100%) had an ASA physical status score of 3 or higher and 237 (53%, 95% CI, 51%-56%) received general anesthesia. The most common procedures performed were endoscopy (148 [33%]), hip fracture repair (43 [10%]), and gastrostomy or jejunostomy (28 [6%]). Reevaluation of code status was documented in 126 cases (28%, 95% CI, 25%-31%); code status orders were changed in 20 of 126 cases (16%, 95% CI, 7%-24%); and a note was filed without a corresponding order for 106 of 126 cases (84%, 95% CI, 75%-95%). In the majority of cases (109 of 126 [87%], 95% CI, 78%-95%) in which documented discussion occurred, DNR orders were suspended. Of 126 cases in which a discussion was documented, participants of these discussions included surgeons 10% of the time (13 cases, 95% CI, 8%-13%), members of the anesthesia team 51% of the time (64 cases, 95% CI, 49%-53%), and medicine or palliative care clinicians 39% of the time (49 cases, 95% CI, 37%-41%).

The rate of documented preoperative code status discussion was higher in patients with higher ASA physical status score (35% in patients with an ASA physical status score ≥ 4 [55 of 155] vs 25% in those with an ASA physical status score ≤ 3 [71 of 289]; P = .02). The rates of documented preoperative code status discussion were similar by anesthesia type (29% for general anesthesia [69 of 237 cases] vs 28% [57 of 207 cases] for other modalities; P = .70). The hospitals involved in this study all had a written policy encouraging rediscussion of code status before surgery. However, only 1 hospital reported added measures (eg, provision of a procedure-specific DNR form) to increase documentation of preoperative code status discussions. In this specific hospital, documentation of preoperative code status discussions was higher compared to other hospitals (67% [37 of 55 cases] vs 23% [89 of 389 cases]; P < .01).

Conclusion. In a retrospective case series conducted at 5 hospitals within 1 academic health system in the US, fewer than 1 in 5 patients with preexisting DNR orders had a documented discussion of code status prior to undergoing surgery. Additional strategies including the development of institutional protocols that facilitate perioperative management of advance directives, identification of local champions, and patient education, should be explored as means to improve preoperative code status reevaulation per guideline recommendations.

 

 

Commentary

It is not unusual that patients with a DNR order may require and undergo surgical interventions to treat reversible conditions, prevent progression of underlying disease, or mitigate distressing symptoms such as pain. For instance, intubation, mechanical ventilation, and administration of vasoactive drugs are resuscitative measures that may be needed to safely anesthetize and sedate a patient. As such, the American College of Surgeons1 has provided a statement on advance directives by patients with an existing DNR order to guide management. Specifically, the statement indicates that the best approach for these patients is a policy of “required reconsideration” of the existing DNR order. Required reconsideration means that “the patient or designated surrogate and the physicians who will be responsible for the patient’s care should, when possible, discuss the new intraoperative and perioperative risks associated with the surgical procedure, the patient’s treatment goals, and an approach for potentially life-threatening problems consistent with the patient’s values and preferences.” Moreover, the required reconsideration discussion needs to occur as early as it is practical once a decision is made to have surgery because the discussion “may result in the patient agreeing to suspend the DNR order during surgery and the perioperative period, retaining the original DNR order, or modifying the DNR order.” Given that surgical patients with DNR orders have significant comorbidities, many sustain postoperative complications, and nearly 1 in 4 die within 30 days of surgery, preoperative advance care planning (ACP) and code status discussions are particularly essential to delivering high quality surgical care.2

In the current study, Hadler et al3 conducted a retrospective analysis to evaluate orders and documentation describing perioperative management of code status in patients with existing DNR order at an academic health system in the US. The authors reported that fewer than 20% of patients with existing DNR orders had a documented discussion of code status prior to undergoing surgery. These findings add to the notion that compliance with such guidance on required reconsideration discussion is suboptimal in perioperative care in the US.4,5 A recently published study focused on patients aged more than 60 years undergoing high-risk oncologic or vascular surgeries similarly showed that the frequency of ACP discussions or advance directive documentations among older patients was low.6 This growing body of evidence is highly clinically relevant in that preoperative discussion on code status is highly relevant to the care of older adults, a population group that accounts for the majority of surgeries and is most vulnerable to poor surgical outcomes. Additionally, it highlights a disconnect between the shared recognition by surgeons and patients that ACP discussion is important in perioperative care and its low implementation rates.

Unsurprisingly, Hadler et al3 reported that added measures such as the provision of a procedure-specific DNR form led to an increase in the documentation of preoperative code status discussions in 1 of the hospitals studied. The authors suggested that strategies such as the development of institutional protocols aimed to facilitate perioperative advance directive discussions, identify local champions, and educate patients may be ways to improve preoperative code status reevaulation. The idea that institutional value and culture are key factors impacting surgeon behavior and may influence the practice of ACP discussion is not new. Thus, creative and adaptable strategies, resources, and trainings that are required by medical institutions and hospitals to support preoperative ACP discussions with patients undergoing surgeries need to be identified, validated, and implemented to optimize perioperative care in vulnerable patients.

Applications for Clinical Practice

The findings from the current study indicate that less than 20% of patients with preexisting DNR orders have a documented discussion of code status prior to undergoing surgery. Physicians and health care institutions need to identify barriers to, and implement strategies that, facilitate and optimize preoperative ACP discussions in order to provide patient-centered care in vulnerable surgical patients.

Financial disclosures: None.

Study Overview

Objective. The objective of this study was to evaluate orders and documentation describing perioperative management of code status in adults.

Design. A retrospective case series of all adult inpatients admitted to hospitals at 1 academic health system in the US.

Setting and participants. This retrospective case series was conducted at 5 hospitals within the University of Pennsylvania Health System. Cases included all adult inpatients admitted to hospitals between March 2017 and September 2018 who had a Do-Not-Resuscitate (DNR) order placed in their medical record during admission and subsequently underwent a surgical procedure that required anesthesia care.

Main outcome measures. Medical records of included cases were manually reviewed by the authors to verify whether a DNR order was in place at the time surgical intervention was discussed with a patient. Clinical notes and DNR orders of eligible cases were reviewed to identify documentation and outcome of goals of care discussions that were conducted within 48 hours prior to the surgical procedure. Collected data included patient demographics (age, sex, race); case characteristics (American Society of Anesthesiologists [ASA] physical status score, anesthesia type [general vs others such as regional], emergency status [emergent vs elective surgery], procedures by service [surgical including hip fracture repair, gastrostomy or jejunostomy, or exploratory laparotomy vs medical including endoscopy, bronchoscopy, or transesophageal echocardiogram]); and hospital policy for perioperative management of DNR orders (written policy encouraging discussion vs written policy plus additional initiatives, including procedure-specific DNR form). The primary outcome was the presence of a preoperative order or note documenting code status discussion or change. Data were analyzed using χ2 and Fisher exact tests and the threshold for statistical significance was P < .05.

Main results. Of the 27 665 inpatient procedures identified across 5 hospitals, 444 (1.6%) cases met the inclusion criteria. Patients from these cases aged 75 (SD 13) years (95% CI, 72-77 years); 247 (56%, 95% CI, 55%-57%) were women; and 300 (68%, 95% CI, 65%-71%) were White. A total of 426 patients (96%, 95% CI, 90%-100%) had an ASA physical status score of 3 or higher and 237 (53%, 95% CI, 51%-56%) received general anesthesia. The most common procedures performed were endoscopy (148 [33%]), hip fracture repair (43 [10%]), and gastrostomy or jejunostomy (28 [6%]). Reevaluation of code status was documented in 126 cases (28%, 95% CI, 25%-31%); code status orders were changed in 20 of 126 cases (16%, 95% CI, 7%-24%); and a note was filed without a corresponding order for 106 of 126 cases (84%, 95% CI, 75%-95%). In the majority of cases (109 of 126 [87%], 95% CI, 78%-95%) in which documented discussion occurred, DNR orders were suspended. Of 126 cases in which a discussion was documented, participants of these discussions included surgeons 10% of the time (13 cases, 95% CI, 8%-13%), members of the anesthesia team 51% of the time (64 cases, 95% CI, 49%-53%), and medicine or palliative care clinicians 39% of the time (49 cases, 95% CI, 37%-41%).

The rate of documented preoperative code status discussion was higher in patients with higher ASA physical status score (35% in patients with an ASA physical status score ≥ 4 [55 of 155] vs 25% in those with an ASA physical status score ≤ 3 [71 of 289]; P = .02). The rates of documented preoperative code status discussion were similar by anesthesia type (29% for general anesthesia [69 of 237 cases] vs 28% [57 of 207 cases] for other modalities; P = .70). The hospitals involved in this study all had a written policy encouraging rediscussion of code status before surgery. However, only 1 hospital reported added measures (eg, provision of a procedure-specific DNR form) to increase documentation of preoperative code status discussions. In this specific hospital, documentation of preoperative code status discussions was higher compared to other hospitals (67% [37 of 55 cases] vs 23% [89 of 389 cases]; P < .01).

Conclusion. In a retrospective case series conducted at 5 hospitals within 1 academic health system in the US, fewer than 1 in 5 patients with preexisting DNR orders had a documented discussion of code status prior to undergoing surgery. Additional strategies including the development of institutional protocols that facilitate perioperative management of advance directives, identification of local champions, and patient education, should be explored as means to improve preoperative code status reevaulation per guideline recommendations.

 

 

Commentary

It is not unusual that patients with a DNR order may require and undergo surgical interventions to treat reversible conditions, prevent progression of underlying disease, or mitigate distressing symptoms such as pain. For instance, intubation, mechanical ventilation, and administration of vasoactive drugs are resuscitative measures that may be needed to safely anesthetize and sedate a patient. As such, the American College of Surgeons1 has provided a statement on advance directives by patients with an existing DNR order to guide management. Specifically, the statement indicates that the best approach for these patients is a policy of “required reconsideration” of the existing DNR order. Required reconsideration means that “the patient or designated surrogate and the physicians who will be responsible for the patient’s care should, when possible, discuss the new intraoperative and perioperative risks associated with the surgical procedure, the patient’s treatment goals, and an approach for potentially life-threatening problems consistent with the patient’s values and preferences.” Moreover, the required reconsideration discussion needs to occur as early as it is practical once a decision is made to have surgery because the discussion “may result in the patient agreeing to suspend the DNR order during surgery and the perioperative period, retaining the original DNR order, or modifying the DNR order.” Given that surgical patients with DNR orders have significant comorbidities, many sustain postoperative complications, and nearly 1 in 4 die within 30 days of surgery, preoperative advance care planning (ACP) and code status discussions are particularly essential to delivering high quality surgical care.2

In the current study, Hadler et al3 conducted a retrospective analysis to evaluate orders and documentation describing perioperative management of code status in patients with existing DNR order at an academic health system in the US. The authors reported that fewer than 20% of patients with existing DNR orders had a documented discussion of code status prior to undergoing surgery. These findings add to the notion that compliance with such guidance on required reconsideration discussion is suboptimal in perioperative care in the US.4,5 A recently published study focused on patients aged more than 60 years undergoing high-risk oncologic or vascular surgeries similarly showed that the frequency of ACP discussions or advance directive documentations among older patients was low.6 This growing body of evidence is highly clinically relevant in that preoperative discussion on code status is highly relevant to the care of older adults, a population group that accounts for the majority of surgeries and is most vulnerable to poor surgical outcomes. Additionally, it highlights a disconnect between the shared recognition by surgeons and patients that ACP discussion is important in perioperative care and its low implementation rates.

Unsurprisingly, Hadler et al3 reported that added measures such as the provision of a procedure-specific DNR form led to an increase in the documentation of preoperative code status discussions in 1 of the hospitals studied. The authors suggested that strategies such as the development of institutional protocols aimed to facilitate perioperative advance directive discussions, identify local champions, and educate patients may be ways to improve preoperative code status reevaulation. The idea that institutional value and culture are key factors impacting surgeon behavior and may influence the practice of ACP discussion is not new. Thus, creative and adaptable strategies, resources, and trainings that are required by medical institutions and hospitals to support preoperative ACP discussions with patients undergoing surgeries need to be identified, validated, and implemented to optimize perioperative care in vulnerable patients.

Applications for Clinical Practice

The findings from the current study indicate that less than 20% of patients with preexisting DNR orders have a documented discussion of code status prior to undergoing surgery. Physicians and health care institutions need to identify barriers to, and implement strategies that, facilitate and optimize preoperative ACP discussions in order to provide patient-centered care in vulnerable surgical patients.

Financial disclosures: None.

References

1. American College of Surgeons Board of Regents. Statement on Advance Directives by Patients: “Do Not Resuscitate” in the Operating Room. American College of Surgeons. January 3, 2014. Accessed November 6, 2021. https://www.facs.org/about-acs/statements/19-advance-directives

2. Kazaure H, Roman S, Sosa JA. High mortality in surgical patients with do-not-resuscitate orders: analysis of 8256 patients. Arch Surg. 2011;146(8):922-928. doi:10.1001/archsurg.2011.69

3. Hadler RA, Fatuzzo M, Sahota G, Neuman MD. Perioperative Management of Do-Not-Resuscitate Orders at a Large Academic Health System. JAMA Surg. 2021;e214135. doi:10.1001/jamasurg.2021.4135

4. Coopmans VC, Gries CA. CRNA awareness and experience with perioperative DNR orders. AANA J. 2000;68(3):247-256.

5. Urman RD, Lilley EJ, Changala M, Lindvall C, Hepner DL, Bader AM. A Pilot Study to Evaluate Compliance with Guidelines for Preprocedural Reconsideration of Code Status Limitations. J Palliat Med. 2018;21(8):1152-1156. doi:10.1089/jpm.2017.0601

6. Kalbfell E, Kata A, Buffington AS, et al. Frequency of Preoperative Advance Care Planning for Older Adults Undergoing High-risk Surgery: A Secondary Analysis of a Randomized Clinical Trial. JAMA Surg. 2021;156(7):e211521. doi:10.1001/jamasurg.2021.1521

References

1. American College of Surgeons Board of Regents. Statement on Advance Directives by Patients: “Do Not Resuscitate” in the Operating Room. American College of Surgeons. January 3, 2014. Accessed November 6, 2021. https://www.facs.org/about-acs/statements/19-advance-directives

2. Kazaure H, Roman S, Sosa JA. High mortality in surgical patients with do-not-resuscitate orders: analysis of 8256 patients. Arch Surg. 2011;146(8):922-928. doi:10.1001/archsurg.2011.69

3. Hadler RA, Fatuzzo M, Sahota G, Neuman MD. Perioperative Management of Do-Not-Resuscitate Orders at a Large Academic Health System. JAMA Surg. 2021;e214135. doi:10.1001/jamasurg.2021.4135

4. Coopmans VC, Gries CA. CRNA awareness and experience with perioperative DNR orders. AANA J. 2000;68(3):247-256.

5. Urman RD, Lilley EJ, Changala M, Lindvall C, Hepner DL, Bader AM. A Pilot Study to Evaluate Compliance with Guidelines for Preprocedural Reconsideration of Code Status Limitations. J Palliat Med. 2018;21(8):1152-1156. doi:10.1089/jpm.2017.0601

6. Kalbfell E, Kata A, Buffington AS, et al. Frequency of Preoperative Advance Care Planning for Older Adults Undergoing High-risk Surgery: A Secondary Analysis of a Randomized Clinical Trial. JAMA Surg. 2021;156(7):e211521. doi:10.1001/jamasurg.2021.1521

Issue
Journal of Clinical Outcomes Management - 28(6)
Issue
Journal of Clinical Outcomes Management - 28(6)
Page Number
253-255
Page Number
253-255
Publications
Publications
Topics
Article Type
Display Headline
Preoperative Code Status Discussion in Older Adults: Are We Doing Enough?
Display Headline
Preoperative Code Status Discussion in Older Adults: Are We Doing Enough?
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article
Article PDF Media