Drug Derived from LSD Granted FDA Breakthrough Status for Anxiety

Article Type
Changed
Fri, 03/15/2024 - 11:41

The US Food and Drug Administration (FDA) has granted breakthrough designation to an LSD-based treatment for generalized anxiety disorder (GAD) based on promising topline data from a phase 2b clinical trial. The treatment, MM120 (lysergide d-tartrate), is being developed by Mind Medicine (MindMed) Inc.

In a news release, the company reports that a single oral dose of MM120 met its key secondary endpoint, maintaining “clinically and statistically significant” reductions in Hamilton Anxiety Scale (HAM-A) score, compared with placebo, at 12 weeks with a 65% clinical response rate and 48% clinical remission rate.

The company previously announced statistically significant improvements on the HAM-A compared with placebo at 4 weeks, which was the trial’s primary endpoint.

“I’ve conducted clinical research studies in psychiatry for over two decades and have seen studies of many drugs under development for the treatment of anxiety. That MM120 exhibited rapid and robust efficacy, solidly sustained for 12 weeks after a single dose, is truly remarkable,” study investigator David Feifel, MD, PhD, professor emeritus of psychiatry at the University of California, San Diego, and director of the Kadima Neuropsychiatry Institute in La Jolla, California, said in the news release.

“These results suggest the potential MM120 has in the treatment of anxiety, and those of us who struggle every day to alleviate anxiety in our patients look forward to seeing results from future phase 3 trials,” Dr. Feifel added.

MM120 was administered as a single dose in a monitored clinical setting with no additional therapeutic intervention. Prior to treatment with MM120, study participants were clinically tapered and then washed out from any anxiolytic or antidepressant treatments and did not receive any form of study-related psychotherapy for the duration of their participation in the study.

MM120 100 µg — the dose that demonstrated optimal clinical activity — produced a 7.7-point improvement over placebo at week 12 (P < .003; Cohen’s d = 0.81), with a 65% clinical response rate and a 48% clinical remission rate sustained to week 12.

Also at week 12, Clinical Global Impressions–Severity (CGI-S) scores in the 100-µg dose group improved on average from 4.8 to 2.2, representing a two-category shift from ‘markedly ill’ to ‘borderline ill’ (P < .004), the company reported.

Improvement was noted as early as study day 2 and was durable, with further improvements in mean HAM-A and CGI-S scores observed between weeks 4 and 12.

MM120 was generally well-tolerated. Most adverse events were rated as mild to moderate, were transient, and occurred on the day of administration, in line with the expected acute effects of the study drug.

The most common adverse events on dosing day included illusion, hallucinations, euphoric mood, anxiety, abnormal thinking, headache, paresthesia, dizziness, tremor, nausea, vomiting, feeling abnormal, mydriasis, and hyperhidrosis.

The company plans to hold an end-of-phase 2 meeting with the FDA in the first half of 2024 and start phase 3 testing in the second half of 2024.

“The FDA’s decision to designate MM120 as a breakthrough therapy for GAD and the durability data from our phase 2b study provide further validation of the important potential role this treatment can play in addressing the huge unmet need among individuals living with GAD,” Robert Barrow, director and CEO of MindMed, said in the release.

The primary data analyses from the trial will be presented at the American Psychiatric Association (APA) annual meeting in May.

A version of this article appeared on Medscape.com.



Sleep Apnea Hard on the Brain

Article Type
Changed
Thu, 03/07/2024 - 09:59

Symptoms of sleep apnea, including snorting, gasping, or paused breathing during sleep, are associated with a significantly greater risk for problems with cognition and memory, results from a large study showed.

Data from a representative sample of US adults show that those who reported sleep apnea symptoms were about 50% more likely to also report cognitive issues vs their counterparts without such symptoms.

“For clinicians, these findings suggest a potential benefit of considering sleep apnea as a possible contributing or exacerbating factor in individuals experiencing memory or cognitive problems. This could prompt further evaluation for sleep apnea, particularly in at-risk individuals,” study investigator Dominique Low, MD, MPH, department of neurology, Boston Medical Center, told this news organization.

The findings will be presented at the American Academy of Neurology (AAN) 2024 Annual Meeting on April 17.

Need to Raise Awareness

The findings are based on 4257 adults who participated in the 2017-2018 National Health and Nutrition Examination Survey and completed questionnaires covering sleep, memory, cognition, and decision-making abilities.

Those who reported snorting, gasping, or breathing pauses during sleep were categorized as experiencing sleep apnea symptoms. Those who reported memory trouble, periods of confusion, difficulty concentrating, or decision-making problems were classified as having memory or cognitive symptoms.

Overall, 1079 participants reported symptoms of sleep apnea. Compared with people without sleep apnea, those with symptoms were more likely to report cognitive problems (33% vs 20%) and had greater odds of memory or cognitive symptoms, even after adjusting for age, gender, race, and education (adjusted odds ratio, 2.02; P < .001).

“While the study did not establish a cause-and-effect relationship, the findings suggest the importance of raising awareness about the potential link between sleep and cognitive function. Early identification and treatment may improve overall health and potentially lead to a better quality of life,” Dr. Low said.

Limitations of the study include self-reported data on sleep apnea symptoms and cognitive issues sourced from one survey.

Consistent Data

Reached for comment, Matthew Pase, PhD, with the Turner Institute for Brain and Mental Health, Monash University, Melbourne, Australia, said the results are similar to earlier work that found a link between obstructive sleep apnea (OSA) and cognition.

For example, in a recent study, the presence of mild to severe OSA, identified using overnight polysomnography in five community-based cohorts with more than 5900 adults, was associated with poorer cognitive test performance, Dr. Pase told this news organization.

“These and other results underscore the importance of healthy sleep for optimal brain health. Future research is needed to test if treating OSA and other sleep disorders can reduce the risk of cognitive impairment,” Dr. Pase said.

Yet, in their latest statement on the topic, reported by this news organization, the US Preventive Services Task Force concluded there remains insufficient evidence to weigh the balance of benefits and harms of screening for OSA among asymptomatic adults and those with unrecognized symptoms.

The study had no specific funding. Dr. Low and Dr. Pase had no relevant disclosures.

A version of this article appeared on Medscape.com.


Seladelpar Could ‘Raise the Bar’ in Primary Biliary Cholangitis Treatment

Article Type
Changed
Wed, 03/06/2024 - 13:51

Seladelpar, an investigational selective agonist of peroxisome proliferator–activated receptor-delta (PPAR-delta), significantly improves liver biomarkers of disease activity and bothersome symptoms of pruritus in adults with primary biliary cholangitis (PBC), according to the full results of the RESPONSE phase 3 study.

“At a dose of 10 mg daily, 1 in 4 patients normalize their alkaline phosphatase level,” chief investigator Gideon Hirschfield, PhD, BM BChir, with the Toronto Center for Liver Disease at Toronto General Hospital, Toronto, Ontario, Canada, said in an interview.

The study data are “genuinely exciting...and support the potential for seladelpar to raise the bar in PBC treatment,” Dr. Hirschfield added in a news release.

Seladelpar is being developed by CymaBay Therapeutics, which funded the study.

The results were published online in The New England Journal of Medicine.

Topline data from the study were presented in November at The Liver Meeting 2023: American Association for the Study of Liver Diseases.

‘Unequivocal’ Progress

Up to 40% of patients with PBC have an inadequate response to first-line therapy with ursodeoxycholic acid (UDCA) and are at a high risk for disease progression. More than half of patients with the disease fail to respond to second-line therapy with obeticholic acid.

Seladelpar, and the dual PPAR-alpha and PPAR-delta agonist elafibranor, are an “unequivocal sign of progress, marking the arrival of a new era in which PBC treatment is expected to provide both biochemical benefits and amelioration of symptoms for patients,” David N. Assis, MD, with the Section of Digestive Diseases, Yale School of Medicine, New Haven, Connecticut, wrote in a linked editorial.

In the RESPONSE study, 193 patients with PBC who had an inadequate response to or a history of unacceptable side effects with UDCA were randomly allocated to either oral seladelpar 10 mg daily or placebo for 12 months. The vast majority (93.8%) continued UDCA as standard-of-care background therapy.

The primary endpoint was a biochemical response, which was defined as an alkaline phosphatase (ALP) level < 1.67 times the upper limit of the normal range, with a decrease of 15% or more from baseline, and a normal total bilirubin level at 12 months.

After 12 months, 61.7% of patients taking seladelpar met the primary endpoint vs 20% of patients taking placebo.

In addition, significantly more patients taking seladelpar than placebo had normalization of the ALP level (25% vs 0%). The average decrease in ALP from baseline was 42.4% in the seladelpar group vs 4.3% in the placebo group.

At 12 months, alanine aminotransferase and gamma-glutamyl transferase levels were reduced by 23.5% and 39.1%, respectively, in the seladelpar group compared with 6.5% and 11.4%, respectively, in the placebo group.

“In PBC, we use target endpoints, so the trial was not powered or able to show yet clinical outcomes because the pace of the disease is quite slow. But we believe that the normalization of liver tests and improvement in quality of life will change the disease trajectory over time,” Dr. Hirschfield said.

Significant Reduction in Pruritus

A key secondary endpoint was change in patient-reported pruritus.

At baseline, 38.3% of patients in the seladelpar group and 35.4% of those in the placebo group had moderate to severe pruritus, with a daily numerical rating scale (NRS) score of 4 or higher out of 10.

Among these patients, the reduction from baseline in the pruritus NRS score at month 6 was significantly greater with seladelpar than with placebo (change from baseline, −3.2 points vs −1.7 points). These improvements were sustained through 12 months.

Improvements on the 5-D Itch Scale in both the moderate to severe pruritus population and the overall population also favored seladelpar over placebo for itch relief, which had a positive impact on sleep. Similar results demonstrating reductions in itch and improvements in sleep were observed using the PBC-40 questionnaire.

Adverse events that led to discontinuation of seladelpar or placebo were rare, and there was no between-group difference in the incidence of serious adverse events.

“No worrisome adverse events affecting the muscles were observed, including among patients receiving statins. Certain gastrointestinal events — abdominal pain, abdominal distention, and nausea — were reported more frequently in the seladelpar group than in the placebo group,” the study authors wrote.

The most common adverse events that occurred in ≥ 5% of patients in either group were COVID-19 and pruritus. A greater percentage of patients treated with placebo reported pruritus (15.4% vs 4.7%) as an adverse event — a finding consistent with the positive effect of seladelpar on reducing pruritus.

The researchers noted that 96.4% of patients who participated in the RESPONSE trial chose to enroll in the extension trial to evaluate long-term safety and the side-effect profile of seladelpar.

Potential First-Line Treatment?

In Dr. Assis’ view, the RESPONSE trial, coupled with the recently reported ELATIVE trial of the dual PPAR-alpha and PPAR-delta agonist elafibranor in PBC, “cement the role of PPAR agonists as the preferred second-line treatment in primary biliary cholangitis.”

“The reduction in serum cholestatic markers and the safety profiles of elafibranor and seladelpar offer clear advantages beyond what was previously shown with obeticholic acid. These trials also cement a new treatment goal for primary biliary cholangitis in which a reduction in pruritus should be expected as part of anticholestatic treatment,” Dr. Assis wrote.

“The results of these trials suggest that the use of PPAR agonists in primary biliary cholangitis could improve treatment outcomes while also improving quality of life, which is a highly desirable alignment of clinician and patient goals,” Dr. Assis added.

Looking ahead, Dr. Hirschfield sees a potential role for seladelpar earlier in the course of PBC treatment, he said in an interview.

“Over time, the way we treat patients will not be to wait to fail. It will be treat to target and treat to success,” Dr. Hirschfield said.

Earlier this month, the US Food and Drug Administration accepted CymaBay Therapeutics’ new drug application for seladelpar for the treatment of PBC, including pruritus in adults without cirrhosis or with compensated cirrhosis (Child Pugh A) who fail to respond adequately or cannot tolerate UDCA. Seladelpar for PBC was granted breakthrough designation in October 2023.

The study was funded by CymaBay Therapeutics. Disclosures for authors and editorialist are available at NEJM.org.

A version of this article appeared on Medscape.com.

 

 

 

Potential First-Line Treatment?

In Dr. Assis’ view, the RESPONSE trial, coupled with the recently reported ELATIVE trial of the dual PPAR-alpha and PPAR-delta agonist elafibranor in PBC, “cement the role of PPAR agonists as the preferred second-line treatment in primary biliary cholangitis.”

“The reduction in serum cholestatic markers and the safety profiles of elafibranor and seladelpar offer clear advantages beyond what was previously shown with obeticholic acid. These trials also cement a new treatment goal for primary biliary cholangitis in which a reduction in pruritus should be expected as part of anticholestatic treatment,” Dr. Assis wrote.

“The results of these trials suggest that the use of PPAR agonists in primary biliary cholangitis could improve treatment outcomes while also improving quality of life, which is a highly desirable alignment of clinician and patient goals,” Dr. Assis added.

Looking ahead, Dr. Hirschfield sees a potential role for seladelpar earlier in the course of PBC treatment, he said in an interview.

“Over time, the way we treat patients will not be to wait to fail. It will be treat to target and treat to success,” Dr. Hirschfield said.

Earlier this month, the US Food and Drug Administration accepted CymaBay Therapeutics’ new drug application for seladelpar for the treatment of PBC, including pruritus in adults without cirrhosis or with compensated cirrhosis (Child Pugh A) who fail to respond adequately or cannot tolerate UDCA. Seladelpar for PBC was granted breakthrough designation in October 2023.

The study was funded by CymaBay Therapeutics. Disclosures for authors and editorialist are available at NEJM.org.

A version of this article appeared on Medscape.com.

Seladelpar, an investigational selective agonist of peroxisome proliferator–activated receptor-delta (PPAR-delta), significantly improves liver biomarkers of disease activity and bothersome symptoms of pruritus in adults with primary biliary cholangitis (PBC), according to the full results of the RESPONSE phase 3 study.

“At a dose of 10 mg daily, 1 in 4 patients normalize their alkaline phosphatase level,” chief investigator Gideon Hirschfield, PhD, BM BChir, with the Toronto Center for Liver Disease at Toronto General Hospital, Toronto, Ontario, Canada, said in an interview.

The study data are “genuinely exciting...and support the potential for seladelpar to raise the bar in PBC treatment,” Dr. Hirschfield added in a news release.

Seladelpar is being developed by CymaBay Therapeutics, which funded the study.

The results were published online in The New England Journal of Medicine.

Topline data from the study were presented in November at The Liver Meeting 2023: American Association for the Study of Liver Diseases.

‘Unequivocal’ Progress

Up to 40% of patients with PBC have an inadequate response to first-line therapy with ursodeoxycholic acid (UDCA) and are at a high risk for disease progression. More than half of patients with the disease fail to respond to second-line therapy with obeticholic acid.

Seladelpar, and the dual PPAR-alpha and PPAR-delta agonist elafibranor, are an “unequivocal sign of progress, marking the arrival of a new era in which PBC treatment is expected to provide both biochemical benefits and amelioration of symptoms for patients,” David N. Assis, MD, with the Section of Digestive Diseases, Yale School of Medicine, New Haven, Connecticut, wrote in a linked editorial.

In the RESPONSE study, 193 patients with PBC who had an inadequate response to or a history of unacceptable side effects with UDCA were randomly allocated to either oral seladelpar 10 mg daily or placebo for 12 months. The vast majority (93.8%) continued UDCA as standard-of-care background therapy.

The primary endpoint was a biochemical response, which was defined as an alkaline phosphatase (ALP) level < 1.67 times the upper limit of the normal range, with a decrease of 15% or more from baseline, and a normal total bilirubin level at 12 months.

After 12 months, 61.7% of patients taking seladelpar met the primary endpoint vs 20% of patients taking placebo.

In addition, significantly more patients taking seladelpar than placebo had normalization of the ALP level (25% vs 0%). The average decrease in ALP from baseline was 42.4% in the seladelpar group vs 4.3% in the placebo group.

At 12 months, alanine aminotransferase and gamma-glutamyl transferase levels were reduced by 23.5% and 39.1%, respectively, in the seladelpar group compared with 6.5% and 11.4%, respectively, in the placebo group.

“In PBC, we use target endpoints, so the trial was not powered or able to show yet clinical outcomes because the pace of the disease is quite slow. But we believe that the normalization of liver tests and improvement in quality of life will change the disease trajectory over time,” Dr. Hirschfield said.

Significant Reduction in Pruritus

A key secondary endpoint was change in patient-reported pruritus.

At baseline, 38.3% of patients in the seladelpar group and 35.4% of those in the placebo group had moderate to severe pruritus, with a daily numerical rating scale (NRS) score of 4 or higher out of 10.

Among these patients, the reduction from baseline in the pruritus NRS score at month 6 was significantly greater with seladelpar than with placebo (change from baseline, −3.2 points vs −1.7 points). These improvements were sustained through 12 months.

Improvements on the 5-D Itch Scale in both the moderate to severe pruritus population and the overall population also favored seladelpar over placebo for itch relief, which had a positive impact on sleep. Similar results demonstrating reductions in itch and improvements in sleep were observed using the PBC-40 questionnaire.

Adverse events that led to discontinuation of seladelpar or placebo were rare, and there was no between-group difference in the incidence of serious adverse events.

“No worrisome adverse events affecting the muscles were observed, including among patients receiving statins. Certain gastrointestinal events — abdominal pain, abdominal distention, and nausea — were reported more frequently in the seladelpar group than in the placebo group,” the study authors wrote.

The most common adverse events that occurred in ≥ 5% of patients in either group were COVID-19 and pruritus. A greater percentage of patients treated with placebo reported pruritus (15.4% vs 4.7%) as an adverse event — a finding consistent with the positive effect of seladelpar on reducing pruritus.

The researchers noted that 96.4% of patients who participated in the RESPONSE trial chose to enroll in the extension trial to evaluate long-term safety and the side-effect profile of seladelpar.

Potential First-Line Treatment?

In Dr. Assis’ view, the RESPONSE trial, coupled with the recently reported ELATIVE trial of the dual PPAR-alpha and PPAR-delta agonist elafibranor in PBC, “cement the role of PPAR agonists as the preferred second-line treatment in primary biliary cholangitis.”

“The reduction in serum cholestatic markers and the safety profiles of elafibranor and seladelpar offer clear advantages beyond what was previously shown with obeticholic acid. These trials also cement a new treatment goal for primary biliary cholangitis in which a reduction in pruritus should be expected as part of anticholestatic treatment,” Dr. Assis wrote.

“The results of these trials suggest that the use of PPAR agonists in primary biliary cholangitis could improve treatment outcomes while also improving quality of life, which is a highly desirable alignment of clinician and patient goals,” Dr. Assis added.

Looking ahead, Dr. Hirschfield said in an interview that he sees a potential role for seladelpar earlier in the course of PBC treatment.

“Over time, the way we treat patients will not be to wait to fail. It will be treat to target and treat to success,” Dr. Hirschfield said.

Earlier this month, the US Food and Drug Administration accepted CymaBay Therapeutics’ new drug application for seladelpar for the treatment of PBC, including pruritus in adults without cirrhosis or with compensated cirrhosis (Child-Pugh A) who fail to respond adequately or cannot tolerate UDCA. Seladelpar for PBC was granted breakthrough designation in October 2023.

The study was funded by CymaBay Therapeutics. Disclosures for authors and editorialist are available at NEJM.org.

A version of this article appeared on Medscape.com.

Is Primary Tumor Resection Beneficial in Stage IV CRC?

Article Type
Changed
Mon, 03/11/2024 - 11:37

 

TOPLINE:

Resecting the primary colon tumor before chemotherapy does not improve overall survival compared with chemotherapy alone in patients with metastatic colon cancer not amenable to curative therapy, new data showed.

METHODOLOGY:

  • Chemotherapy is the primary treatment in patients with stage IV colorectal cancer (CRC) and unresectable metastases. It’s unclear whether primary tumor resection before chemotherapy prolongs survival.
  • Among 393 patients with stage IV colon cancer and unresectable metastases enrolled in two randomized trials, 187 were randomly allocated to undergo primary tumor resection and 206 to upfront chemotherapy.
  • The chemotherapy regimen was left up to the treating physician. Overall survival was the primary endpoint. Median follow-up time was 36.7 months.

TAKEAWAY:

  • Median overall survival was 16.7 months with primary tumor resection and 18.6 months with upfront chemotherapy (P = .191).
  • Comparable overall survival between the study groups was further confirmed on multivariate analysis (hazard ratio, 0.944; P = .65) and across all subgroups.
  • Serious adverse events were more common with upfront chemotherapy than surgery (18% vs 10%; P = .027), due mainly to a significantly higher incidence of GI-related events (11% vs 5%; P = .031).
  • Overall, 24% of the primary tumor resection group did not receive any chemotherapy.

IN PRACTICE:

“The results of our study provide compelling data that upfront primary tumor resection in treatment-naive stage IV CRC not amenable for curative treatment does not prolong [overall survival]. A relatively low incidence of serious adverse events in patients with an intact primary tumor together with a considerable number of patients who did not receive any chemotherapy in the primary tumor resection group provides further arguments against resection of the primary tumor in this group of patients,” the authors of the combined analysis concluded.

SOURCE:

The study, with first author Nuh N. Rahbari, MD, University of Ulm, Ulm, Germany, was published online in the Journal of Clinical Oncology.

LIMITATIONS:

Neither study completed its planned patient accrual. Although both trials were nearly identical, differences in the individual study cohorts and trial implementation could have introduced bias. Tumor molecular profiling was not performed.

DISCLOSURES:

The study had no commercial funding. Disclosures for authors are available with the original article.

A version of this article appeared on Medscape.com.


‘Remarkable’ Study Tracks Timeline of Biomarker Changes 20 Years Before Alzheimer’s Disease

Article Type
Changed
Mon, 03/04/2024 - 09:31

A study spanning 20 years helps nail down the timing of biomarker changes that occur in the period between normal cognition and a diagnosis of sporadic Alzheimer’s disease, something that hasn’t previously been extensively investigated in longitudinal studies.

By analyzing cerebrospinal fluid (CSF), as well as cognitive and brain imaging assessments conducted every few years for two decades, researchers were able to plot the course of changing levels of amyloid-beta 42 (Abeta42), phosphorylated tau 181 (p-tau181), and neurofilament light chain (NfL) in adults with Alzheimer’s disease and mark when those levels began to deviate from those of adults without Alzheimer’s disease.

Levels of Abeta42 in CSF and the ratio of Abeta42 to Abeta40 in people who developed Alzheimer’s disease diverged from those of peers who remained cognitively normal at 18 years and 14 years, respectively, before clinical signs of disease appeared.

The level of p-tau181 in CSF increased 11 years before disease onset, and NfL levels, a measure of neurodegeneration, increased 9 years before diagnosis.

These changes were followed by hippocampal atrophy and cognitive decline a few years later.

The results also show “an apparent accelerated change in concentrations of CSF biomarkers followed by a slowing of this change up to the time of diagnosis,” report the authors, led by Jianping Jia, MD, PhD, with the Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, Beijing, China.

The study was published online in The New England Journal of Medicine.

Time Course of Biomarker Changes

Dr. Jia and colleagues conducted a nested case-control study within the China Cognition and Aging Study (COAST). They matched 648 adults who developed Alzheimer’s disease to 648 who remained cognitively normal. CSF, cognitive, and brain imaging assessments were performed every 2-3 years for a median of about 20 years.

Within both groups, men slightly outnumbered women. At baseline, CSF biomarker levels, cognitive scores, and hippocampal volumes were similar in the two groups. Adults who developed Alzheimer’s disease were more likely than their matched controls to be carriers of the APOE epsilon-4 allele (37% vs 20%).

In terms of CSF Abeta42, the level of this biomarker in those who developed Alzheimer’s disease diverged from the level in controls an estimated 18 years before clinical diagnosis. At that time, the level was lower by a mean 59.13 pg/mL in the Alzheimer’s disease group.

A difference in the ratio of CSF Abeta42 to Abeta40 between the two groups appeared an estimated 14 years before the diagnosis of Alzheimer’s disease (difference in mean values, −0.01 pg/mL).

Differences between the two groups in CSF p-tau181 and total tau concentrations were apparent roughly 11 and 10 years before diagnosis, respectively. At those times, the mean differences in p-tau181 and total tau concentrations were 7.10 pg/mL and 87.10 pg/mL, respectively.

In terms of NfL, a difference between the groups was observed 9 years before diagnosis, with its trajectory progressively deviating from the concentrations observed in cognitively normal groups at that time, to a final mean difference in NfL of 228.29 pg/mL. 

Bilateral hippocampal volume decreased with age in both groups. However, the decrease began to differ between the two groups 8 years before Alzheimer’s disease diagnosis, at which time volume was lower by 358.94 mm³ in the Alzheimer’s disease group compared with the control group.

Average Clinical Dementia Rating–Sum of Boxes (CDR-SB) scores in the Alzheimer’s disease group began to worsen compared with the control group at about 6 years before diagnosis.

As Alzheimer’s disease progressed, changes in CSF biomarkers increased before reaching a plateau.

Important Contribution 

In a linked editorial, Richard Mayeux, MD, Department of Neurology, Columbia University, New York, said the importance of this work “cannot be overstated. Knowledge of the timing of these physiological events is critical to provide clinicians with useful starting points for prevention and therapeutic strategies.”

Dr. Mayeux said this “remarkable” longitudinal study spanning 2 decades “not only confirms the hypotheses of previous investigators but extends and validates the sequence of changes” in sporadic Alzheimer’s disease.

Dr. Mayeux acknowledged that one might consider the finding in this study to be limited owing to the inclusion of only individuals of Han Chinese ancestry. 

However, longitudinal studies of plasma biomarkers in individuals of Asian, European, African, and Hispanic ancestry have shown similar trends in biomarker changes preceding the onset of Alzheimer’s disease, he noted. 

“Ethnic variation in these biomarkers is known, but that fact does not lessen the effect of the results reported. It merely highlights that similar studies must continue and must be inclusive of other groups,” Dr. Mayeux concluded.

The study had no commercial funding. Disclosures for authors and editorialist are available at NEJM.org.

A version of this article appeared on Medscape.com.


A study spanning 20 years helps nail down the timing of biomarker changes that occur in the period between normal cognition and a diagnosis of sporadic Alzheimer’s disease, something that hasn’t previously been extensively investigated in longitudinal studies.

By analyzing cerebral spinal fluid (CSF), as well as cognitive and brain imaging assessments conducted every few years for two decades, researchers were able to plot the course of changing levels of amyloid-beta 42 (Abeta42), phosphorylated tau 181 (p-tau181), and neurofilament light chain (NfL) in adults with Alzheimer’s disease and mark when those levels began to deviate from those of adults without Alzheimer’s disease.

Levels of Abeta42 in CSF and the ratio of Abeta42 to Abeta40 in people who developed Alzheimer’s disease diverged from those of peers who remained cognitively normal at 18 years and 14 years, respectively, before clinical signs of disease appeared.

The level of p-tau181 in CSF increased 11 years before disease onset, and NfL levels, a measure of neurodegeneration, increased 9 years before diagnosis.

These changes were followed by hippocampal atrophy and cognitive decline a few years later.

The results also show “an apparent accelerated change in concentrations of CSF biomarkers followed by a slowing of this change up to the time of diagnosis,” report the authors, led by Jianping Jia, MD, PhD, with the Innovation Center for Neurological Disorders, Xuanwu Hospital, Capital Medical University, Beijing, China.

The study was published online in The New England Journal of Medicine.
 

Time Course of Biomarker Changes

Dr. Jia and colleagues conducted a nested case-control study within the China Cognition and Aging Study (COAST). They matched 648 adults who developed Alzheimer’s disease to 648 who remained cognitively normal. CSF, cognitive, and brain imaging assessments were performed every 2-3 years for a median of about 20 years.

Within both groups, men slightly outnumbered women. At baseline, CSF biomarker levels, cognitive scores, and hippocampal volumes were similar in the two groups. Adults who developed Alzheimer’s disease were more likely than their matched controls to be carriers of the APOE epsilon-4 allele (37% vs 20%).

In terms of CSF Abeta42, the level of this biomarker in those who developed Alzheimer’s disease diverged from the level in controls an estimated 18 years before clinical diagnosis. At that time, the level was lower by a mean 59.13 pg/mL in the Alzheimer’s disease group.

A difference in the ratio of CSF Abeta42 to Abeta40 between the two groups appeared an estimated 14 years before the diagnosis of Alzheimer’s disease (difference in mean values, −0.01).

Differences between the two groups in CSF p-tau181 and total tau concentrations were apparent roughly 11 and 10 years before diagnosis, respectively. At those times, the mean differences in p-tau181 and total tau concentrations were 7.10 pg/mL and 87.10 pg/mL, respectively.

In terms of NfL, a difference between the groups was observed 9 years before diagnosis, with the trajectory in the Alzheimer’s disease group progressively deviating from that of the cognitively normal group thereafter, reaching a final mean difference of 228.29 pg/mL.

Bilateral hippocampal volume decreased with age in both groups. However, the decrease began to differ between the two groups 8 years before Alzheimer’s disease diagnosis, at which time volume was lower by 358.94 mm3 in the Alzheimer’s disease group compared with the control group.

Average Clinical Dementia Rating–Sum of Boxes (CDR-SB) scores in the Alzheimer’s disease group began to worsen compared with the control group at about 6 years before diagnosis.

As Alzheimer’s disease progressed, changes in CSF biomarkers increased before reaching a plateau. 
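For readers who want the reported sequence in one place, the divergence times above can be collected and ordered programmatically; a minimal sketch using only the values reported in the article:

```python
# Divergence times (years before clinical diagnosis) at which each measure in
# the Alzheimer's disease group began to differ from controls, per the article.
divergence_years = {
    "CSF Abeta42": 18,
    "CSF Abeta42/Abeta40 ratio": 14,
    "CSF p-tau181": 11,
    "CSF total tau": 10,
    "NfL": 9,
    "hippocampal volume": 8,
    "CDR-SB (cognition)": 6,
}

# Order the measures from earliest-changing to latest-changing.
sequence = sorted(divergence_years, key=divergence_years.get, reverse=True)
print(" -> ".join(sequence))
```

Sorting by divergence time recovers the cascade the authors describe: amyloid markers change first, tau and neurodegeneration markers follow, and structural and cognitive measures change last.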
 
Important Contribution 

In a linked editorial, Richard Mayeux, MD, Department of Neurology, Columbia University, New York, said the importance of this work “cannot be overstated. Knowledge of the timing of these physiological events is critical to provide clinicians with useful starting points for prevention and therapeutic strategies.”

Dr. Mayeux said this “remarkable” longitudinal study spanning 2 decades “not only confirms the hypotheses of previous investigators but extends and validates the sequence of changes” in sporadic Alzheimer’s disease.

Dr. Mayeux acknowledged that one might consider the findings of this study to be limited owing to the inclusion of only individuals of Han Chinese ancestry. 

However, longitudinal studies of plasma biomarkers in individuals of Asian, European, African, and Hispanic ancestry have shown similar trends in biomarker changes preceding the onset of Alzheimer’s disease, he noted. 

“Ethnic variation in these biomarkers is known, but that fact does not lessen the effect of the results reported. It merely highlights that similar studies must continue and must be inclusive of other groups,” Dr. Mayeux concluded.

The study had no commercial funding. Disclosures for authors and editorialist are available at NEJM.org.

A version of this article appeared on Medscape.com.

Article Source

FROM THE NEW ENGLAND JOURNAL OF MEDICINE


Cognitive Deficits After Most Severe COVID Cases Associated With 9-Point IQ Drop

Article Type
Changed
Mon, 03/04/2024 - 18:15

 

A new study from the United Kingdom provides greater clarity on how SARS-CoV-2 infection can affect cognition and memory, including novel data on how long brain fog may last after the illness resolves and which cognitive functions are most vulnerable. 

In a large community sample, researchers found that, on average, people who had recovered from COVID-19 showed small cognitive deficits, equivalent to a 3-point loss in IQ, that persisted for a year or more after the acute illness, compared with peers who never had COVID-19.

However, people who had more severe cases, requiring treatment in a hospital intensive care unit, had cognitive deficits equivalent to a 9-point drop in IQ.

“People with ongoing persistent symptoms, indicative of long COVID, had larger cognitive deficits than people whose symptoms had resolved,” first author Adam Hampshire, PhD, with Imperial College London, told this news organization. 

The largest deficits among cognitive tasks were in memory, reasoning, and executive function, he added.

“That is, people who had had COVID-19 were both slower and less accurate when performing tasks that measure those abilities,” Dr. Hampshire said. “The group with the largest cognitive deficits were patients who had been in intensive care for COVID-19.”

The study was published online in The New England Journal of Medicine.

Lingering Brain Fog

Cognitive symptoms after SARS-CoV-2 infection are well recognized, but whether objectively measurable cognitive deficits exist and how long they persist remains unclear. 

To investigate, researchers invited 800,000 adults from the REACT study of SARS-CoV-2 transmission in England to complete an online cognitive assessment covering eight domains.

Altogether, 141,583 participants started the cognitive battery by completing at least one task, and 112,964 completed all eight tasks.

The researchers estimated global cognitive scores among participants who had been previously infected with SARS-CoV-2 with symptoms that persisted for at least 12 weeks, whether or not resolved, and among uninfected participants. 

Compared with uninfected adults, those who had COVID-19 that resolved had a small cognitive deficit, corresponding to a 3-point loss in IQ, the researchers found. 

Adults with unresolved persistent COVID-19 symptoms had the equivalent of a 6-point loss in IQ, and those who had been admitted to the intensive care unit had the equivalent of a 9-point loss in IQ, in line with previous findings of cognitive deficits in patients hospitalized in a critical care unit, the researchers report. 
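IQ-point equivalents like those quoted here are conventionally derived from deficits measured in standard-deviation (SD) units: on the standard IQ scale (mean 100, SD 15), a deficit of d SD corresponds to 15 × d points. A minimal sketch of that conversion, using illustrative deficit values rather than the study’s exact estimates:

```python
IQ_SD = 15  # conventional IQ scale: mean 100, standard deviation 15

def sd_to_iq_points(deficit_sd: float) -> float:
    """Convert a cognitive deficit in SD units to its IQ-point equivalent."""
    return deficit_sd * IQ_SD

# Illustrative deficits only (assumed values, not the study's estimates):
groups = [("resolved illness", 0.2),
          ("persistent symptoms", 0.4),
          ("treated in intensive care", 0.6)]
for label, d in groups:
    print(f"{label}: ~{sd_to_iq_points(d):.0f}-point IQ equivalent")
```

A 0.2-SD deficit thus maps to roughly 3 IQ points, and a 0.6-SD deficit to roughly 9, matching the scale of the figures reported.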

Larger cognitive deficits were evident in adults infected early in the pandemic with the original SARS-CoV-2 virus or the B.1.1.7 variant, whereas peers infected later in the pandemic (eg, in the Omicron period) showed smaller cognitive deficits. This finding is in line with other studies suggesting that COVID-19–associated cognitive deficits attenuated as the pandemic progressed, the researchers noted. 

They also found that people who had COVID-19 after receiving two or more vaccinations showed better cognitive performance compared with those who had not been vaccinated. 

The memory, reasoning, and executive function tasks were among the most sensitive to COVID-19–related cognitive differences, and performance on these tasks differed according to illness duration and hospitalization. 

Dr. Hampshire said that more research is needed to determine whether the cognitive deficits resolve with time. 

“The implications of longer-term persistence of cognitive deficits and their clinical relevance remain unclear and warrant ongoing surveillance,” he said. 


Larger Cognitive Deficits Likely?

These results are “a concern and the broader implications require evaluation,” wrote Ziyad Al-Aly, MD, with Washington University School of Medicine in St. Louis, and Clifford Rosen, MD, with Tufts University School of Medicine in Boston, in an accompanying editorial.

In their view, several outstanding questions remain, including what the potential functional implications of a 3-point loss in IQ may be and whether COVID-19–related cognitive deficits predispose to a higher risk for dementia later in life. 

“A deeper understanding of the biology of cognitive dysfunction after SARS-CoV-2 infection and how best to prevent and treat it are critical for addressing the needs of affected persons and preserving the cognitive health of populations,” Drs. Al-Aly and Rosen concluded. 

Commenting on the study for this news organization, Jacqueline Becker, PhD, clinical neuropsychologist and assistant professor of medicine, Icahn School of Medicine at Mount Sinai, New York City, noted that “one important caveat” is that the study used an online assessment tool for cognitive function and therefore the findings should be taken with “a grain of salt.”

“That said, this is a large sample, and the findings are generally consistent with what we’ve seen in terms of cognitive deficits post-COVID,” Dr. Becker said. 

It’s likely that this study “underestimates” the degree of cognitive deficits that would be seen on validated neuropsychological tests, she added.

In a recent study, Dr. Becker and her colleagues investigated rates of cognitive impairment in 740 COVID-19 patients who recovered and were treated in outpatient, emergency department, or inpatient hospital settings. 

Using validated neuropsychological measures, they found a relatively high frequency of cognitive impairment several months after patients contracted COVID-19. Impairments in executive functioning, processing speed, category fluency, memory encoding, and recall were predominant among hospitalized patients. 

Dr. Becker noted that in her experience, cognition typically will improve in some patients 12-18 months post-COVID. 

Support for the study was provided by the National Institute for Health and Care Research and UK Research and Innovation and by the Department of Health and Social Care in England and the Huo Family Foundation. Disclosures for authors and editorial writers are available at NEJM.org. Dr. Becker has no relevant disclosures. 

A version of this article appeared on Medscape.com.


Article Source

FROM THE NEW ENGLAND JOURNAL OF MEDICINE


Moderate to Severe TBI Linked to Brain Cancer Risk

Article Type
Changed
Tue, 02/27/2024 - 16:22

Moderate, severe, and penetrating traumatic brain injury (TBI) is associated with an elevated risk of developing brain cancer, new research suggested. However, mild TBI appears to confer no increased risk.

In a large cohort of post-9/11 US veterans, those who suffered moderate/severe TBI had a nearly twofold increased risk for a subsequent brain cancer diagnosis, while those with penetrating TBI had a greater than threefold increased risk.

“While the absolute number of brain cancer diagnoses was small, these diagnoses are associated with profoundly poor outcomes. Further research of this rare but devastating condition is needed to better identify those at risk and develop screening protocols,” wrote investigators led by Ian Stewart, MD, with the Uniformed Services University of Health Sciences, Bethesda, Maryland.

The study was published online on February 15 in JAMA Network Open.
 

Common War Wound

TBI is one of the most common battlefield wounds among veterans of the Iraq and Afghanistan wars. But evidence to date on the potential association of TBI with the subsequent risk for brain cancer is conflicting, the authors noted.

To investigate further, they reviewed the records of nearly 2 million mostly male US veterans of the Iraq and Afghanistan wars. A total of 449,880 people experienced TBI, which was mild in 385,848 cases, moderate/severe in 46,859 cases, and penetrating in 17,173 cases.

During a median follow-up of 7.2 years, brain cancer occurred in 318 veterans without TBI (0.02%), 80 with mild TBI (0.02%), 17 with moderate/severe TBI (0.04%), and 10 or fewer with penetrating TBI (0.06% or less).

There was a stepwise increase in brain cancer incidence with worse TBI severity. Crude incidence rates per 100,000 person-years were 3.06 for no TBI, 2.85 for mild TBI, 4.88 for moderate/severe TBI, and 10.34 for penetrating TBI.
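As a back-of-envelope check on these figures, a crude incidence rate is simply events divided by person-years of follow-up, scaled to 100,000. A small sketch (the person-years value is implied by the reported count and rate, not reported directly in the article):

```python
def crude_rate_per_100k(events: int, person_years: float) -> float:
    """Crude incidence rate per 100,000 person-years."""
    return events / person_years * 100_000

# The no-TBI group had 318 brain cancers at 3.06 per 100,000 person-years,
# implying roughly 318 / 3.06 * 100,000 ≈ 10.4 million person-years of follow-up.
implied_py = 318 / 3.06 * 100_000
print(f"{crude_rate_per_100k(318, implied_py):.2f} per 100,000 person-years")
```

The same formula explains how the penetrating-TBI group, with 10 or fewer events, can still show the highest rate: its denominator of person-years is far smaller.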

In the fully adjusted model, moderate/severe TBI showed a near-doubling of brain cancer risk vs no TBI (adjusted hazard ratio [aHR], 1.90; 95% CI, 1.16-3.12), while penetrating TBI was associated with a greater than tripling of risk (aHR, 3.33; 95% CI, 1.71-6.49). There was no significantly increased risk after mild TBI.

There are plausible biological mechanisms linking TBI to brain cancer, the authors noted, including alterations in metabolism, inflammation, astrocyte proliferation, and stem cell migration and differentiation.

They caution that with few female veterans and a predominantly young cohort, the findings may not extend to the general population.
 

Meaningful New Data 

In an accompanying editorial, Elie Massaad, MD, MSc, and Ali Kiapour, PhD, MMSc, Massachusetts General Hospital, Boston, noted that federal data show glioblastoma, the most aggressive malignant brain tumor, is the third leading cause of cancer-related death among active duty personnel.

“Post-9/11 veterans deployed to Iraq, Afghanistan, and elsewhere face a 26% higher glioblastoma rate vs the general public, with an average age of onset decades earlier than in broader populations,” they wrote.

Overall, they noted this new research provides “meaningful data clarifying associations between combat-related TBI severity and subsequent brain cancer risk among post-9/11 veterans.

“Elucidating potential connections between battlefield trauma and longer-term health outcomes is imperative to inform prevention and care approaches for those who have served,” they added.

This study was supported by the Assistant Secretary of Defense for Health Affairs, endorsed by the Department of Defense, through the Psychological Health/Traumatic Brain Injury Research Program Long-Term Impact of Military Relevant Brain Injury Consortium. The authors and editorialists declared no relevant conflicts of interest.
 

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Moderate, severe, and penetrating traumatic brain injury (TBI) is associated with an elevated risk of developing brain cancer, new research suggested. However, mild TBI appears to confer no increased risk.

In a large cohort of post-9/11 US veterans, those who suffered moderate/severe TBI had a nearly twofold increased risk for a subsequent brain cancer diagnosis, while those with penetrating TBI had a greater than threefold increased risk.

“While the absolute number of brain cancer diagnoses was small, these diagnoses are associated with profoundly poor outcomes. Further research of this rare but devastating condition is needed to better identify those at risk and develop screening protocols,” wrote investigators led by Ian Stewart, MD, with the Uniformed Services University of Health Sciences, Bethesda, Maryland.

The study was published online on February 15 in JAMA Network Open.
 

Common War Wound

TBI is one of the most common battlefield wounds among veterans of the Iraq and Afghanistan wars. But evidence to date on the potential association of TBI with the subsequent risk for brain cancer is conflicting, the authors noted.

To investigate further, they reviewed the records of nearly 2 million mostly male US veterans of the Iraq and Afghanistan wars. A total of 449,880 people experienced TBI, which was mild in 385,848 cases, moderate/severe in 46,859 cases, and penetrating in 17,173 cases.

During a median follow-up of 7.2 years, brain cancer occurred in 318 veterans without TBI (0.02%), 80 with mild TBI (0.02%), 17 with moderate/severe TBI (0.04%), and 10 or fewer with penetrating TBI (0.06% or less).

There was a stepwise increase in brain cancer incidence with worse TBI severity. Crude incidence rates per 100,000 person-years were 3.06 for no TBI, 2.85 for mild TBI, 4.88 for moderate/severe TBI, and 10.34 for penetrating TBI.

In the fully adjusted model, moderate/severe TBI showed a near-doubling of brain cancer risk vs no TBI (adjusted hazard ratio [aHR], 1.90; 95% CI, 1.16-3.12), while penetrating TBI was associated with a greater than tripling of risk (aHR, 3.33; 95% CI, 1.71-6.49). There was no significantly increased risk after mild TBI.

There are plausible biological mechanisms linking TBI to brain cancer, the authors noted, including alterations in metabolism, inflammation, astrocyte proliferation, and stem cell migration and differentiation.

They caution that with few female veterans and a predominantly young cohort, the findings may not extend to the general population.
 

Meaningful New Data 

In an accompanying editorial, Elie Massaad, MD, MSc, and Ali Kiapour, PhD, MMSc, Massachusetts General Hospital, Boston, noted that federal data show glioblastoma, the most aggressive malignant brain tumor, is the third leading cause of cancer-related death among active duty personnel.

“Post-9/11 veterans deployed to Iraq, Afghanistan, and elsewhere face a 26% higher glioblastoma rate vs the general public, with an average age of onset decades earlier than in broader populations,” they wrote.

Overall, they noted this new research provides “meaningful data clarifying associations between combat-related TBI severity and subsequent brain cancer risk among post-9/11 veterans.

“Elucidating potential connections between battlefield trauma and longer-term health outcomes is imperative to inform prevention and care approaches for those who have served,” they added.

This study was supported by the Assistant Secretary of Defense for Health Affairs endorsed by the Department of Defense through the Psychological Health/Traumatic Brain Injury Research Program Long-Term Impact of Military Relevant Brain Injury Consortium. The authors and editorialists had declared no relevant conflicts of interest.
 

A version of this article appeared on Medscape.com.

Moderate, severe, and penetrating traumatic brain injury (TBI) is associated with an elevated risk of developing brain cancer, new research suggested. However, mild TBI appears to confer no increased risk.

In a large cohort of post-9/11 US veterans, those who suffered moderate/severe TBI had a nearly twofold increased risk for a subsequent brain cancer diagnosis, while those with penetrating TBI had a greater than threefold increased risk.

“While the absolute number of brain cancer diagnoses was small, these diagnoses are associated with profoundly poor outcomes. Further research of this rare but devastating condition is needed to better identify those at risk and develop screening protocols,” wrote investigators led by Ian Stewart, MD, with the Uniformed Services University of Health Sciences, Bethesda, Maryland.

The study was published online on February 15 in JAMA Network Open.
 

Common War Wound

TBI is one of the most common battlefield wounds among veterans of the Iraq and Afghanistan wars. But evidence to date on the potential association of TBI with the subsequent risk for brain cancer is conflicting, the authors noted.

To investigate further, they reviewed the records of nearly 2 million mostly male US veterans of the Iraq and Afghanistan wars. A total of 449,880 people experienced TBI, which was mild in 385,848 cases, moderate/severe in 46,859 cases, and penetrating in 17,173 cases.

During a median follow-up of 7.2 years, brain cancer occurred in 318 veterans without TBI (0.02%), 80 with mild TBI (0.02%), 17 with moderate/severe TBI (0.04%), and 10 or fewer with penetrating TBI (0.06% or less).

There was a stepwise increase in brain cancer incidence with worse TBI severity. Crude incidence rates per 100,000 person-years were 3.06 for no TBI, 2.85 for mild TBI, 4.88 for moderate/severe TBI, and 10.34 for penetrating TBI.

In the fully adjusted model, moderate/severe TBI showed a near-doubling of brain cancer risk vs no TBI (adjusted hazard ratio [aHR], 1.90; 95% CI, 1.16-3.12), while penetrating TBI was associated with a greater than tripling of risk (aHR, 3.33; 95% CI, 1.71-6.49). There was no significantly increased risk after mild TBI.

There are plausible biological mechanisms linking TBI to brain cancer, the authors noted, including alterations in metabolism, inflammation, astrocyte proliferation, and stem cell migration and differentiation.

They cautioned that, with few female veterans and a predominantly young cohort, the findings may not extend to the general population.
 

Meaningful New Data 

In an accompanying editorial, Elie Massaad, MD, MSc, and Ali Kiapour, PhD, MMSc, Massachusetts General Hospital, Boston, noted that federal data show glioblastoma, the most aggressive malignant brain tumor, is the third leading cause of cancer-related death among active duty personnel.

“Post-9/11 veterans deployed to Iraq, Afghanistan, and elsewhere face a 26% higher glioblastoma rate vs the general public, with an average age of onset decades earlier than in broader populations,” they wrote.

Overall, they noted this new research provides “meaningful data clarifying associations between combat-related TBI severity and subsequent brain cancer risk among post-9/11 veterans.

“Elucidating potential connections between battlefield trauma and longer-term health outcomes is imperative to inform prevention and care approaches for those who have served,” they added.

This study was supported by the Assistant Secretary of Defense for Health Affairs, endorsed by the Department of Defense, through the Psychological Health/Traumatic Brain Injury Research Program Long-Term Impact of Military Relevant Brain Injury Consortium. The authors and editorialists declared no relevant conflicts of interest.
 

A version of this article appeared on Medscape.com.


Does Bariatric Surgery Increase or Decrease Cancer Risk? It Depends.

Article Type
Changed
Fri, 03/29/2024 - 10:22

Bariatric surgery appears to decrease the risk for some cancers, but it may increase the risk for others, particularly colorectal cancer (CRC), according to a synthesis of current evidence.

“There has been a recent burst of studies examining the association between bariatric surgery and the longitudinal risks of developing cancer,” corresponding author Zhi Ven Fong, MD, MPH, DrPH, surgical oncologist, Mayo Clinic Arizona, Phoenix, said in an interview. “However, there has not been a rigorous and critical analysis of the data published to date.”

In evaluating research showing an association between bariatric surgery and longitudinal cancer risk, the investigators found that the quality of the studies and their findings are “heterogeneous and might be susceptible to bias,” Dr. Fong said.

Bariatric surgery appears to have the strongest and most consistent association with the reduction of breast, ovarian, and endometrial cancer risk, first author Pei-Wen Lim, MD, MS, bariatric surgeon at Mayo Clinic Arizona, Phoenix, told this news organization. “However, there have been concerning signals from preclinical and epidemiological studies that bariatric surgery may be associated with a higher risk of developing colorectal cancers,” she added.

The authors cautioned against certain changes in clinical management.

“First, cancer surveillance frequency should not be altered after bariatric surgery because of any assumed reduction in longitudinal cancer risk, and surveillance strategy should mirror that of an average-risk individual,” they wrote. “Secondly, the indications for bariatric surgery should not be expanded for the purpose of cancer-risk mitigation.”

The review was published online in JAMA Surgery.
 

Protection Against Hormone-Related Cancers

The authors pointed to several studies that appear to support the association between bariatric surgery and decreased risk for hormone-related cancers.

Among them is an observational study of 6781 patients in Canada that showed a significant reduction in breast cancer risk at a median follow-up of 5 years in those who had bariatric surgery vs those who did not (P = .01).

The largest study to date on risk for hormone-related cancer after bariatric surgery was conducted using New York State data for 302,883 women.

It showed a lower rate of breast, endometrial, and ovarian cancers after bariatric surgery (hazard ratio [HR], 0.78; P < .001), with Roux-en-Y gastric bypass conferring the greatest benefit compared with laparoscopic sleeve gastrectomy (HR, 0.66; P = .006) and laparoscopic adjustable gastric banding (HR, 0.83; P = .006).

Beyond the shared mechanisms explaining obesity and cancer risk, a proposed explanation for the strong, consistent association between bariatric surgery and hormone-sensitive cancers is the role obesity-related changes in estrogen stimulation play in development of such cancers, the authors noted.
 

Association With GI Cancers

The association between bariatric surgery and development of esophageal, gastric, liver, and pancreas cancers is less clear. The data are heterogeneous, with studies showing either no association or decreased longitudinal incidence, the authors reported.

The data are also mixed when it comes to CRC. Epidemiological studies have demonstrated decreased longitudinal incidence of colon and rectal cancer after bariatric surgery; however, two studies have suggested an increased CRC risk after bariatric surgery, the authors noted.

A 15-year study from England that matched 8794 patients with obesity who underwent bariatric surgery with 8794 patients with obesity who did not have the surgery showed that gastric bypass (but not gastric banding or sleeve gastrectomy) was associated with a greater than twofold increased risk of developing colon and rectal cancer (odds ratio, 2.63).

These findings were corroborated in a Swedish cohort study with more than 10 years of follow-up data.

One potential explanation for the heterogeneous findings is that “present studies do not discriminate the sub-types of colon and rectal cancer, with bariatric surgery possibly increasing the incidence of colitis-associated cancers but not hereditary cancers,” the authors wrote.

“The mechanism by which gastric bypass may increase the risk of colorectal cancer is through changes in the gut’s microbiome. These changes in gut flora may trump the protective effect of weight loss on the development of colorectal cancers,” Dr. Fong said.

Prospective studies are necessary to better delineate CRC risk after bariatric surgery, the authors wrote.
 

 

 

Benefits Outweigh Risk

“Ultimately, it has been proven that bariatric surgery saves lives by improving the metabolic profile of patients with obesity through reduction in cardiovascular risk factors such as hypertension, diabetes, and nonalcoholic fatty liver disease,” Dr. Lim said.

“If patients qualify for bariatric surgery on the basis of their BMI or comorbidities, they should pursue it for its metabolic benefits, but perhaps consider timely or closer-interval screening colonoscopies to monitor for potential colorectal cancer development,” Dr. Lim added.

When asked to comment on the review, Marina Kurian, MD, president, American Society for Metabolic and Bariatric Surgery, also pointed to the advantages of bariatric surgery in reducing major adverse cardiovascular events and improving hypertension, hyperlipidemia, and diabetes.

Bariatric surgery reduces many types of cancers, although the data specific to CRC risk with bariatric surgery are mixed, she added.

“The jury is still out,” said Dr. Kurian, clinical professor of surgery at NYU Langone Health in New York, who was not involved in the review. “There are papers and meta-analyses that show benefit even in colorectal cancer, but then there are a couple of papers out there that suggest a risk that seems to be specific to men.

“It could just be a numbers game, where we may not have enough data. We need more granular data that will help address these nuances and really determine what is the actual risk,” Dr. Kurian said. “But overall, for cancer, bariatric surgery is a win.”

This research had no specific funding. Dr. Fong and Dr. Lim had no relevant disclosures. Dr. Kurian disclosed relationships with Allergan, Allurion, CineMed, CSATS, Ezisurg Medical, Hernon, Johnson & Johnson, Medtronic, Novo, Stryker, and Vivus.
 

A version of this article appeared on Medscape.com.


‘Eating as Treatment’ Linked to Improved Survival in HNC

Article Type
Changed
Mon, 02/26/2024 - 13:53

 

TOPLINE:

“Eating as Treatment” (EAT), a psychological intervention led by oncology dietitians, significantly improves nutritional status and survival in patients with head and neck cancer receiving radiotherapy, new research showed.

METHODOLOGY:

  • Malnutrition affects nearly 80% of patients with head and neck cancer and is associated with a higher burden of disease, poorer treatment outcomes, and increased mortality.
  • With the EAT intervention, trained oncology dietitians provide a combination of motivational interviewing and cognitive behavior therapy strategies to improve nutritional behaviors in patients with head and neck cancer receiving radiotherapy at six different Australian hospitals.
  • The initial EAT trial — which randomly allocated 307 patients with head and neck cancer undergoing radiotherapy between 2013 and 2016 to the EAT intervention or standard diet advice — demonstrated improved nutritional status and quality of life in patients assigned to the EAT intervention.
  • The researchers are now reporting an exploratory analysis of 5-year mortality among trial participants.

TAKEAWAY:

  • There were 64 deaths over 5 years — 36 (24%) in the control group and 28 (18%) in the intervention group.
  • Adjusted logistic regression analyses showed statistically significantly reduced odds of dying in the 5 years following radiotherapy in the intervention group (odds ratio, 0.33; P = .04).
  • With the EAT intervention, there was a 17% absolute risk reduction (P = .03) and a 55% relative risk reduction (P = .04) in 5-year mortality, corresponding to a number needed to treat of 6 to avoid one death.
  • Kaplan-Meier analysis showed an unadjusted 5-year actuarial survival rate of 76% (0.68-0.82) in the control group and 82% (0.75-0.87) in the intervention group (P = .22).

IN PRACTICE:

“Our findings provide evidence that a behavioral intervention delivered during [radiotherapy] may substantially reduce mortality rates for patients with [head and neck cancer],” researchers wrote. “Although the mechanism of this reduction is unknown, the randomized study design and the results of this trial strengthen the association between improved nutritional status and oral intake during radiotherapy, and survival benefit.”

SOURCE:

The study, with first author Ben Britton, PhD, from the University of Newcastle, Callaghan, Australia, was published online in the International Journal of Radiation Oncology, Biology, Physics.

LIMITATIONS:

The study relied on the accuracy of the National Death Index, and it was unknown if the recorded deaths were due to cancer or another cause.

DISCLOSURES:

The EAT trial was funded by the Australian National Health and Medical Research Council. The authors declared no conflicts of interest.

A version of this article appeared on Medscape.com.


Lung Cancer Radiation May Up AF Risk

Article Type
Changed
Fri, 02/23/2024 - 13:35

 

TOPLINE:

Radiation exposure to the pulmonary veins during radiotherapy (RT) for non–small cell lung cancer (NSCLC) raises the risk for atrial fibrillation (AF), according to new findings.

METHODOLOGY:

  • Arrhythmia — with AF being the most common type — affects roughly 11% of patients following lung cancer RT.
  • Given RT’s recognized impact on cardiac tissues over time, researchers hypothesized that the dosage affecting pulmonary veins might contribute to the observed increased rates of AF after RT.
  • To investigate, researchers looked back at 420 patients with NSCLC (52% women, median age 70) undergoing definitive RT (± chemo) with modern planning techniques at 55 Gy in 20 once-daily fractions over 4 weeks.
  • Most patients underwent treatment planning using volumetric modulated arc therapy (50%) or static gantry intensity-modulated RT (20%). Chemotherapy was administered in a minority of cases (33%).
  • Pulmonary veins were contoured on planning CT scans, and dose metrics were calculated. The association between pulmonary veins dose and incidence of new AF was evaluated, with AF verified by a cardiologist.

TAKEAWAY:

  • Out of the entire cohort, 26 patients (6%) developed AF a median of 13 months after treatment. All cases of AF were grade 3 except for two grade 4 events.
  • Radiation dose to the left and right pulmonary veins was significantly associated with incident AF. Dose volumes most strongly associated with AF were ≥ 55 Gy (V55) on the left and ≥ 10 Gy (V10) on the right.
  • The risk for AF increased by 2% per percentage point increase in the left pulmonary veins V55 and by 1% per percentage point increase in the right pulmonary veins V10. The associations remained statistically significant after accounting for cardiovascular factors and risk for death.
  • The area under the curve for prediction of AF events was 0.64 (P = .02) for the left pulmonary veins V55 and 0.61 (P = .03) for the right pulmonary veins V10. The optimal thresholds for predicting AF were 2% and 54%, respectively.
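To make the dose-volume metrics above concrete: Vx denotes the percentage of a structure's volume receiving at least x Gy, so the left pulmonary vein V55 is the fraction of that vein's voxels at or above 55 Gy. The sketch below, with entirely hypothetical voxel doses, shows how V55 could be computed and how the reported "2% increased risk per percentage point of V55" might be applied as a multiplicative relative-risk scaling; the multiplicative interpretation is an assumption for illustration, not the study's published model.

```python
import numpy as np

def v_x(dose_gy, threshold_gy):
    """Percentage of a structure's voxels receiving at least threshold_gy."""
    dose = np.asarray(dose_gy, dtype=float)
    return 100.0 * np.mean(dose >= threshold_gy)

def relative_af_risk(v55_percent, per_point_increase=0.02):
    """Assumed multiplicative scaling: ~2% higher AF risk per
    percentage-point increase in left pulmonary vein V55."""
    return (1 + per_point_increase) ** v55_percent

# Hypothetical voxel doses (Gy) for a left pulmonary vein contour
left_pv_dose = [10, 20, 56, 58, 60, 30, 5, 57, 12, 55]

v55_left = v_x(left_pv_dose, 55)          # 5 of 10 voxels >= 55 Gy -> 50.0
print(v55_left)                            # 50.0
print(round(relative_af_risk(v55_left), 2))
```

With these made-up numbers, half the contoured voxels exceed 55 Gy (V55 = 50%), which under the assumed 2%-per-point scaling would correspond to roughly a 2.7-fold relative risk, illustrating why the authors suggest actively sparing these structures during planning.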

IN PRACTICE:

“The implications of these data are that actively sparing these structures could reduce the incidence of [AF], and where this is not possible, patients identified as being at high risk of AF could undergo active screening during follow-up,” the researchers said, adding that further validation of these findings should take place before implementation.

SOURCE:

The study, with first author Gerard M. Walls, MB, MRCP, of the Patrick G Johnston Centre for Cancer Research, Queen’s University Belfast, Belfast, Northern Ireland, was published online on January 4 in Radiotherapy and Oncology.

LIMITATIONS:

This was a single-center, retrospective study with a small number of AF events. The study may have underestimated the relationship between pulmonary vein irradiation and new AF events. The findings needed validation in larger datasets.

DISCLOSURES:

The study had no commercial funding. The authors declared no conflicts of interest.

A version of this article appeared on Medscape.com.
