End of the road for transcranial brain stimulation as an adjunct in major depression?

Transcranial direct current stimulation (tDCS) provides no additional benefit when added to selective serotonin reuptake inhibitor therapy for adults with major depressive disorder (MDD), results of a triple-blind, randomized, sham-controlled trial show.

The study showed no difference in mean improvement in Montgomery-Åsberg Depression Rating Scale (MADRS) scores at 6 weeks between active and sham tDCS.

“Our trial does not support the efficacy of tDCS as an additional treatment to SSRIs in adults with MDD,” the investigators, led by Frank Padberg, MD, department of psychiatry and psychotherapy, Ludwig-Maximilians-University Munich, write.

The study was published online in The Lancet.

Rigorous trial

Because it neurophysiologically modulates prefrontal cortex connectivity, tDCS has been proposed as a potential treatment for MDD.

Yet evidence for its efficacy has been inconsistent, and there is a scarcity of multicenter trial data, the researchers note.

The DepressionDC trial assessed the efficacy of tDCS in combination with SSRIs in 160 adults with MDD. Participants had a score of at least 15 on the Hamilton Depression Rating Scale (21-item version); their conditions had not responded to at least one antidepressant trial in their current depressive episode; and they had received treatment with an SSRI at a stable dose for at least 4 weeks. The SSRI was continued at the same dose during stimulation.

Eighty-three patients were allocated to undergo 30 minutes of 2-mA bifrontal tDCS every weekday for 4 weeks, followed by two tDCS sessions per week for 2 weeks; 77 patients were assigned to receive matching sham stimulation. Randomization was stratified by baseline MADRS score (less than 31 vs. 31 or higher).

In intention-to-treat analysis, there was no between-group difference in mean improvement on the MADRS at week 6 (–8.2 with active and –8.0 with sham tDCS; difference, 0.3; 95% confidence interval, –2.4 to 2.9).

There were also no significant differences on any secondary outcome, including response and remission rates, patient-reported depression, and functioning, or at the 18-week and 30-week follow-up visits.

There were significantly more mild adverse events reported in the active tDCS group than in the sham group (60% vs. 43%; P = .028). The most common adverse events were headaches, local skin reactions, and sleep-related problems.

Still reason for optimism

These findings call into question the efficacy of tDCS as add-on therapy to SSRI treatment for individuals with MDD and highlight the need for supportive evidence from multicenter studies, the investigators write.

Yet Dr. Padberg said it’s not the end of the road for tDCS for depression.

tDCS exerts a “rather subtle and nonfocal effect on neuronal activity. Thus, tDCS may need to be combined with specific behavioral or cognitive interventions which functionally involve the brain region where tDCS is targeted at,” he said.

Another “promising avenue” is personalization of tDCS by “individual MRI-based computational modeling of tDCS-induced electric fields,” he noted.

The coauthors of an accompanying commentary note that the DepressionDC trial was “carefully designed” and “well executed.”

And while the results did not show the superiority of active tDCS over sham stimulation as an additional treatment to SSRI therapy, “clinicians and researchers should not disregard this treatment for people with MDD,” write Daphne Voineskos, MD, PhD, and Daniel Blumberger, MD, with the University of Toronto.

“Specifically, further exploration of placebo response in less heterogeneous MDD samples and the evaluation of tDCS as an earlier treatment option for people with MDD are important areas of future research,” they suggest.

“Moreover, elucidating the effects of interindividual anatomical variability on electrical current distribution might lead to tDCS protocols that individualize treatment to optimize therapeutic effects as opposed to a so-called one-size-fits-all approach.

“Overall, there is reason for optimism about the potential to individualize tDCS and deliver it outside of the clinical setting,” Dr. Voineskos and Dr. Blumberger conclude.

Funding for the study was provided by the German Federal Ministry of Education and Research. Several authors disclosed relationships with the pharmaceutical industry. A complete list of disclosures of authors and comment writers is available with the original article.

A version of this article first appeared on Medscape.com.

Schizophrenia up to three times more common than previously thought

Roughly 3.7 million U.S. adults have a history of schizophrenia spectrum disorders – a figure two to three times higher than previously assumed – according to the first study to estimate the national prevalence of these disorders.

This finding is “especially important,” given that people with schizophrenia spectrum disorders experience “high levels of disability that present significant challenges in all aspects of their life,” principal investigator Heather Ringeisen, PhD, with RTI International, a nonprofit research institute based in Research Triangle Park, N.C., said in a statement.

The results “highlight the need to improve systems of care and access to treatment for people with schizophrenia and other mental health disorders,” added co–principal investigator Mark J. Edlund, MD, PhD, also with RTI.

The study also found that prevalence rates of many other nonpsychotic disorders were generally within an expected range in light of findings from prior research – with three exceptions.

Rates of major depressive disorder (MDD), generalized anxiety disorder (GAD), and obsessive-compulsive disorder (OCD) were higher than reported in past nationally representative samples.

The new data come from the Mental and Substance Use Disorder Prevalence Study (MDPS), a pilot program funded by the Substance Abuse and Mental Health Services Administration (SAMHSA).

A nationally representative sample of 5,679 adults aged 18-65 residing in U.S. households, prisons, homeless shelters, and state psychiatric hospitals were interviewed, virtually or in person, between October 2020 and October 2022.

The research team used a population-based version of the Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders, fifth edition (SCID-5), for mental health and substance use disorder diagnostic assessment.

Among the key findings in the report:

  • Nearly 2% of adults (about 3.7 million) had a lifetime history of schizophrenia spectrum disorders, which include schizophrenia, schizoaffective disorder, and schizophreniform disorder.
  • Roughly 2.5 million adults (1.2%) met diagnostic criteria for a schizophrenia spectrum disorder in the past year.
  • The two most common mental disorders among adults were MDD (15.5%, or about 31.4 million) and GAD (10.0%, or about 20.2 million).
  • Approximately 8.2 million adults (4.1%) had past-year posttraumatic stress disorder, about 5.0 million (2.5%) had OCD, and roughly 3.1 million (1.5%) had bipolar I disorder.
  • Alcohol use disorder (AUD) was the most common substance use disorder among adults aged 18-65; roughly 13.4 million adults (6.7%) met criteria for AUD in the past year.
  • About 7.7 million adults (3.8%) had cannabis use disorder, about 3.2 million (1.6%) had stimulant use disorder, and about 1 million (0.5%) had opioid use disorder.

Multiple comorbidities

The data also show that one in four adults had at least one mental health disorder in the past year, most commonly MDD and GAD.

About 11% of adults met the criteria for at least one substance use disorder, with AUD and cannabis use disorder the most common.

In addition, an estimated 11 million adults aged 18-65 had both a mental health disorder and a substance use disorder in the past year.

Encouragingly, the findings suggest that more individuals are seeking and accessing treatment compared with previous studies, the authors noted; 61% of adults with a mental health disorder reported having at least one visit with a treatment provider in the past year.

However, considerable treatment gaps still exist for the most common mental health disorders, they reported. Within the past year, more than 40% of adults with MDD and more than 30% of those with GAD did not receive any treatment services.

The full report is available online.

A version of this article originally appeared on Medscape.com.

Smartwatches able to detect very early signs of Parkinson’s

Changes in movement detected passively by smartwatches can help flag Parkinson’s disease (PD) years before symptom onset, new research shows.

An analysis of wearable motion-tracking data from UK Biobank participants showed a strong correlation between reduced daytime movement over 1 week and a clinical diagnosis of PD up to 7 years later.

“Smartwatch data is easily accessible and low cost. By using this type of data, we would potentially be able to identify individuals in the very early stages of Parkinson’s disease within the general population,” lead researcher Cynthia Sandor, PhD, from Cardiff (Wales) University, said in a statement.

“We have shown here that a single week of data captured can predict events up to 7 years in the future. With these results we could develop a valuable screening tool to aid in the early detection of Parkinson’s,” she added.

“This has implications both for research, in improving recruitment into clinical trials, and in clinical practice, in allowing patients to access treatments at an earlier stage, in future when such treatments become available,” said Dr. Sandor.

The study was published online in Nature Medicine.

Novel biomarker for PD

Using machine learning, the researchers analyzed accelerometry data from 103,712 UK Biobank participants who wore a medical-grade smartwatch for a 7-day period during 2013-2016.

At the time of or within 2 years after accelerometry data collection, 273 participants were diagnosed with PD. An additional 196 individuals received a new PD diagnosis more than 2 years after accelerometry data collection (the prodromal group).

The patients with prodromal symptoms of PD and those who were diagnosed with PD showed a significantly reduced daytime acceleration profile up to 7 years before diagnosis, compared with age- and sex-matched healthy control persons, the researchers found.

The reduction in acceleration both before and following diagnosis was unique to patients with PD, “suggesting this measure to be disease specific with potential for use in early identification of individuals likely to be diagnosed with PD,” they wrote.

Accelerometry data proved more accurate than other risk factors (lifestyle, genetics, blood chemistry) or recognized prodromal symptoms of PD in predicting whether an individual would develop PD.

“Our results suggest that accelerometry collected with wearable devices in the general population could be used to identify those at elevated risk for PD on an unprecedented scale and, importantly, individuals who will likely convert within the next few years can be included in studies for neuroprotective treatments,” the researchers conclude in their article.

High-quality research

In a statement from the U.K.-based nonprofit Science Media Centre, José López Barneo, MD, PhD, with the University of Seville (Spain), said this “good quality” study “fits well with current knowledge.”

Dr. Barneo noted that other investigators have also observed that slowness of movement is a characteristic feature of some people who subsequently develop PD.

But those studies involved preselected cohorts of persons at risk of developing PD, or they were carried out in hospital settings in which healthcare staff had to conduct the movement analysis. In contrast, the current study was conducted in a very large cohort from the general U.K. population.

Also weighing in, José Luis Lanciego, MD, PhD, with the University of Navarra (Spain), said the “main value of this study is that it has demonstrated that accelerometry measurements obtained using wearable devices (such as a smartwatch or other similar devices) are more useful than the assessment of any other potentially prodromal symptom in identifying which people in the [general] population are at increased risk of developing Parkinson’s disease in the future, as well as being able to estimate how many years it will take to start suffering from this neurodegenerative process.

“In these diseases, early diagnosis is to some extent questionable, as early diagnosis is of little use if neuroprotective treatment is not available,” Dr. Lanciego noted.

“However, it is of great importance for use in clinical trials aimed at evaluating the efficacy of new potentially neuroprotective treatments whose main objective is to slow down – and, ideally, even halt ― the clinical progression that typically characterizes Parkinson’s disease,” Dr. Lanciego added.

The study was funded by the UK Dementia Research Institute, the Welsh government, and Cardiff University. Dr. Sandor, Dr. Barneo, and Dr. Lanciego have no relevant disclosures.

A version of this article originally appeared on Medscape.com.

Coffee’s brain-boosting effect goes beyond caffeine

Coffee’s ability to boost alertness is commonly attributed to caffeine, but new research suggests there may be other underlying mechanisms that explain this effect.

“There is a widespread anticipation that coffee boosts alertness and psychomotor performance. By gaining a deeper understanding of the mechanisms underlying this biological phenomenon, we pave the way for investigating the factors that can influence it and even exploring the potential advantages of those mechanisms,” study investigator Nuno Sousa, MD, PhD, with the University of Minho, Braga, Portugal, said in a statement.

The study was published online in Frontiers in Behavioral Neuroscience.
 

Caffeine can’t take all the credit

Certain compounds in coffee, including caffeine and chlorogenic acids, have well-documented psychoactive effects, but the psychological impact of coffee/caffeine consumption as a whole remains a matter of debate.

The researchers investigated the neurobiological impact of coffee drinking on brain connectivity using resting-state functional MRI (fMRI).

They recruited 47 generally healthy adults (mean age, 30 years; 31 women) who regularly drank a minimum of one cup of coffee per day. Participants refrained from eating and from drinking caffeinated beverages for at least 3 hours before undergoing fMRI.

To tease apart the effects of coffee from those of caffeine alone, a separate group of 30 habitual coffee drinkers (mean age, 32 years; 27 women) was given hot water containing the same amount of caffeine rather than coffee.

The investigators conducted two fMRI scans – one before, and one 30 minutes after drinking coffee or caffeine-infused water.

Both drinking coffee and drinking plain caffeine in water led to a decrease in functional connectivity of the brain’s default mode network, which is typically active during self-reflection in resting states.

This finding suggests that consuming either coffee or caffeine heightened individuals’ readiness to transition from a state of rest to engaging in task-related activities, the researchers noted.

However, drinking a cup of coffee also boosted connectivity in the higher visual network and the right executive control network, which are linked to working memory, cognitive control, and goal-directed behavior – something that did not occur from drinking caffeinated water.

“Put simply, individuals exhibited a heightened state of preparedness, being more responsive and attentive to external stimuli after drinking coffee,” said first author Maria Picó-Pérez, PhD, with the University of Minho.

Given that some of the effects of coffee also occurred with caffeine alone, it’s “plausible to assume that other caffeinated beverages may share similar effects,” she added.

Still, certain effects were specific to coffee drinking, “likely influenced by factors such as the distinct aroma and taste of coffee or the psychological expectations associated with consuming this particular beverage,” the researcher wrote.

The investigators report that the observations could provide a scientific foundation for the common belief that coffee increases alertness and cognitive functioning. Further research is needed to differentiate the effects of caffeine from the overall experience of drinking coffee.

A limitation of the study is the absence of a nondrinker control sample (to rule out the withdrawal effect) or an alternative group that consumed decaffeinated coffee (to rule out the placebo effect of coffee intake) – something that should be considered in future studies, the researchers noted.

The study was funded by the Institute for the Scientific Information on Coffee. The authors declared no relevant conflicts of interest.

A version of this article originally appeared on Medscape.com.

Novel tool accurately predicts suicide after self-harm

Investigators have developed and validated a new risk calculator to help predict death by suicide in the 6-12 months after an episode of nonfatal self-harm, new research shows.

A study led by Seena Fazel, MBChB, MD, University of Oxford, England, suggests the Oxford Suicide Assessment Tool for Self-harm (OxSATS) may help guide treatment decisions and target resources to those most in need, the researchers note.

“Many tools use only simple high/low categories, whereas OxSATS includes probability scores, which align more closely with risk calculators in cardiovascular medicine, such as the Framingham Risk Score, and prognostic models in cancer medicine, which provide 5-year survival probabilities. This potentially allows OxSATS to inform clinical decision-making more directly,” Dr. Fazel told this news organization.

The findings were published online in BMJ Mental Health.

Targeted tool

Self-harm is associated with a 1-year risk of suicide that is 20 times higher than that of the general population. Given that about 16 million people self-harm annually, the impact at a population level is potentially quite large, the researchers note.

Current structured approaches to gauge suicide risk among those who have engaged in self-harm are based on tools developed for other purposes and symptom checklists. “Their poor to moderate performance is therefore not unexpected,” Dr. Fazel told this news organization.

In contrast, OxSATS was specifically developed to predict suicide mortality after self-harm.

Dr. Fazel’s group evaluated data on 53,172 Swedish individuals aged 10 years and older who sought emergency medical care after episodes of self-harm.

The development cohort included 37,523 individuals. Of these, 391 died by suicide within 12 months. The validation cohort included 15,649 individuals; of these people, 178 died by suicide within 12 months.

The final OxSATS model includes 11 predictors related to age and sex, as well as variables related to substance misuse, mental health, and treatment and history of self-harm.

“The performance of the model in external validation was good, with c-index at 6 and 12 months of 0.77,” the researchers note.

Using a cutoff threshold of 1%, the OxSATS correctly identified 68% of those who died by suicide within 6 months, while 71% of those who didn’t die were correctly classified as being at low risk. The figures for risk prediction at 12 months were 82% and 54%, respectively.
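
To illustrate how figures like these arise from a probability-based risk score, the short sketch below (in Python) applies a fixed 1% cutoff to a handful of entirely hypothetical predicted probabilities – it is not the OxSATS model or its data – and computes sensitivity and specificity from the resulting counts.

  # Illustrative sketch only: hypothetical probabilities, not the OxSATS model or its data.
  def evaluate_at_cutoff(probs, died_by_suicide, cutoff=0.01):
      """Flag anyone whose predicted 6-month risk is at or above the cutoff,
      then compute sensitivity and specificity from the resulting counts."""
      flagged = [p >= cutoff for p in probs]
      pairs = list(zip(flagged, died_by_suicide))
      tp = sum(f and d for f, d in pairs)          # deaths that were correctly flagged
      fn = sum(not f and d for f, d in pairs)      # deaths that were missed
      tn = sum(not f and not d for f, d in pairs)  # survivors correctly classed as low risk
      fp = sum(f and not d for f, d in pairs)      # survivors flagged as high risk
      return tp / (tp + fn), tn / (tn + fp)        # (sensitivity, specificity)

  # Invented example with four patients (toy numbers for illustration only)
  probs = [0.004, 0.02, 0.012, 0.002]     # model-estimated 6-month suicide risk
  died = [False, True, False, False]      # observed outcome within 6 months
  print(evaluate_at_cutoff(probs, died))  # -> (1.0, 0.666...) for this toy data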

The OxSATS has been made into a simple online tool with probability scores for suicide at 6 and 12 months after an episode of self-harm, but without linkage to interventions. A tool on its own is unlikely to improve outcomes, said Dr. Fazel.

“However,” he added, “it can improve consistency in the assessment process, especially in busy clinical settings where people from different professional backgrounds and experience undertake such assessments. It can also highlight the role of modifiable risk factors and provide an opportunity to transparently discuss risk with patients and their carers.”

Valuable work

Reached for comment, Igor Galynker, MD, PhD, professor of psychiatry at the Icahn School of Medicine at Mount Sinai, New York, said that this is a “very solid study with a very large sample size and solid statistical analysis.”

Another strength of the research is the outcome of suicide death versus suicide attempt or suicidal ideation. “In that respect, it is a valuable paper,” Dr. Galynker, who directs the Mount Sinai Beth Israel Suicide Research Laboratory, told this news organization.

He noted that there are no new risk factors in the model. Rather, the model contains the typical risk factors for suicide, which include male sex, substance misuse, past suicide attempt, and psychiatric diagnosis.

“The strongest risk factor in the model is self-harm by hanging, strangulation, or suffocation, which has been shown before and is therefore unsurprising,” said Dr. Galynker.

In general, the risk factors included in the model are often part of administrative tools for suicide risk assessment, said Dr. Galynker, but the OxSATS “seems easier to use because it has 11 items only.”

Broadly speaking, individuals with mental illness and past suicide attempt, past self-harm, alcohol use, and other risk factors “should be treated proactively with suicide prevention measures,” he told this news organization.

As previously reported, Dr. Galynker and colleagues have developed the Abbreviated Suicide Crisis Syndrome Checklist (A-SCS-C), a novel tool to help identify which suicidal patients who present to the emergency department should be admitted to hospital and which patients can be safely discharged.

Funding for the study was provided by Wellcome Trust and the Swedish Research Council. Dr. Fazel and Dr. Galynker have no relevant disclosures.

A version of this article first appeared on Medscape.com.

GLP-1 agonists offer multiple benefits in type 2 diabetes with liver cirrhosis

Article Type
Changed
Fri, 07/07/2023 - 10:37

 

Topline

Glucagon-like peptide-1 receptor agonist (GLP-1 RA) use is associated with a lower risk for death, cardiovascular disease, decompensated cirrhosis, and liver failure in adults with type 2 diabetes (T2D) and compensated liver cirrhosis, new observational data show.

Methodology

  • Population-based cohort study using data from the National Health Insurance Research Database of Taiwan.
  • Propensity-score matching was used to construct 467 matched pairs of GLP-1 RA users and nonusers (mean age, 57) with T2D and compensated liver cirrhosis.
  • All-cause mortality, cardiovascular events, decompensated cirrhosis, and other key outcomes were compared using multivariable-adjusted Cox proportional hazards models (a minimal analysis sketch follows this list).

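As background on how this kind of analysis is typically assembled, the following is a minimal sketch, assuming a pandas DataFrame with hypothetical column names (glp1_user, followup_years, died) plus baseline covariates, and using scikit-learn for the propensity score and the lifelines package for the Cox model; it is illustrative only and is not the authors' code.

    # Minimal sketch (hypothetical column names): 1:1 propensity-score matching
    # of GLP-1 RA users to nonusers, then a Cox proportional hazards model.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors
    from lifelines import CoxPHFitter

    def match_and_fit(df: pd.DataFrame, covariates: list) -> CoxPHFitter:
        # 1) Propensity score: probability of GLP-1 RA use given baseline covariates.
        ps_model = LogisticRegression(max_iter=1000).fit(df[covariates], df["glp1_user"])
        df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

        users = df[df["glp1_user"] == 1]
        nonusers = df[df["glp1_user"] == 0]

        # 2) 1:1 nearest-neighbor matching on the propensity score (no caliper here,
        #    for brevity; a real analysis would add one and check covariate balance).
        nn = NearestNeighbors(n_neighbors=1).fit(nonusers[["ps"]])
        _, idx = nn.kneighbors(users[["ps"]])
        matched = pd.concat([users, nonusers.iloc[idx.ravel()]]).reset_index(drop=True)

        # 3) Cox model: follow-up time, death indicator, exposure, and covariates.
        cph = CoxPHFitter()
        cph.fit(matched[["followup_years", "died", "glp1_user"] + covariates],
                duration_col="followup_years", event_col="died")
        return cph  # cph.summary contains hazard ratios (exp(coef)) for each term

In a matched cohort like this, results are then read off as hazard ratios, alongside crude event rates per 1,000 person-years such as those in the Takeaway below.
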
Takeaway

  • During mean follow-up of about 3 years, rates of death per 1,000 person-years were 27.5 in GLP-1 RA users versus 55.9 in nonusers.
  • GLP-1 RA users had a significantly lower risk for mortality (adjusted hazard ratio [aHR], 0.47), cardiovascular events (aHR, 0.6), decompensated cirrhosis (aHR, 0.7), hepatic encephalopathy (aHR, 0.59), and liver failure (aHR, 0.54).
  • A longer cumulative duration of GLP-1 RA use was associated with lower risk for these outcomes compared with no use.

In practice

“GLP-1 RAs may be a treatment option for diabetes patients with liver cirrhosis. However, additional studies are needed to confirm our results and to explore the mechanisms of GLP-1 RAs, cirrhotic decompensation and hepatic encephalopathy,” the researchers concluded.

Study details

The study was led by Fu-Shun Yen, Dr Yen’s Clinic, Taoyuan, Taiwan. It was published online June 16, 2023, in Clinical Gastroenterology and Hepatology. Funding was provided in part by the Taiwan Ministry of Health and Welfare Clinical Trial Center, China Medical University Hospital, Taipei Veterans General Hospital, and the Ministry of Science and Technology.

Limitations 

Limitations of the study include a lack of complete information on family history, diet, body weight, and physical activity, as well as biochemical tests, hemoglobin A1c, pathology, and imaging findings that could potentially influence the results.

Disclosures

The authors disclosed no relevant financial relationships.
 

A version of this article originally appeared on Medscape.com.

Cardiorespiratory fitness linked to cancer risk, mortality?

Article Type
Changed
Wed, 07/12/2023 - 10:35

 

TOPLINE:

Higher levels of cardiorespiratory fitness (CRF) may offer protection from colon and lung cancer and from lung and prostate cancer mortality among men, a large Swedish cohort study suggests.

METHODOLOGY:

  • A prospective cohort study included 177,709 Swedish men (mean age, 42; mean body mass index, 26 kg/m²) who completed an occupational health profile assessment and were followed for a mean of 9.6 years.
  • CRF was assessed by determining maximal oxygen consumption during an aerobic fitness test, known as a submaximal Åstrand cycle ergometer test.
  • Participants reported physical activity habits, lifestyle, and perceived health.
  • Data on prostate, colon, and lung cancer incidence and mortality were derived from national registers.
  • Outcomes in the three higher CRF groups (low, > 25-35; moderate, > 35-45; high, > 45 mL/min per kg) were compared with those in the very low CRF group (25 mL/min per kg or less); a short categorization sketch follows this list. Models were adjusted for various factors, including age, BMI, education, dietary habits, comorbidity, and smoking.

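As a small illustration of how these exposure groups are formed, here is a minimal sketch that assigns hypothetical estimated VO2max values (in mL/min per kg) to the categories named above; the function and values are invented for illustration.

    # Minimal sketch (hypothetical values): mapping estimated VO2max from a
    # submaximal test into the CRF categories used in the study.
    def crf_category(vo2max_ml_min_kg: float) -> str:
        if vo2max_ml_min_kg <= 25:
            return "very low"   # reference group (25 mL/min per kg or less)
        elif vo2max_ml_min_kg <= 35:
            return "low"        # > 25-35
        elif vo2max_ml_min_kg <= 45:
            return "moderate"   # > 35-45
        return "high"           # > 45

    for vo2 in [22.0, 28.5, 37.0, 41.2, 48.9]:
        print(f"{vo2:>5.1f} mL/min per kg -> {crf_category(vo2)}")
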
TAKEAWAY:

  • During follow-up, investigators identified 1,918 prostate, 499 colon, and 283 lung cancer cases as well as 141 prostate, 207 lung, and 152 colon cancer deaths.
  • In the fully adjusted model, higher CRF levels were associated with a significantly lower risk for colon cancer (hazard ratio, 0.72 for moderate; HR, 0.63 for high).
  • In this model, higher CRF was also associated with a lower risk of death from prostate cancer (HR, 0.67 for low; HR, 0.57 for moderate; HR, 0.29 for high).
  • For lung cancer mortality, only high CRF was associated with a significantly lower risk of death (HR, 0.41).
  • An association between CRF and lung cancer incidence (HR, 0.99) and death (HR, 0.99) was only evident among adults aged 60 and older.

IN PRACTICE:

“The clinical implications of these findings further emphasize the importance of CRF for possibly reducing cancer incidence and mortality,” the authors concluded. “It is important for the general public to understand that higher-intensity [physical activity] has greater effects on CRF and is likely to be more protective against the risk of developing and dying from certain cancers.”

SOURCE:

The study was led by Elin Ekblom-Bak, PhD, from the Swedish School of Sport and Health Sciences, Stockholm. It was published online in JAMA Network Open.

LIMITATIONS:

The study was limited by voluntary participation, inclusion of only employed individuals, and estimations of CRF via submaximal tests. Data on smoking status were not optimal and there was a small number of cancer cases and deaths.

DISCLOSURES:

Funding was provided by the Swedish Cancer Society. The authors have reported no conflicts of interest.

A version of this article first appeared on Medscape.com.

FDA approves first leadless dual-chamber pacing system

Article Type
Changed
Thu, 07/06/2023 - 10:39

The U.S. Food and Drug Administration has approved “the world’s first” leadless dual-chamber pacing system, one based in part on an already-approved leadless single-chamber device, Abbott has announced.


The company’s AVEIR DR leadless pacing system consists of two percutaneously implanted devices: the single-chamber AVEIR VR leadless pacemaker, implanted in the right ventricle, and the novel AVEIR AR single-chamber pacemaker, implanted in the right atrium.

The AVEIR DR system relies on proprietary wireless technology to provide bidirectional, beat-to-beat communication between its two components to achieve dual-chamber synchronization, the company stated in a press release on the approval.

The system also provides real-time pacing analysis, Abbott said, allowing clinicians to assess proper device placement during the procedure and before implantation. The system is designed to be easily removed if the patient’s pacing needs evolve or its battery needs replacing.

Experienced operators achieved a 98% implantation success rate using the AVEIR DR system in a 300-patient study conducted at 55 sites in Canada, Europe, and the United States. In that study, 63% of the patients had sinus-node dysfunction and 33% had AV block as their primary dual-chamber pacing indication.

The system exceeded its predefined safety and performance goals, providing AV-synchronous pacing in 97% of patients for at least 3 months, it was reported in May at the annual scientific sessions of the Heart Rhythm Society and in a simultaneous publication in The New England Journal of Medicine.

“Modern medicine has been filled with technological achievements that fundamentally changed how doctors approach patient care, and now we can officially add dual-chamber leadless pacing to that list of achievements,” coauthor Vivek Reddy, MD, director of cardiac arrhythmia services for Mount Sinai Hospital and the Mount Sinai Health System, New York, said in the press release.

A version of this article first appeared on Medscape.com.

Cannabis for cancer symptoms: Perceived or real benefit?

Article Type
Changed
Wed, 07/12/2023 - 10:35

 

TOPLINE:

Adults receiving cancer treatment who used cannabis perceived benefits for pain, sleep, nausea, and other symptoms but also reported worse physical and psychological symptoms than those who did not use cannabis.

METHODOLOGY:

  • Participants included 267 adults (mean age, 58 years; 70% women; 88% White) undergoing treatment for cancer, most commonly breast (47%) and ovarian (29%).
  • Participants completed online surveys to characterize cannabis use, reasons for using it, perceived benefits and harms, and physical/psychological symptoms.
  • Participants who had used cannabis for more than 1 day during the previous 30 days were compared with those who had not (a minimal comparison sketch follows this list).

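For orientation, here is a minimal sketch of one common way to compare symptom scores between two groups, using invented distress scores rather than the study's data; the authors' actual statistical methods are not detailed here.

    # Minimal sketch (hypothetical data): comparing a symptom score between
    # respondents who did and did not report recent cannabis use.
    from scipy.stats import mannwhitneyu

    # Invented distress scores (higher = worse); not the study's data.
    users = [6.1, 5.4, 7.2, 4.8, 6.6]
    nonusers = [3.9, 4.4, 5.1, 3.2, 4.7, 4.0]

    stat, p_value = mannwhitneyu(users, nonusers, alternative="two-sided")
    print(f"Mann-Whitney U = {stat:.1f}, p = {p_value:.3f}")
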
TAKEAWAY:

  • Overall, 26% of respondents reported cannabis use in the past 30 days, most often edibles (65%) or smoked cannabis (51%).
  • Compared with nonusers, cannabis users were more likely to be younger, male, or Black; to have lower income and worse physical/psychological symptoms; and to be disabled or unable to work.
  • Cannabis was used to treat pain, cancer, sleep problems, anxiety, nausea, and poor appetite; perceived benefits were greatest with respect to sleep, nausea, pain, muscle spasms, and anxiety.
  • Despite perceived benefits, cannabis users reported worse overall distress, anxiety, sleep disturbances, appetite, nausea, fatigue, and pain.

IN PRACTICE:

“The study findings indicate that patients with cancer perceived benefits to using cannabis for many symptoms” but also revealed that “those who used cannabis in the past 30 days had significantly worse symptom profiles overall than those who did not use cannabis,” the authors wrote.

SOURCE:

The study, led by Desiree R. Azizoddin, PsyD, University of Oklahoma Health Science Center, Oklahoma City, was published online in Cancer.

LIMITATIONS:

It’s not known whether adults who used cannabis had significantly worse symptoms at the outset, which may have prompted cannabis use, or whether cannabis use may have exacerbated their symptoms.

DISCLOSURES:

Funding for the study was provided by grants from the National Cancer Institute and the Oklahoma Tobacco Settlement Endowment Trust. Nine of the 10 authors have disclosed no relevant conflicts of interest. One author has relationships with various pharmaceutical companies involved in oncology.

A version of this article first appeared on Medscape.com.

New AHA statement on ischemia after cardiac surgery

Article Type
Changed
Mon, 07/03/2023 - 10:48

 

In a new scientific statement, the American Heart Association outlines “considerations” for the management of acute postoperative myocardial ischemia (PMI) after cardiac surgery.

Although an infrequent event, acute PMI following cardiac surgery can rapidly evolve and become a potentially life-threatening complication, the writing group, led by Mario Gaudino, MD, PhD, with Weill Cornell Medicine, New York, points out.

The new statement was published online in Circulation.

Data show that the incidence of postoperative myocardial infarction after cardiac surgery ranges from 0.3% to 9.8% after isolated coronary artery bypass graft (CABG) surgery and 0.7% to 11.8% after concomitant valvular surgery. For isolated mitral valve surgery, incidence ranges from 1.7% to 2.2%.

Short-term mortality is elevated among patients with acute PMI, irrespective of the type of surgery. Reported mortality rates range from 5.1% to 24%; the evidence on long-term mortality has been mixed.

Graft-related factors are the most common cause of PMI after CABG, but other factors may contribute, including technical factors, competitive flow, suture entrapment, or coronary artery distortion, as well as non–graft-related factors.


 

Prompt diagnosis and treatment important

Currently, there is no consensus definition of PMI. Elevations in cardiac biomarkers may not be reliable for diagnosis after surgery, and pain management regimens may mask symptoms of ischemia, the writing group notes.

Given the difficulty in diagnosis, it’s important to maintain a “high index of suspicion for acute PMI in all patients undergoing cardiac surgery because timely diagnosis and treatment are key to a good clinical outcome,” they write.

Delay in urgent angiography has been associated with higher mortality; thus, a low threshold for action is encouraged for patients with suspected acute PMI.

Indications for urgent angiography include new ECG changes, chest pain with ongoing signs of ischemia, cardiac imaging abnormalities, cardiac rhythm abnormalities, significant elevations in cardiac biomarkers, and low cardiac output syndrome despite postoperative pressor support.

Patients with acute PMI and low cardiac output syndrome may require mechanical support when first-line treatment fails.

The writing group says fast and effective reperfusion of the ischemic zone, which is generally achieved by percutaneous intervention and, less often, by repeat surgery, is the key to a good clinical outcome.

The statement was prepared by the volunteer writing group on behalf of the AHA Council on Cardiovascular Surgery and Anesthesia; Council on Clinical Cardiology; Council on Cardiovascular and Stroke Nursing; and Stroke Council.

The research had no commercial funding. Disclosures for the writing group are listed with the original article.

A version of this article originally appeared on Medscape.com.
