Maternal BMI and Eating Disorders Tied to Mental Health in Kids

TOPLINE:

Children of mothers who had obesity or eating disorders before or during pregnancy may face higher risks for neurodevelopmental and psychiatric disorders.

METHODOLOGY:

  • Researchers conducted a population-based cohort study to investigate the association of maternal eating disorders and high prepregnancy body mass index (BMI) with psychiatric and neurodevelopmental diagnoses in offspring.
  • They used Finnish national registers to assess all live births from 2004 through 2014, with follow-up until 2021.
  • Data from 392,098 mothers (mean age, 30.15 years) and 649,956 offspring (48.86% girls) were included.
  • Maternal eating disorders and prepregnancy BMI were the main exposures; 1.60% of mothers had a history of eating disorders, 5.89% were underweight, and 53.13% had obesity.
  • Children's diagnoses were identified and grouped by ICD-10 codes into mental, behavioral, and neurodevelopmental disorders, mood disorders, anxiety disorders, sleep disorders, attention-deficit/hyperactivity disorder, and conduct disorders, among several others (a grouping sketch follows this list).
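
For readers curious how register diagnoses get bucketed in studies like this, the sketch below shows one way to map ICD-10 F-codes to diagnostic groups. The chapter ranges are standard ICD-10, but the study's exact grouping is not given in this summary, so the mapping is illustrative only.

```python
# Illustrative mapping of ICD-10 F-code stems to diagnostic groups.
# Ranges follow the standard ICD-10 layout; the study's actual
# grouping may differ.
ICD10_GROUPS = {
    ("F30", "F39"): "mood disorders",
    ("F40", "F48"): "anxiety and stress-related disorders",
    ("F51", "F51"): "sleep disorders (nonorganic)",
    ("F70", "F79"): "intellectual disabilities",
    ("F90", "F90"): "attention-deficit/hyperactivity disorder",
    ("F91", "F91"): "conduct disorders",
    ("F95", "F95"): "tic disorders",
}

def classify(icd_code: str) -> str:
    """Return the diagnostic group for an ICD-10 code, e.g. 'F90.0'."""
    stem = icd_code[:3].upper()
    for (lo, hi), group in ICD10_GROUPS.items():
        if lo <= stem <= hi:
            return group
    return "other mental, behavioral, or neurodevelopmental disorders"

print(classify("F90.0"))  # attention-deficit/hyperactivity disorder
```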

TAKEAWAY:

  • By the end of follow-up, when offspring were aged 7-17 years, 16.43% had been diagnosed with a neurodevelopmental or psychiatric disorder.
  • Maternal eating disorders were associated with psychiatric disorders in the offspring, with the largest effect sizes observed for sleep disorders (hazard ratio [HR], 2.36) and social functioning and tic disorders (HR, 2.18; P < .001 for both).
  • The offspring of mothers with severe prepregnancy obesity had a more than twofold increased risk for intellectual disabilities (HR, 2.04; 95% CI, 1.83-2.28; see the sketch after this list); being underweight before pregnancy was also linked to many psychiatric disorders in offspring.
  • The occurrence of adverse birth outcomes along with maternal eating disorders or high BMI further increased the risk for neurodevelopmental and psychiatric disorders in the offspring.
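
As a numeric aside, here is a minimal sketch of how a 95% CI for a hazard ratio relates to the log-scale estimate, using the intellectual disability figures above; the 1.96 factor assumes a standard Wald-type interval, which is our assumption rather than something stated in the summary.

```python
import math

# Reported association: severe prepregnancy obesity and offspring
# intellectual disabilities (HR, 2.04; 95% CI, 1.83-2.28).
hr, ci_low, ci_high = 2.04, 1.83, 2.28

# A Wald-type 95% CI is formed on the log scale as exp(log(HR) ± 1.96*SE),
# so the SE of log(HR) can be backed out of the interval width.
se = (math.log(ci_high) - math.log(ci_low)) / (2 * 1.96)

# Round trip: rebuild the interval from the point estimate and SE.
lo = math.exp(math.log(hr) - 1.96 * se)
hi = math.exp(math.log(hr) + 1.96 * se)
print(f"SE(log HR) ~ {se:.3f}; reconstructed CI: {lo:.2f}-{hi:.2f}")
```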

IN PRACTICE:

“The findings underline the risk of offspring mental illness associated with maternal eating disorders and prepregnancy BMI and suggest the need to consider these exposures clinically to help prevent offspring mental illness,” the authors wrote.

SOURCE:

This study was led by Ida A.K. Nilsson, PhD, of the Department of Molecular Medicine and Surgery at the Karolinska Institutet in Stockholm, Sweden, and was published online in JAMA Network Open.

LIMITATIONS:

A limitation of the study was the relatively short follow-up time, which restricted the inclusion of late-onset psychiatric disorder diagnoses, such as schizophrenia spectrum disorders. Paternal data and genetic information, which may have influenced the interpretation of the data, were not available. Another potential bias was that mothers with eating disorders may have been more attentive to their child’s eating behavior, leading to greater access to care and diagnosis for these children.

DISCLOSURES:

This work was supported by the Swedish Research Council, the regional agreement on medical training and clinical research between Region Stockholm and the Karolinska Institutet, the Swedish Brain Foundation, and other sources. The authors declared no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Can Fish Skin Grafts Heal Diabetic Foot Ulcers?

TOPLINE:

Intact fish skin grafts, sourced from Atlantic cod, show superior and faster healing compared with standard wound care in patients with deep and penetrating diabetic foot ulcers.

METHODOLOGY:

  • Standard wound care for diabetic foot ulcers involves vascular assessment, surgical debridement, use of appropriate dressings, infection management, and glycemic control; however, standard care is typically associated with poor outcomes.
  • Researchers conducted a multicenter clinical trial in 15 tertiary care centers with diabetic foot units across France, Italy, Germany, and Sweden to evaluate the efficacy and safety of intact fish skin grafts over standard-of-care practices in treating complex diabetic foot ulcers.
  • A total of 255 patients aged 18 years or older with diabetes and lower limb wounds penetrating to the tendon, capsule, bone, or joint were randomly assigned to receive either an intact fish skin graft or standard wound care for 14 weeks.
  • The primary endpoint was the percentage of wounds achieving complete closure by 16 weeks.
  • Wound healing was also assessed at 20 and 24 weeks.

TAKEAWAY:

  • The proportion of wounds healed at 16 weeks was higher with intact fish skin grafts than with standard of care (44.0% vs 26.4%; adjusted odds ratio [aOR], 2.58; 95% CI, 1.48-4.56; see the arithmetic sketch after this list).
  • The fish skin grafts continued to be more effective than standard wound care practices at weeks 20 (aOR, 2.15; 95% CI, 1.27-3.70) and 24 (aOR, 2.19; 95% CI, 1.31-3.70).
  • The mean time to healing was 17.31 weeks for the intact fish skin graft group and 19.37 weeks for the standard-of-care group; intact fish skin grafts were also associated with faster healing times than standard wound care (hazard ratio, 1.59; 95% CI, 1.07-2.36).
  • Target wound infections were the most common adverse events, occurring in a similar number of patients in both groups.
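
For orientation, here is a minimal arithmetic sketch of the unadjusted comparison behind the first bullet; the trial reports an adjusted OR, so the crude value computed from the raw proportions alone differs from 2.58.

```python
# Crude (unadjusted) odds ratio from the reported healing proportions.
p_graft, p_soc = 0.440, 0.264  # healed at 16 weeks

odds_graft = p_graft / (1 - p_graft)
odds_soc = p_soc / (1 - p_soc)
crude_or = odds_graft / odds_soc

print(f"Crude OR ~ {crude_or:.2f}")  # ~2.19 vs the adjusted OR of 2.58
```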

IN PRACTICE:

“Our trial demonstrated treatment of complex diabetic foot ulcers with intact fish skin grafts achieved a significantly greater proportion of diabetic foot ulcers healed at 16 weeks than standard of care, and was associated with increased healing at 20 and 24 weeks. That these results were achieved in non-superficial UT [University of Texas diabetic wound classification system] grade 2 and 3 diabetic foot ulcers and included ischemic and/or infected diabetic foot ulcers is of importance,” the authors wrote.

SOURCE:

The study was led by Dured Dardari, MD, PhD, Centre Hospitalier Sud Francilien, Corbeil-Essonnes, France, and was published online in NEJM Evidence.

LIMITATIONS:

No limitations were discussed for this study.

DISCLOSURES:

The study was funded by European Commission Fast Track to Innovation Horizon 2020 and Kerecis. Two authors reported being employees with or without stock options at Kerecis, and other authors reported having ties with many sources including Kerecis.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Few Differences Seen in RA Pain Outcomes for JAK Inhibitors, Biologics

TOPLINE:

Janus kinase (JAK) inhibitors had a marginally superior effect on pain relief compared with tumor necrosis factor (TNF) inhibitors in patients with rheumatoid arthritis (RA), particularly when used as monotherapy and in patients previously treated with at least two biologic disease-modifying antirheumatic drugs (DMARDs); their pain reduction effects were similar to those of non–TNF inhibitor biologic DMARDs.

METHODOLOGY:

  • Researchers aimed to compare the effects of JAK inhibitors and each class of biologic DMARDs (TNF inhibitors, rituximab, abatacept, and interleukin [IL]-6 inhibitors) on pain in patients with RA in clinical practice.
  • They included 8430 patients with RA who were initiated on either a JAK inhibitor (n = 1827), TNF inhibitor (n = 6422), IL-6 inhibitor (n = 887), abatacept (n = 1102), or rituximab (n = 1149) in 2017-2019.
  • Differences in the change in pain, assessed using a visual analog scale (VAS; 0-100 mm), from baseline to 3 months were compared between the treatment arms.
  • The proportion of patients who continued their initial treatment with low pain levels (VAS pain, < 20 mm) at 12 months was also evaluated.
  • The comparisons of treatment responses between JAK inhibitors and biologic DMARDs were analyzed using multivariate linear regression, adjusted for patient characteristics, comorbidities, current co-medication, and previous treatment (a minimal model sketch follows this list).
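
The sketch below illustrates the kind of covariate-adjusted linear model described above, using pandas and statsmodels with hypothetical column names and toy data; the study's actual model and covariate set are more extensive.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy data with hypothetical columns: 3-month change in VAS pain (mm),
# treatment class, and two of the many adjustment covariates.
df = pd.DataFrame({
    "pain_change":   [-22.0, -15.5, -18.0, -25.0, -10.0, -20.5, -14.0, -19.0],
    "treatment":     ["JAK", "TNF", "TNF", "JAK", "TNF", "JAK", "TNF", "JAK"],
    "age":           [55, 62, 48, 70, 59, 66, 51, 63],
    "prior_bdmards": [2, 0, 1, 3, 0, 2, 1, 2],
})

# Linear regression of pain change on treatment, adjusted for covariates;
# TNF inhibitors serve as the reference category.
model = smf.ols(
    "pain_change ~ C(treatment, Treatment('TNF')) + age + prior_bdmards",
    data=df,
).fit()
print(model.params)
```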

TAKEAWAY:

  • Pain scores improved from baseline to 3 months in all the treatment arms, with mean changes ranging from −20.1 mm (95% CI, −23.1 to −17.2) for IL-6 inhibitors to −16.6 mm (95% CI, −19.1 to −14.0) for rituximab.
  • At 3 months, JAK inhibitors reduced pain scores by 4.0 mm (95% CI, 1.7-6.3) more than TNF inhibitors and by 3.9 mm (95% CI, 0.9-6.9) more than rituximab; however, the change in pain was not significantly different on comparing JAK inhibitors with abatacept or IL-6 inhibitors.
  • The superior pain-reducing effects of JAK inhibitors over those of TNF inhibitors were more prominent in those who were previously treated with at least two biologic DMARDs and when the treatments were used as monotherapy.
  • At 12 months, 19.5% of the patients receiving JAK inhibitors had continued their initial treatment and achieved low pain levels, with the corresponding proportions ranging from 17% to 26% for the biologic DMARD classes; JAK inhibitors numerically outperformed TNF inhibitors on this outcome, but the difference was not statistically significant.

IN PRACTICE:

“JAK inhibitors yield slightly better pain outcomes than TNF inhibitors. The magnitude of these effects is unlikely to be clinically meaningful in unselected groups of patients with RA,” experts from Feinberg School of Medicine, Northwestern University, Chicago, wrote in an accompanying editorial. “Specific subgroups, such as those who have tried at least two DMARDs, may experience greater effects,” they added.

SOURCE:

The study was led by Anna Eberhard, MD, Department of Clinical Sciences, Lund University, Malmö, Sweden. It was published online on September 22, 2024, in Arthritis & Rheumatology.

LIMITATIONS:

The study had a substantial amount of missing data, particularly for follow-up evaluations, which may have introduced bias. Most patients were treated with baricitinib, potentially limiting generalizability to other JAK inhibitors. Residual confounding could not be excluded despite adjustment for multiple relevant patient characteristics.

DISCLOSURES:

This study was supported by grants from The Swedish Research Council, The Swedish Rheumatism Association, and Lund University. Some authors declared receiving consulting fees, payments or honoraria, or grants or having other ties with pharmaceutical companies and other sources.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Maternal COVID-19 May Not Harm Baby’s Neural Development

TOPLINE:

Children exposed to SARS-CoV-2 in utero do not appear to be at increased risk for neurodevelopmental problems in early childhood.

METHODOLOGY:

  • This prospective study aimed to assess whether in utero exposure to SARS-CoV-2, which causes COVID-19, is associated with abnormal neurodevelopment among children at ages 12, 18, and 24 months.
  • It included 2003 pregnant individuals (mean age, 33.3 years) from the ASPIRE cohort who were enrolled before 10 weeks’ gestation and followed through 24 months post partum; 10.8% of them were exposed to SARS-CoV-2 during pregnancy, as determined via self-reported data or dried blood spot cards.
  • The birth mothers were required to complete the Ages & Stages Questionnaires, Third Edition (ASQ-3), a validated screening tool for neurodevelopmental delays, at 12, 18, and 24 months post partum.
  • Neurodevelopmental outcomes were available for 1757, 1522, and 1523 children at ages 12, 18, and 24 months, respectively.
  • The primary outcome was a score below the cutoff on the ASQ-3 in any of the following developmental domains: communication, gross motor, fine motor, problem-solving, and social skills.

TAKEAWAY:

  • The prevalence of abnormal ASQ-3 scores did not differ between children who were exposed to SARS-CoV-2 in utero and those who were not, at ages 12 (P = .39), 18 (P = .58), and 24 (P = .45) months (see the sketch after this list).
  • No association was observed between in utero exposure to SARS-CoV-2 and abnormal ASQ-3 scores among children in any of the age groups.
  • The lack of an association between exposure to SARS-CoV-2 during pregnancy and abnormal neurodevelopment remained unchanged even when factors such as preterm delivery and the sex of the infant were considered.
  • Supplemental analyses found no difference in risk based on the trimester of infection, presence of fever, or incidence of breakthrough infection following vaccination.
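
To make the comparison concrete, here is a minimal sketch of the kind of test behind the reported P values, run on a 2x2 table of hypothetical counts; the study's actual counts and methods are in the paper.

```python
from scipy.stats import chi2_contingency

# Rows: exposed / unexposed in utero; columns: abnormal / normal ASQ-3.
# Counts are illustrative, not the study's data.
table = [
    [30, 160],
    [240, 1327],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, P = {p:.2f}")
```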

IN PRACTICE:

“In this prospective cohort study of pregnant individuals and offspring, in utero exposure to maternal SARS-CoV-2 infection was not associated with abnormal neurodevelopmental screening scores of children through age 24 months. These findings are critical considering the novelty of the SARS-CoV-2 virus to the human species, the global scale of the initial COVID-19 outbreak, the now-endemic nature of the virus indicating ongoing relevance for pregnant individuals,” the authors of the study wrote. 

“While the scientific consensus resists a link between in utero COVID-19 exposure and impaired offspring neurodevelopment, the question remains whether societal responses to the pandemic impacted developmental trajectories,” the researchers added. “Certain studies comparing infants from a pandemic cohort with historic controls have raised concerns about lower ASQ-3 scores among children living during the pandemic. Critically, socioeconomic factors influence vulnerability, not only to infection itself but also regarding the ability to deploy resources in times of stress (eg, school closures) to mitigate sources of developmental harm. Our data support this theory, with the observed independent protective association of increasing household income with childhood ASQ-3 scores. Additional research is warranted to clarify the potential impact of societal measures on early development and the differential impact of these measures on different communities.”

SOURCE:

The study was led by Eleni G. Jaswa, MD, MSc, of the Department of Obstetrics, Gynecology & Reproductive Sciences at the University of California, San Francisco. It was published online in JAMA Network Open.

LIMITATIONS: 

Limitations of the research included the use of self-reported data and dried blood spot cards to determine exposure to SARS-CoV-2, which may have led to misclassification. The ASQ-3 is only modestly sensitive for detecting developmental delays, which may have limited the study’s power to detect associations. The sample size, while larger than in many comparable studies, may still have been underpowered to detect small differences in neurodevelopmental outcomes.

DISCLOSURES:

The ASPIRE cohort was supported by research grants provided to the University of California, San Francisco, and by the Start Small Foundation, the California Breast Cancer Research Program, the COVID Catalyst Award, and other sources. Some authors reported receiving grants, royalties, and personal fees, serving on medical advisory boards, and having other ties with several institutions.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Automated Insulin Delivery Systems Reduce Burden in Diabetes

TOPLINE:

Automated insulin delivery (AID) systems reduce diabetes distress and fear of hypoglycemia, improve quality of life, and increase awareness about hypoglycemia in adults, children, and adolescents with diabetes.

METHODOLOGY:

  • Despite the known benefits of AID systems for glycemic control, conclusive evidence on the impact of these devices on person-reported outcomes (PROs) has been limited.
  • A systematic review and meta-analysis of 62 studies that reported the findings of 45 different quantitative questionnaires analyzed the effects of AID systems on various PROs in patients with diabetes.
  • Studies were included if they reported the results of at least one PRO assessed via a validated questionnaire; no restrictions on populations were applied, such that studies could include individuals of all ages with type 1 diabetes or adults with type 2 diabetes.
  • Intervention groups in the original studies involved an AID system comprising an insulin pump, a continuous glucose monitoring (CGM) system, and an algorithm controlling insulin delivery on the basis of CGM data. The control group, if included, involved non-AID systems such as multiple daily injections of insulin, standalone insulin pump therapy, or others.
  • The main outcomes studied were diabetes distress, fear of hypoglycemia, and quality of life.

TAKEAWAY:

  • Meta-analysis of 13 randomized controlled trials (RCTs) found a significant reduction in diabetes distress with the use of AID systems vs non-AID systems (standardized mean difference [SMD], −0.159; P = .0322; see the SMD sketch after this list).
  • Fear of hypoglycemia, as assessed by the Hypoglycemia Fear Survey-II in up to 16 RCTs, was significantly reduced in participants using AID systems (SMD, −0.339; P = .0005); AID systems also improved awareness about hypoglycemia, as determined from analysis of four RCTs (SMD, −0.231; P = .0193).
  • Quality of life and pediatric quality of life scores at follow-up, as assessed in three and five RCTs, respectively, were higher for patients using AID systems than for those in the control group.
  • The promising effects of AID systems on alleviating disease burden and improving quality of life outcomes were also evident from the observational studies included in this meta-analysis.
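
For context on the pooled effect sizes, the sketch below computes a standardized mean difference (Cohen's d with a pooled standard deviation) from two-group summaries; the input numbers are hypothetical, not taken from the meta-analysis.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean1 - mean2) / pooled_sd

# Illustrative diabetes distress scores: AID group vs non-AID control.
smd = cohens_d(2.1, 0.9, 120, 2.3, 1.0, 115)
print(f"SMD ~ {smd:.3f}")  # negative values favor the AID group here
```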

IN PRACTICE:

“These findings can be used by health technology assessment bodies and policy makers to inform reimbursement decisions for AID therapy and can also help to widen access to this diabetes technology,” the authors wrote.

SOURCE:

The study was led by Timm Roos, Research Institute of the Diabetes Academy Mergentheim, Bad Mergentheim, Germany. It was published online in eClinicalMedicine.

LIMITATIONS:

A large number of different questionnaires were used to assess PROs, leading to complexity in the analysis. The limited number of studies that could be pooled for some PROs suggests the need for more research with a uniform assessment of PROs. Finally, the inclusion of different generations of AID systems may have introduced bias in the observed effects on PROs.

DISCLOSURES:

This study did not receive any funding. Some authors reported receiving honoraria, consulting fees, travel support, and advisory board member fees as well as other ties with many pharmaceutical companies.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Too Few Immunocompromised Veterans Are Getting Zoster Vaccinations

Article Type
Changed
Tue, 10/15/2024 - 02:29

 

TOPLINE:

Fewer than half of US veterans on chronic immunosuppressive medications, and a much lower percentage of those younger than 50 years, received at least one dose of the recombinant zoster vaccine (RZV) by mid-2023; this low rate of herpes zoster vaccination in an immunocompromised group, especially among younger individuals, is concerning.

METHODOLOGY:

  • In 2021, the Food and Drug Administration authorized the use of RZV for adults aged 18 years or older on chronic immunosuppressive medications because of their high risk for herpes zoster and its related complications, followed by updated guidance from the Centers for Disease Control and Prevention and American College of Rheumatology in 2021 and 2022, respectively.
  • This study aimed to assess the receipt of RZV among veterans receiving immunosuppressive medications within the Veterans Health Administration (VHA) healthcare system before and after the expanded indications in February 2022.
  • It included 190,162 veterans who were prescribed one or more immunosuppressive medications for at least 90 days at 130 medical facilities between January 1, 2018, and June 30, 2023.
  • A total of 23,295 veterans (12.3%) were younger than 50 years by the end of the study period.
  • The outcome measured was the percentage of veterans with one or more doses of RZV documented during the study period.

TAKEAWAY:

  • Among veterans aged 50 years or older, 36.2% had received an RZV before the expanded indication, and 49.8% had received one by mid-2023. Although this vaccination rate was higher than that observed in the 2021 National Health Interview Survey, significant room for improvement remains.
  • Among veterans younger than 50 years, very few (2.8%) received an RZV before the expanded indication, and only 13.4% received it by mid-2023.
  • Demographic factors associated with lower odds of vaccination included male sex, African American or unknown race, and nonurban residence (P ≤ .004 for all); a sketch of how such odds are typically estimated follows this list.
  • Veterans who received targeted synthetic disease-modifying antirheumatic drugs (DMARDs), alone or in combination with other drugs, and those who received other vaccines were more likely to receive an RZV than those on conventional synthetic DMARD monotherapy (P < .001 for both).
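
"Lower odds" findings of this kind come from a multivariable logistic regression of vaccination status on patient characteristics. Below is a minimal sketch on simulated data; the variable names, coefficients, and data are hypothetical stand-ins, not the study's VHA cohort or its actual model.

# Minimal sketch: logistic regression of vaccination status on demographics.
# All variable names, coefficients, and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),    # 1 = male
    "urban": rng.integers(0, 2, n),   # 1 = urban residence
})
# Simulate: males less likely, urban residents more likely to be vaccinated
logit_p = -0.5 - 0.3 * df["male"] + 0.4 * df["urban"]
df["vaccinated"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("vaccinated ~ male + urban", data=df).fit(disp=0)
print(np.exp(model.params))  # odds ratios; values < 1 indicate lower odds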

IN PRACTICE:

“Future work to improve RZV vaccination in patients at high risk should focus on creating informatics tools to identify individuals at high risk and standardizing vaccination guidelines across subspecialties,” the authors wrote.

SOURCE:

This study was led by Sharon Abada, MD, University of California, San Francisco. It was published online on October 11, 2024, in JAMA Network Open.

LIMITATIONS:

This study may not be generalizable to nonveteran populations or countries outside the United States. Limitations also included difficulty with capturing vaccinations not administered within the VHA system, which may have resulted in an underestimation of the percentage of patients vaccinated.

DISCLOSURES:

This work was funded by grants from the VA Quality Enhancement Research Initiative and the Agency for Healthcare Research and Quality. Some authors reported receiving grants from institutions and pharmaceutical companies.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


CGM With Geriatric Care Simplifies T1D Management in Seniors

Article Type
Changed
Thu, 10/10/2024 - 13:52

 

TOPLINE:

Continuous glucose monitoring (CGM) combined with geriatric principles of simplified treatment regimens and personalized glycemic goals reduces hypoglycemia duration in older adults with type 1 diabetes (T1D) without worsening glycemic control.

METHODOLOGY:

  • Researchers evaluated the effectiveness of CGM use enhanced by geriatric principles in adults aged ≥ 65 years with T1D and at least two episodes of hypoglycemia (blood glucose level, < 70 mg/dL for ≥ 20 minutes over 2 weeks), who were either CGM-naive or CGM users prior to the study.
  • Participants were randomly assigned to an intervention group using CGM with geriatric principles (ie, adjusting goals based on overall health and simplifying regimens based on CGM patterns and clinical characteristics) or a control group receiving usual care by their endocrinologist.
  • The primary outcome was the change in duration of hypoglycemia from baseline to 6 months.
  • A cost-effectiveness analysis was also performed for the intervention using a healthcare sector perspective, considering the cost of CGM devices and the cost of medical staff time.

TAKEAWAY:

  • Researchers included 131 participants (mean age, 71 years), of whom 68 were in the intervention group (35 CGM-naive) and 63 in the control group (23 CGM-naive).
  • The intervention group showed a median reduction of 2.6% in the duration of hypoglycemia vs a 0.3% reduction in the control group (median difference, −2.3%; P < .001).
  • This reduction was observed in both CGM users (median difference, −1.2%) and CGM-naive participants (median difference, −2.8%) in the intervention group.
  • No significant difference in A1c levels was observed between the intervention and control groups, indicating that CGM enhanced with geriatric principles did not worsen glycemic control.
  • The intervention was associated with an incremental cost-effectiveness ratio (ICER) of $71,623 per quality-adjusted life-year; it was also cost-effective among CGM-naive participants, although less so owing to the high cost of the CGM device (a sketch of the ICER arithmetic follows this list).
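
For context, the ICER is simply the difference in costs divided by the difference in quality-adjusted life-years (QALYs) between strategies. The component costs and QALYs below are hypothetical stand-ins, not the study's inputs, so only the structure of the calculation is meaningful.

# Minimal sketch of the ICER calculation; the costs and QALYs below are
# hypothetical, not the study's actual inputs.
def icer(cost_int, cost_ctrl, qaly_int, qaly_ctrl):
    # Incremental cost divided by incremental effectiveness
    return (cost_int - cost_ctrl) / (qaly_int - qaly_ctrl)

# Hypothetical per-person values over a 6-month horizon
print(f"${icer(4800.0, 3000.0, 0.425, 0.400):,.0f} per QALY gained")  # $72,000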

IN PRACTICE:

“Personalization of goals and simplification of complex regimens can be combined with CGM use to improve management of type 1 diabetes in older adults,” the study authors wrote.

SOURCE:

The study was led by Medha N. Munshi, MD, Joslin Diabetes Center, Boston. It was published online in Diabetes Care.

LIMITATIONS:

The study included a relatively small sample size and an ethnically homogeneous and highly educated cohort, which may have limited the generalizability of its findings. Additionally, the study did not measure adherence to individual simplification strategies, which may have hindered the quantification of behavioral changes.

DISCLOSURES:

This study was supported by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health. Two authors declared serving as consultants for pharmaceutical companies.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Rheumatologic Disease–Associated Hyperinflammatory Condition Successfully Treated with Emapalumab

Article Type
Changed
Fri, 09/27/2024 - 16:12

 

TOPLINE:

Emapalumab (Gamifant)-containing regimens stabilize key laboratory parameters and show a high 12-month survival probability in patients with rheumatologic disease–associated hemophagocytic lymphohistiocytosis (HLH).

METHODOLOGY:

  • Researchers conducted a retrospective medical chart review study across 33 US hospitals to assess the real-world treatment patterns and outcomes in patients with HLH treated with emapalumab.
  • They included 15 patients with rheumatologic disease–associated HLH (median age at diagnosis, 5 years; 73.3% women) who received at least one dose of emapalumab between November 20, 2018, and October 31, 2021.
  • Most patients with rheumatologic disease–associated HLH had either systemic juvenile idiopathic arthritis (n = 9) or adult-onset Still’s disease (n = 1).
  • Patients received emapalumab for refractory, recurrent, or progressive disease, with an overall treatment duration of 63 days.
  • The primary objective of this study was to describe emapalumab treatment patterns such as time to initiation, treatment duration, dosing patterns, and reasons for initiation.

TAKEAWAY:

  • Most patients (60%) with rheumatologic disease–associated HLH were critically ill and were initiated on emapalumab in an intensive care unit; emapalumab was mostly initiated for treating refractory (33.3%) and recurrent (33.3%) disease.
  • All patients concurrently received emapalumab with other HLH-related therapies, with glucocorticoids (100%) and anakinra (60%) used most frequently.
  • Emapalumab treatment led to normalization of fibrinogen levels (> 360 mg/dL per the study's laboratory criteria) in all patients with rheumatologic disease–associated HLH and to an 80.6% reduction in the required glucocorticoid dose.
  • The 12-month survival probability from the initiation of emapalumab was 86.7% in all patients with rheumatologic disease–associated HLH and 90.0% in the subset with systemic juvenile idiopathic arthritis or adult-onset Still’s disease (a sketch of the underlying Kaplan-Meier arithmetic follows this list).
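
A 12-month survival probability like this is a Kaplan-Meier estimate. The sketch below hand-rolls the estimator on hypothetical follow-up data, chosen so that two deaths among 15 patients reproduce a value near the reported 86.7%; the study's actual event times are not reproduced here.

# Minimal hand-rolled Kaplan-Meier sketch on hypothetical event times.
def km_survival(times, events, t):
    # times: follow-up in months; events: 1 = died, 0 = censored
    s = 1.0
    for u in sorted({tt for tt, e in zip(times, events) if e == 1 and tt <= t}):
        at_risk = sum(1 for tt in times if tt >= u)
        deaths = sum(1 for tt, e in zip(times, events) if tt == u and e == 1)
        s *= 1 - deaths / at_risk
    return s

times = [2, 7] + [12] * 13   # months of follow-up for 15 hypothetical patients
events = [1, 1] + [0] * 13   # deaths at months 2 and 7; the rest censored
print(f"S(12) = {km_survival(times, events, 12):.3f}")  # 0.867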

IN PRACTICE:

“In this study, emapalumab-containing regimens normalized rheumatologic disease–associated laboratory parameters, substantially reduced glucocorticoid dose, and were associated with low mortality,” the authors wrote.

SOURCE:

The study was led by Shanmuganathan Chandrakasan, MD, Children’s Healthcare of Atlanta, Emory University, Atlanta, Georgia, and was published online on September 8, 2024, in Arthritis & Rheumatology.

LIMITATIONS:

Chart data required for analyses were missing or incomplete in this retrospective study. The sample size of patients with rheumatologic disease–associated HLH was small. No safety data were collected.

DISCLOSURES:

The study was supported by Sobi, which markets emapalumab. Some authors declared receiving grants, consulting fees, or payments or having financial and nonfinancial interests and other ties with several pharmaceutical companies, including Sobi.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Walking App Works Only if Users Think It Does

Article Type
Changed
Fri, 09/27/2024 - 11:37

 

TOPLINE:

Apps designed to increase physical activity may increase daily step counts for users who believe the intervention is beneficial, but not for those who do not; an app’s effectiveness depends notably on how useful users perceive it to be.

METHODOLOGY:

  • Researchers conducted a randomized controlled trial from February 2021 to May 2022 to evaluate the effectiveness of SNapp, an adaptive app designed to promote walking through tailored coaching content.
  • Overall, 176 adults (76% women; mean age, 56 years) were randomly assigned to use the app plus tailored coaching content (SNapp group; n = 87) or only the step counter app (control group; n = 89).
  • SNapp’s coaching content provided personalized feedback on step counts and recommendations for increasing walking, while also considering individual preferences for behavior change techniques.
  • The primary outcome was the daily step count recorded by the app, which was updated on an hourly basis in a database over an intervention period of 12 months.
  • Perceptions of ease of use and usefulness were assessed to determine their effect on the effectiveness of the app.

TAKEAWAY:

  • Participants in the SNapp group used the app on nearly 30% of days; app use in the control group was almost identical.
  • The SNapp intervention did not significantly affect the step counts on average over time (B, −202.30; 95% CI, −889.7 to 485.1).
  • Perceived usefulness significantly moderated the intervention effect of SNapp (B, 344.38; 90% CI, 40.4-648.3), but perceived ease of use did not (B, 38.60; 90% CI, −276.5 to 353.7); a sketch of this type of interaction test follows the list.
  • Among participants with a high perceived usefulness, the SNapp group had a higher median step count than the control group (median difference, 1260 steps; 90% CI, −3243.7 to 1298.2); however, this difference was not statistically significant.
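
A moderation effect like the one reported for perceived usefulness is the coefficient on a treatment-by-moderator interaction term. The sketch below simulates hypothetical data and fits such a model; the variable names and values are illustrative, and the paper's actual analysis of repeated step counts over 12 months is more complex.

# Minimal sketch of a moderation (interaction) test on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 176  # same order as the trial's sample size
df = pd.DataFrame({
    "treat": rng.integers(0, 2, n),     # 1 = SNapp, 0 = control
    "usefulness": rng.normal(0, 1, n),  # centered perceived-usefulness score
})
# Simulate: treatment raises steps only when perceived usefulness is high
df["steps"] = 6000 + 350 * df["treat"] * df["usefulness"] + rng.normal(0, 1500, n)

model = smf.ols("steps ~ treat * usefulness", data=df).fit()
print(model.params["treat:usefulness"])                    # moderation coefficient (cf. B = 344.38)
print(model.conf_int(alpha=0.10).loc["treat:usefulness"])  # 90% CI, as reported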

IN PRACTICE:

“This study shows that perceived usefulness is also an important factor influencing behavioral effects. Hence, it is essential for apps to be perceived as useful to effectively improve users’ activity levels,” the authors wrote.

SOURCE:

The study was led by Anne L. Vos, PhD, of the Amsterdam School of Communication Research at the University of Amsterdam, in the Netherlands. It was published online on September 16, 2024, in the American Journal of Preventive Medicine.

LIMITATIONS:

The study’s recruitment strategy primarily attracted highly educated individuals, limiting generalizability. The app’s accuracy in measuring steps could be improved, as it sometimes underestimated step counts. Researchers also were unable to verify whether participants read the coaching messages.

DISCLOSURES:

The study was supported by grants from the Dutch Heart Foundation and the Netherlands Organisation for Health Research and Development. No relevant conflicts of interest were disclosed by the authors.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.


Bariatric Surgery and Weight Loss Make Brain Say Meh to Sweets

Article Type
Changed
Thu, 09/19/2024 - 14:17

 

TOPLINE:

A preference for less sweet beverages after bariatric surgery and weight loss appears to stem from a lower brain reward response to sweet taste without affecting the sensory regions.

METHODOLOGY:

  • Previous studies have suggested that individuals undergoing bariatric surgery show reduced preference for sweet-tasting food post-surgery, but the mechanisms behind these changes remain unclear.
  • This observational cohort study examined the neural processing of sweet taste in the brain’s reward regions before and after bariatric surgery in 24 women with obesity (mean body mass index [BMI], 47) and in 21 control participants ranging from normal weight to overweight (mean BMI, 23.5).
  • Participants (mean age, about 43 years; 75%-81% White) underwent sucrose taste testing and functional MRI (fMRI) to compare the brain’s responses to sucrose solutions of 0.10 M and 0.40 M (akin to sugar-sweetened beverages such as Coca-Cola at ~0.32 M and Mountain Dew at ~0.35 M) versus water; the molarity arithmetic is sketched after this list.
  • In the bariatric surgery group, participants underwent fMRI 1-117 days before surgery, and 21 participants who lost about 20% of their weight after the surgery underwent a follow-up fMRI roughly 3-4 months later.
  • The researchers analyzed the brain’s reward response using a composite activation of several reward system regions (the ventral tegmental area, ventral striatum, and orbitofrontal cortex) and of sensory regions (the primary somatosensory cortex and primary insula taste cortex).
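
As a back-of-envelope check on the molarity figures above: molarity is grams of sugar per liter divided by the molar mass of sucrose (342.3 g/mol). The 39 g per 355 mL input below is a typical US cola label value, used here as an assumption rather than a figure from the paper; actual colas are sweetened with high-fructose corn syrup, so this is a sucrose-equivalent approximation.

# Back-of-envelope check of the ~0.32 M figure: grams of sugar per liter
# divided by the molar mass of sucrose. The 39 g / 355 mL value is assumed.
SUCROSE_MW = 342.30            # g/mol
grams_per_liter = 39 / 0.355   # about 110 g/L

molarity = grams_per_liter / SUCROSE_MW
print(f"{molarity:.2f} M")     # about 0.32 M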

TAKEAWAY:

  • The perceived intensity of sweetness was comparable between the control group and the bariatric surgery group both before and after surgery.
  • In the bariatric surgery group, the average preferred sweet concentration decreased from 0.52 M before surgery to 0.29 M after surgery (P = .008).
  • The fMRI analysis indicated that, before surgery, women in the bariatric surgery group showed a trend toward a higher reward response to 0.4 M sucrose than control participants.
  • The activation of the reward region in response to 0.4 M sucrose (but not 0.1 M) declined in the bariatric surgery group after surgery (P = .042).

IN PRACTICE:

“Our findings suggest that both the brain reward response to and subjective liking of an innately desirable taste decline following bariatric surgery,” the authors wrote.

SOURCE:

This study was led by Jonathan Alessi, Indiana University School of Medicine, Indianapolis, and published online in Obesity.

LIMITATIONS:

The study sample size was relatively small, and the duration of follow-up was short, with recruitment curtailed by the COVID-19 pandemic. The study did not assess the consumption of sugar or sweetened food, which could have provided further insight into changes in dietary behavior after surgery. It also included only women, and the findings might have differed had men been recruited.

DISCLOSURES:

This study was funded by the American Diabetes Association, Indiana Clinical and Translational Sciences Institute, and National Institute on Alcohol Abuse and Alcoholism. Three authors reported financial relationships with some pharmaceutical companies outside of this study.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
