Hypothyroidism Treatment Does Not Affect Cognitive Decline in Menopausal Women

TOPLINE:

Women with hypothyroidism treated with levothyroxine show no significant cognitive decline across the menopausal transition compared with those without thyroid disease.

METHODOLOGY:

  • Levothyroxine, the primary treatment for hypothyroidism, has been linked to perceived cognitive deficits, yet it is unclear whether this is due to the underlying condition being inadequately treated or other factors.
  • Using data collected from the Study of Women’s Health Across the Nation, which encompasses five ethnic/racial groups from seven centers across the United States, researchers compared cognitive function over time between women with hypothyroidism treated with levothyroxine and those without thyroid disease.
  • Participants underwent cognitive testing across three domains — processing speed, working memory, and episodic memory — which were assessed over a mean follow-up of 13 years (a modeling sketch follows this list).
  • Further analyses assessed the impact of abnormal levels of thyroid-stimulating hormone on cognitive outcomes.
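
The summary does not name the statistical model used to compare cognitive trajectories between the groups. The sketch below shows one common approach to this kind of repeated-measures comparison, a linear mixed-effects model with a group-by-time interaction; it is illustrative only, and the variable names (processing_speed, years, levothyroxine, participant_id) and covariates are hypothetical, not the authors' specification.

```python
# Illustrative sketch only -- not the study's analysis code. Variable names and
# covariates are hypothetical; the published model may differ.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format file: one row per participant per visit.
df = pd.read_csv("swan_cognition.csv")

# The years:levothyroxine coefficient estimates how much the annual rate of
# change in processing speed differs between levothyroxine-treated women and
# those without thyroid disease; a random intercept accounts for repeated
# measures within each participant.
model = smf.mixedlm(
    "processing_speed ~ years * levothyroxine + baseline_age + education",
    data=df,
    groups=df["participant_id"],
).fit()
print(model.summary())
```

In a model of this form, an interaction coefficient close to zero is what "no significant difference in cognitive decline between the groups" would look like.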

TAKEAWAY:

  • Of 2033 women included, 227 (mean age, 49.8 years) had levothyroxine-treated hypothyroidism and 1806 (mean age, 50.0 years) did not have thyroid disease; the proportion of women with premenopausal or early perimenopausal status at baseline was higher in the hypothyroidism group (54.2% vs 49.8%; P = .010).
  • At baseline, levothyroxine-treated women had higher scores for processing speed (mean score, 56.5 vs 54.4; P = .006) and working memory (mean score, 6.8 vs 6.4; P = .018) than those without thyroid disease; however, no difference in episodic memory was observed between the groups.
  • Over the study period, there was no significant difference in cognitive decline between the groups.
  • There was no significant effect of levothyroxine-treated hypothyroidism on working memory or episodic memory, although an annual decline in processing speed was observed (P < .001).
  • Sensitivity analyses determined that abnormal levels of thyroid-stimulating hormone did not affect cognitive outcomes in women with hypothyroidism.

IN PRACTICE:

When cognitive decline is observed in these patients, the authors advised that “clinicians should resist anchoring on inadequate treatment of hypothyroidism as the cause of these symptoms and may investigate other disease processes (eg, iron deficiency, B12 deficiency, sleep apnea, celiac disease).”

SOURCE:

The study, led by Matthew D. Ettleson, Section of Endocrinology, Diabetes, and Metabolism, University of Chicago, was published online in Thyroid.

LIMITATIONS:

The cognitive assessments in the study were not designed to provide a thorough evaluation of all aspects of cognitive function. The study may not have been adequately powered to detect small effects of levothyroxine-treated hypothyroidism on cognitive outcomes. The higher levels of education attained by the study population may have acted as a protective factor against cognitive decline, potentially biasing the results.

DISCLOSURES:

The Study of Women’s Health Across the Nation was supported by grants from the National Institutes of Health (NIH), DHHS, through the National Institute on Aging, the National Institute of Nursing Research, and the NIH Office of Research on Women’s Health. The authors declared no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Race Adjustments in Algorithms Boost CRC Risk Prediction

TOPLINE:

Accounting for racial disparities, including in the quality of family history data, enhanced the predictive performance of a colorectal cancer (CRC) risk prediction model.

METHODOLOGY:

  • The medical community is reevaluating the use of race adjustments in clinical algorithms due to concerns about the exacerbation of health disparities, especially as reported family history data are known to vary by race.
  • To understand how adjusting for race affects the accuracy of CRC prediction algorithms, researchers studied data from community health centers across 12 states as part of the Southern Community Cohort Study.
  • Researchers compared two screening algorithms that modeled 10-year CRC risk: a race-blind algorithm and a race-adjusted algorithm that included Black race as a main effect and an interaction between race and family history (a minimal modeling sketch appears after this list).
  • The primary outcome was the development of CRC within 10 years of enrollment, assessed using data collected from surveys at enrollment and follow-ups, cancer registry data, and National Death Index reports.
  • The researchers compared the algorithms’ predictive performance using such measures as area under the receiver operating characteristic curve (AUC) and calibration and also assessed how adjusting for race changed the proportion of Black participants identified as being at high risk for CRC.
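
The race-blind versus race-adjusted comparison described above can be illustrated with a minimal sketch. This is not the authors' code: the outcome and column names (crc_10yr, family_history, black), the covariate set, and the top-decile high-risk cutoff are assumptions made for illustration.

```python
# Illustrative sketch only -- not the published algorithm. Column names,
# covariates, and the high-risk cutoff are assumptions.
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.metrics import roc_auc_score

df = pd.read_csv("scc_cohort.csv")  # hypothetical analysis file

# Race-blind model: 10-year CRC risk from age and reported family history only.
blind = smf.logit("crc_10yr ~ age + family_history", data=df).fit()

# Race-adjusted model: adds Black race as a main effect plus a race x
# family-history interaction, allowing reported family history to carry less
# weight where it is less reliably captured.
adjusted = smf.logit(
    "crc_10yr ~ age + family_history + black + black:family_history", data=df
).fit()

for name, model in [("race-blind", blind), ("race-adjusted", adjusted)]:
    print(name, "AUC:", round(roc_auc_score(df["crc_10yr"], model.predict(df)), 3))

# Share of Black participants among the predicted high-risk group
# (top decile of predicted risk; an arbitrary illustrative threshold).
risk = adjusted.predict(df)
print("Black share of high-risk group:",
      round(df.loc[risk >= risk.quantile(0.9), "black"].mean(), 3))
```

The interaction term is the key design choice: it lets the model discount reported family history for Black participants, for whom family history data were less complete in this cohort.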

TAKEAWAY:

  • The study sample included 77,836 adults aged 40-74 years with no history of CRC at baseline.
  • Despite having higher cancer rates, Black participants were more likely to report unknown family history (odds ratio [OR], 1.69; P < .001) and less likely to report known positive family history (OR, 0.68; P < .001) than White participants.
  • The interaction term between race and family history was 0.56, indicating that reported family history was less predictive of CRC risk in Black participants than in White participants (P = .010).
  • Compared with the race-blind algorithm, the race-adjusted algorithm increased the fraction of Black participants among the predicted high-risk group (66.1% vs 74.4%; P < .001), potentially enhancing access to screening.
  • The race-adjusted algorithm improved the goodness of fit (P < .001) and showed a small improvement in AUC among Black participants (0.611 vs 0.608; P = .006).

IN PRACTICE:

“Our analysis found that removing race from colorectal screening predictors could reduce the number of Black patients recommended for screening, which would work against efforts to reduce disparities in colorectal cancer screening and outcomes,” the authors wrote.

SOURCE:

The study, led by Anna Zink, PhD, the University of Chicago Booth School of Business, Chicago, was published online in Proceedings of the National Academy of Sciences of the USA.

LIMITATIONS:

The study did not report any limitations.

DISCLOSURES:

The study was supported by the National Cancer Institute of the National Institutes of Health. The authors declared no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Does Screening for CKD Benefit Older Adults?

TOPLINE:

Short-term mortality, hospitalizations, and cardiovascular disease (CVD) events are not significantly different between patients diagnosed with chronic kidney disease (CKD) during routine medical care and those diagnosed through screening. The study also found that older age, male sex, and a diagnosis of heart failure are associated with an increased risk for mortality in patients with CKD.

METHODOLOGY:

  • Researchers conducted a prospective cohort study involving 892 primary care patients aged 60 years or older with CKD from the Oxford Renal Cohort Study in England.
  • Participants were categorized into those with existing CKD (n = 257; median age, 75 years), screen-detected CKD (n = 185; median age, roughly 73 years), or temporary reduction in kidney function (n = 450; median age, roughly 73 years).
  • The primary outcome was a composite of all-cause mortality, hospitalization, CVD, or end-stage kidney disease.
  • The secondary outcomes were the individual components of the composite primary outcome and factors associated with mortality in those with CKD.

TAKEAWAY:

  • The composite outcome was not significantly different between patients with preexisting CKD and those whose CKD was identified through screening (adjusted hazard ratio [aHR], 0.94; 95% CI, 0.67-1.33); a modeling sketch follows this list.
  • Risks for death, hospitalization, CVD, or end-stage kidney disease were not significantly different between the two groups.
  • Older age (aHR per year, 1.10; 95% CI, 1.06-1.15), male sex (aHR, 2.31; 95% CI, 1.26-4.24), and heart failure (aHR, 5.18; 95% CI, 2.45-10.97) were associated with higher risks for death.
  • No cases of end-stage kidney disease were reported during the study period.
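
The adjusted hazard ratios above are the kind of estimates a Cox proportional hazards model produces. The study summary does not name the exact model, so the sketch below is a minimal illustration under that assumption, with hypothetical column names (time_years, composite_event, screen_detected, and so on).

```python
# Illustrative sketch only -- not the study's analysis code. The Cox model,
# column names, and covariate set are assumptions.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("ckd_cohort.csv")  # hypothetical analysis file

# lifelines treats every column other than the duration and event columns
# as a covariate, so select only the predictors of interest.
cols = ["time_years", "composite_event", "screen_detected", "age", "male", "heart_failure"]

cph = CoxPHFitter()
cph.fit(df[cols], duration_col="time_years", event_col="composite_event")

# The exp(coef) column gives adjusted hazard ratios, e.g. screen-detected vs
# routinely diagnosed CKD for the composite outcome.
cph.print_summary()
```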

IN PRACTICE:

“Our findings show that the risk of short-term mortality, hospitalization, and CVD is comparable in people diagnosed through screening to those diagnosed routinely in primary care. This suggests that screening older people for CKD may be of value to increase detection and enable disease-modifying treatment to be initiated at an earlier stage,” the study authors wrote.

SOURCE:

The study was led by Anna K. Forbes, MBChB, and José M. Ordóñez-Mena, PhD, of the Nuffield Department of Primary Care Health Sciences at the University of Oxford, England. It was published online in BJGP Open.

LIMITATIONS:

The study had a relatively short follow-up period and a cohort primarily consisting of individuals with early-stage CKD, which may have limited the identification of end-stage cases. The study population predominantly consisted of White individuals, limiting the generalizability of the results to more diverse populations. Misclassification bias may have occurred because of changes in kidney function over time.

DISCLOSURES:

The data linkage provided by NHS Digital was supported by funding from the NIHR School of Primary Care Research. Some authors were partly supported by the NIHR Oxford Biomedical Research Centre and NIHR Oxford Thames Valley Applied Research Collaborative. One author reported receiving financial support for attending a conference, while another received consulting fees from various pharmaceutical companies. Another author reported receiving a grant from the Wellcome Trust and payment while working as a presenter for NB Medical and is an unpaid trustee of some charities.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

The Uneven Surge in Diabetes in the United States

TOPLINE:

The prevalence of diabetes in the United States increased by 18.6% from 2012 to 2022, with notably higher rates among racial and ethnic minorities, men, older adults, and socioeconomically disadvantaged populations.

METHODOLOGY:

  • Over 37 million people in the United States have diabetes, and its prevalence is only expected to increase in the coming years, making identifying high-risk demographic groups particularly crucial.
  • To assess recent national trends and disparities in diabetes prevalence among US adults, researchers conducted an observational study using data from the Behavioral Risk Factor Surveillance System and included 5,312,827 observations from 2012 to 2022.
  • Diabetes was defined on the basis of a previous self-reported diagnosis using standardized questionnaires.
  • Age, sex, race, education, physical activity, income, and body mass index were assessed as risk indicators for a diabetes diagnosis.
  • Age-standardized diabetes prevalence and the association between risk factors and diabetes were assessed both overall and across various sociodemographic groups (a sketch of direct age standardization follows this list).
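
Age standardization, mentioned above, weights the age-specific prevalences by a fixed standard population so that trends over time are not driven simply by the population getting older. The sketch below is illustrative only; the age bands, standard-population weights, and column names are assumptions, not the study's specification.

```python
# Illustrative sketch of direct age standardization -- not the study's code.
# Age bands, weights, and column names are hypothetical.
import pandas as pd

df = pd.read_csv("brfss_subset.csv")  # hypothetical file: one row per respondent

# Crude, age-specific prevalence of self-reported diabetes.
by_age = df.groupby("age_band")["diabetes"].mean()

# Hypothetical standard-population weights (must sum to 1 and use the same bands).
std_weights = pd.Series({"18-44": 0.45, "45-64": 0.34, "65+": 0.21})

# Age-standardized prevalence: weighted average of the age-specific rates.
standardized = (by_age * std_weights).sum()
print(f"Age-standardized prevalence: {standardized:.1%}")
```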

TAKEAWAY:

  • The overall prevalence of diabetes increased by 18.6% (P < .001) from 2012 to 2022, with the highest prevalence observed among non-Hispanic Black individuals (15.8%) and people aged ≥ 65 years (23.86%).
  • The likelihood of being diagnosed with diabetes was 1.15 times higher in men than in women, 5.16 times higher in adults aged 45-64 years than in those aged 18-24 years, and 3.64 times higher in those with obesity than in those with normal weight.
  • The risk for being diagnosed with diabetes was 1.60 times higher among Hispanic individuals, 1.67 times higher among non-Hispanic Asian individuals, and 2.10 times higher among non-Hispanic Black individuals than among non-Hispanic White individuals.
  • Individuals with a college education and higher income level were 24% and 41% less likely, respectively, to be diagnosed with diabetes.

IN PRACTICE:

“Improving access to quality care, implementing diabetes prevention programs focusing on high-risk groups, and addressing social determinants through multilevel interventions may help curb the diabetes epidemic in the United States,” the authors wrote.

SOURCE:

The study, led by Sulakshan Neupane, MS, Department of Agricultural and Applied Economics, University of Georgia, Athens, Georgia, was published online in Diabetes, Obesity, and Metabolism.

LIMITATIONS:

The self-reported diagnoses and lack of clinical data may have introduced bias. Diabetes prevalence could not be analyzed in South-East Asian and South Asian populations owing to limitations in the data collection process.

DISCLOSURES:

The study was not supported by any funding, and no potential author disclosures or conflicts were identified.
 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Healthy Lifestyle Mitigates Brain Aging in Diabetes

TOPLINE:

Diabetes and prediabetes are associated with accelerated brain aging with brain age gaps of 2.29 and 0.50 years, respectively. This association is more pronounced in men and those with poor cardiometabolic health but may be mitigated by a healthy lifestyle.

METHODOLOGY:

  • Diabetes is a known risk factor for cognitive impairment, dementia, and global brain atrophy, but conflicting results have been reported for prediabetes, and it is unknown whether a healthy lifestyle can counteract its negative impact.
  • Researchers examined the cross-sectional and longitudinal relationship between hyperglycemia and brain aging, as well as the potential mitigating effect of a healthy lifestyle in 31,229 dementia-free adults (mean age, 54.8 years; 53% women) from the UK Biobank, including 13,518 participants with prediabetes and 1149 with diabetes.
  • The glycemic status of the participants was determined by their medical history, medication use, and A1c levels.
  • The brain age gap was calculated as the difference between chronologic age and brain age, the latter estimated from several hundred brain MRI phenotypes across six imaging modalities and modeled in a subset of healthy individuals.
  • The roles of sex, cardiometabolic risk factors, and a healthy lifestyle in the association with brain age were also explored, with a healthy lifestyle defined as never smoking, no or light to moderate alcohol consumption, and high physical activity (a modeling sketch follows this list).
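
The brain age gap and the lifestyle interaction described above can be written down compactly: the gap is estimated brain age minus chronological age, which is then regressed on glycemic status, lifestyle, and their interaction. The sketch below is illustrative only; the ordinary-least-squares form, variable names, and covariates are assumptions rather than the authors' specification.

```python
# Illustrative sketch only -- not the study's analysis code. Model form,
# variable names, and covariates are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ukb_subset.csv")  # hypothetical analysis file

# Brain age gap: estimated brain age minus chronological age
# (positive values indicate an "older-looking" brain).
df["brain_age_gap"] = df["estimated_brain_age"] - df["chronological_age"]

# glycemic_status: normoglycemia / prediabetes / diabetes; healthy_lifestyle: 0/1.
# The * expands to main effects plus the glycemic status x lifestyle interaction,
# which is the term that tests whether lifestyle attenuates the association.
model = smf.ols(
    "brain_age_gap ~ C(glycemic_status) * healthy_lifestyle + age + sex",
    data=df,
).fit()
print(model.summary())
```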

TAKEAWAY:

  • Prediabetes and diabetes were associated with a higher brain age gap than normoglycemia (beta-coefficient, 0.22 and 2.01; 95% CI, 0.10-0.34 and 1.70-2.32, respectively), and the association for diabetes was more pronounced in men than in women and in those with a higher burden of cardiometabolic risk factors.
  • The brain ages of those with prediabetes and diabetes were 0.50 years and 2.29 years older on average than their respective chronologic ages.
  • In an exploratory longitudinal analysis of the 2414 participants with two brain MRI scans, diabetes was linked to a 0.27-year annual increase in the brain age gap, and higher A1c, but not prediabetes, was associated with a significant increase in brain age gap.
  • A healthy lifestyle attenuated the association between diabetes and a higher brain age gap (P = .003), reducing it by 1.68 years, and there was a significant interaction between glycemic status and lifestyle.

IN PRACTICE:

“Our findings highlight diabetes and prediabetes as ideal targets for lifestyle-based interventions to promote brain health,” the authors wrote.

SOURCE:

This study, led by Abigail Dove, Aging Research Center, Department of Neurobiology, Care Sciences and Society, Karolinska Institutet, Stockholm, Sweden, was published online in Diabetes Care.

LIMITATIONS:

The generalizability of the findings was limited due to a healthy volunteer bias in the UK Biobank. A high proportion of missing data prevented the inclusion of diet in the healthy lifestyle construct. Reverse causality may be possible as an older brain may contribute to the development of prediabetes by making it more difficult to manage medical conditions and adhere to a healthy lifestyle. A1c levels were measured only at baseline, preventing the assessment of changes in glycemic control over time.

DISCLOSURES:

The authors reported receiving funding from the Swedish Research Council; Swedish Research Council for Health, Working Life and Welfare; Karolinska Institutet Board of Research; Riksbankens Jubileumsfond; Marianne and Marcus Wallenberg Foundation; Alzheimerfonden; and Demensfonden. They declared no relevant conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Muscle Relaxants for Chronic Pain: Where Is the Greatest Evidence?

TOPLINE:

The long-term use of muscle relaxants may benefit patients with painful spasms or cramps and neck pain, according to a systematic review of clinical studies, but these drugs do not appear to be beneficial for low back pain, fibromyalgia, or headaches and can have adverse effects such as sedation and dry mouth.

METHODOLOGY:

  • Researchers conducted a systematic review to evaluate the effectiveness of long-term use (≥ 4 weeks) of muscle relaxants for chronic pain lasting ≥ 3 months.
  • They identified 30 randomized clinical trials involving 1314 patients and 14 cohort studies involving 1168 patients, grouped according to the categories of low back pain, fibromyalgia, painful cramps or spasticity, headaches, and other syndromes.
  • Baclofen, tizanidine, cyclobenzaprine, eperisone, quinine, carisoprodol, orphenadrine, chlormezanone, and methocarbamol were the muscle relaxants assessed in comparison with placebo, other treatments, or untreated individuals.

TAKEAWAY:

  • The long-term use of muscle relaxants reduced pain intensity in those with painful spasms or cramps and neck pain. Baclofen, orphenadrine, carisoprodol, and methocarbamol improved cramp frequency, while the use of eperisone and chlormezanone improved neck pain and enhanced the quality of sleep, respectively, in those with neck osteoarthritis.
  • While some studies suggested that muscle relaxants reduced pain intensity in those with back pain and fibromyalgia, between-group differences were not observed. The benefits seen with some medications diminished after their discontinuation.
  • Although tizanidine improved pain severity in headaches, 25% of participants dropped out owing to adverse effects. Some muscle relaxants demonstrated pain relief, while others did not.
  • The most common adverse effects of muscle relaxants were somnolence and dry mouth. Other adverse events included vomiting, diarrhea, nausea, weakness, and constipation.

IN PRACTICE:

“For patients already prescribed long-term SMRs [skeletal muscle relaxants], interventions are needed to assist clinicians to engage in shared decision-making with patients about deprescribing SMRs. This may be particularly true for older patients for whom risks of adverse events may be greater,” the authors wrote. “Clinicians should be vigilant for adverse effects and consider deprescribing if pain-related goals are not met.”

SOURCE:

The study, led by Benjamin J. Oldfield, MD, MHS, Yale School of Medicine, New Haven, Connecticut, was published online on September 19, 2024, in JAMA Network Open.

LIMITATIONS:

This systematic review was limited to publications written in English, Spanish, or Italian, potentially excluding studies from other regions. Variations in clinical sites, definitions of pain syndromes, medications, and durations of therapy precluded meta-analyses. Only quantitative studies were included, excluding valuable insights into patient experiences offered by qualitative studies.

DISCLOSURES:

The study was supported by the National Institute on Drug Abuse. The authors declared no conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Hypnosis May Offer Relief During Sharp Debridement of Skin Ulcers

Article Type
Changed
Mon, 09/23/2024 - 11:39

 

TOPLINE:

Hypnosis reduced pain during sharp debridement of skin ulcers in a small series of patients with immune-mediated inflammatory diseases, with most reporting decreased pain awareness during the procedure and some reporting pain relief lasting 2-3 days afterward.

METHODOLOGY:

  • Researchers reported their anecdotal experience of using hypnosis for pain management during sharp debridement of skin ulcers in patients with immune-mediated inflammatory diseases.
  • They studied 16 participants (14 women; mean age, 56 years; 14 with systemic sclerosis or morphea) with recurrent skin ulcerations requiring sharp debridement, who presented to a wound care clinic at the Leeds Teaching Hospitals NHS Trust, Leeds, United Kingdom. The participants had negative experiences with pharmacologic pain management.
  • Participants consented to hypnosis during debridement as the only mode of analgesia, conducted by the same hypnosis-trained, experienced healthcare professional in charge of their ulcer care.
  • Ulcer pain scores were recorded using a numerical rating pain scale before and immediately after debridement, with a score of 0 indicating no pain and 10 indicating worst pain.

TAKEAWAY:

  • Hypnosis reduced the median pre-debridement ulcer pain score from 8 (interquartile range [IQR], 7-10) to 0.5 (IQR, 0-2) immediately after the procedure (a short sketch of how such a median and IQR are computed follows this list).
  • Of the 16 participants, 14 reported being aware of the procedure but not feeling pain, with only two of them experiencing a brief spike in pain.
  • The remaining two participants reported reduced awareness and being pain-free during the procedure.
  • Five participants reported a lasting decrease in pain perception for 2-3 days after the procedure.
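
As a rough illustration of how a median with interquartile range summarizes 0-10 numerical rating scale scores, the sketch below uses hypothetical ratings rather than the study's data, and the quartile method is an assumption:

    import statistics

    # Hypothetical 0-10 numerical rating scale scores; not the study's data.
    pre_debridement = [8, 7, 9, 10, 8, 7, 8, 9]
    post_debridement = [0, 1, 0, 2, 0, 1, 0, 2]

    def median_iqr(scores):
        # statistics.quantiles defaults to the "exclusive" method; the study's
        # quartile convention is not reported, so this choice is an assumption.
        q1, median, q3 = statistics.quantiles(scores, n=4)
        return median, (q1, q3)

    for label, scores in (("pre-debridement", pre_debridement),
                          ("post-debridement", post_debridement)):
        med, (q1, q3) = median_iqr(scores)
        print(f"{label}: median {med}, IQR {q1}-{q3}")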

IN PRACTICE:

“These preliminary data underscore the potential for the integration of hypnosis into the management of intervention-related pain in clinical care,” the authors wrote.

SOURCE:

The study was led by Begonya Alcacer-Pitarch, PhD, Leeds Institute of Rheumatic and Musculoskeletal Medicine, the University of Leeds, and Chapel Allerton Hospital in Leeds, United Kingdom. It was published as a correspondence on September 10, 2024, in The Lancet Rheumatology.

LIMITATIONS:

The small sample size may limit the generalizability of the findings. The methods used for data collection were not standardized, and the way participants were selected may have introduced selection bias.

DISCLOSURES:

The study did not have a funding source. The authors declared no relevant conflicts of interest.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Over One Third of Patients Develop Exocrine Pancreatic Insufficiency After Acute Pancreatitis

Article Type
Changed
Thu, 09/19/2024 - 11:45

 

TOPLINE:

Over one third of patients with acute pancreatitis develop exocrine pancreatic insufficiency (EPI) at 12 months, with the key predictors being idiopathic etiology, moderately severe or severe disease, and preexisting diabetes.

METHODOLOGY:

  • EPI has traditionally been associated with chronic pancreatitis, but its prevalence and natural history following acute pancreatitis are less well defined.
  • Researchers conducted a prospective cohort study including 85 hospital inpatients (mean age, 54.7 years; 48.2% women) diagnosed with acute pancreatitis from three tertiary institutions in the United States.
  • Severity of acute pancreatitis was classified according to the Revised Atlanta Criteria.
  • EPI was assessed by measuring fecal elastase 1 (FE-1) levels from stool samples at baseline and at 3 and 12 months after enrollment. EPI was defined by FE-1 levels ≤ 200 μg/g stool, with mild and severe EPI categorized by FE-1 levels of 101-200 μg/g stool and ≤ 100 μg/g stool, respectively (a short classification sketch follows this list).
  • The prevalence of EPI was assessed at 3 and 12 months after acute pancreatitis. The study also identified the predictors of EPI, including the role of etiology and severity of acute pancreatitis and preexisting diabetes.
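
The FE-1 cutoffs above amount to a simple threshold rule. The following is a minimal sketch of that rule only, not the study's analysis code, and the example values are hypothetical:

    def classify_epi(fe1_ug_per_g: float) -> str:
        """Classify EPI status from a fecal elastase-1 value (ug/g stool)."""
        if fe1_ug_per_g <= 100:
            return "severe EPI"
        if fe1_ug_per_g <= 200:
            return "mild EPI"
        return "no EPI"

    # Hypothetical values, for illustration only
    for value in (85, 150, 320):
        print(value, "->", classify_epi(value))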

TAKEAWAY:

  • EPI was present in 34.1% of participants at 12 months after an acute pancreatitis attack, with 22.4% having severe EPI.
  • Even 12.8% of those with an index mild attack of acute pancreatitis had severe EPI at 12 months.
  • The odds of developing EPI at 12 months increased fourfold with idiopathic etiology of acute pancreatitis (P = .0094).
  • The odds of developing EPI increased over threefold with moderately severe or severe acute pancreatitis (P = .025) and preexisting diabetes (P = .031).
  • The prevalence of severe EPI after acute pancreatitis decreased from 29% at baseline to 26% at 3 months and 22% at 12 months.

IN PRACTICE:

“While specific subpopulations may have identified clinical risk factors, it will remain important to have a low threshold for testing and treatment as there remains much to learn about mechanisms leading to EPI after [acute pancreatitis],” the authors wrote.

SOURCE:

This study, led by Anna Evans Phillips, MD, MS, University of Pittsburgh School of Medicine in Pennsylvania, was published online in eClinicalMedicine.

LIMITATIONS:

Participants were often transferred from other hospitals with differing management techniques, which may have introduced selection bias. The use of FE-1 levels may have had diagnostic limitations. The study did not assess the impact of pancreatic enzyme replacement therapy on recovery from EPI. Some patients with early chronic pancreatitis may have been included owing to the lack of diagnostic clarity.

DISCLOSURES:

The study was supported by an investigator-initiated research grant from AbbVie. Some authors received funding for research from AbbVie. One of the authors declared serving as a consultant and scientific advisory board member and being an equity holder in biotechnology, biopharmaceutical, and diagnostics companies. Another author declared support from the Cystic Fibrosis Foundation and the American Society for Parenteral and Enteral Nutrition.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.

Belimumab Hits Newer Remission, Low Disease Activity Metrics

Article Type
Changed
Tue, 09/17/2024 - 11:29

 

TOPLINE:

A greater proportion of patients with active systemic lupus erythematosus (SLE) treated with belimumab plus standard therapy achieved the newest definitions for remission and low disease activity compared with those treated with placebo plus standard therapy, with benefits observed as early as week 28 for remission and week 8 for disease activity, according to pooled results from five clinical trials.

METHODOLOGY:

  • Researchers conducted an integrated post hoc analysis of five randomized phase 3 clinical trials to evaluate the attainment of remission and low disease activity in adult patients with active, autoantibody-positive SLE.
  • A total of 3086 patients (median age, 36 years; 94% women) were randomly assigned to receive standard therapy plus either belimumab (intravenous 10 mg/kg monthly or subcutaneous 200 mg weekly; n = 1869) or placebo (n = 1217).
  • The proportion of patients who achieved definitions of remission in SLE (DORIS) remission and lupus low disease activity state (LLDAS) by visit up to week 52 was assessed.
  • The analysis also evaluated the time taken to achieve sustained (at least two consecutive visits) and maintained (up to week 52) DORIS remission and LLDAS.

TAKEAWAY:

  • At week 52, a higher proportion of patients receiving belimumab vs placebo achieved DORIS remission (8% vs 6%; risk ratio [RR], 1.51; P = .0055) and LLDAS (17% vs 10%; RR, 1.74; P < .0001); a brief note on how these risk ratios are computed follows this list.
  • The earliest observed significant benefit of belimumab over placebo in patients with a higher baseline disease activity was at week 20 for DORIS remission (RR, 2.09; P = .043) and at week 16 for LLDAS (RR, 1.46; P = .034), with both maintained through week 52.
  • The proportion of patients who attained DORIS remission and LLDAS as early as week 28 and week 8, respectively, was higher in the belimumab group than in the placebo group, with both maintained through week 52.
  • Patients on belimumab were more likely to have a sustained and maintained DORIS remission (hazard ratio [HR], 1.53; P = .013) and LLDAS (HR, 1.79; P < .0001) at any timepoint.
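
A risk ratio here is the proportion of patients achieving the outcome on belimumab divided by the proportion on placebo. The sketch below computes crude ratios from the rounded percentages quoted in this summary, purely for illustration; the published RRs are model-based estimates and will not match these crude figures exactly:

    # Crude risk ratios from the rounded percentages quoted above; the published
    # RRs (1.51 and 1.74) are model-based estimates and differ from these.
    outcomes = {
        "DORIS remission at week 52": (0.08, 0.06),  # belimumab, placebo
        "LLDAS at week 52": (0.17, 0.10),
    }

    for name, (belimumab, placebo) in outcomes.items():
        print(f"{name}: crude RR = {belimumab / placebo:.2f}")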

IN PRACTICE:

“The data clearly support that belimumab is a valuable addition toward accomplishing and maintaining remission or LLDAS,” George Bertsias, MD, PhD, University of Crete Medical School, Heraklion, Greece, and Jinoos Yazdany, MD, University of California San Francisco, wrote in a related comment.

SOURCE:

This study, led by Ioannis Parodis, MD, Karolinska Institutet and Karolinska University Hospital, Stockholm, Sweden, was published online on August 26, 2024, in The Lancet Rheumatology.

LIMITATIONS: 

Due to the post hoc nature of the analysis, the trials were not specifically designed to have adequate statistical power to demonstrate the difference between patients who did or did not achieve DORIS remission or LLDAS. The analysis was limited to patients who met the eligibility criteria, and the outcomes are not generalizable to populations outside a clinical trial setting. The study population had high disease activity, which made it challenging to attain the treatment targets.

DISCLOSURES:

The five trials included in this analysis were funded by GSK. The study was supported by the Swedish Rheumatism Association, King Gustaf V’s 80-year Foundation, the Swedish Society of Medicine, Nyckelfonden, Professor Nanna Svartz Foundation, Ulla and Roland Gustafsson Foundation, Region Stockholm, and Karolinska Institutet. Some authors reported receiving grants, speaker honoraria, or consulting fees from various pharmaceutical companies. Some authors reported being employees and owning stocks and shares of GSK.
 

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Does Tailored Acupuncture Relieve Chronic Neck Pain?

Article Type
Changed
Tue, 09/17/2024 - 19:37

 

TOPLINE:

Patients with chronic neck pain who received acupuncture experienced greater symptom relief than those who received sham treatment, but the difference was not clinically meaningful.

METHODOLOGY:

  • A 24-week randomized trial was conducted at four clinical centers in China over a 2-year period starting in 2018.
  • A total of 659 patients with chronic neck pain were randomly assigned to one of four groups: higher sensitive acupoints (mean age, 38.63 years; 70.41% women; n = 169), lower sensitive acupoints (mean age, 40.21 years; 74.4% women; n = 168), sham acupuncture (mean age, 40.16 years; 75.29% women; n = 170), or a waiting list (mean age, 38.63 years; 69.89% women; n = 176).
  • Participants in the acupuncture groups had 10 sessions over 4 weeks and were followed up for 20 weeks. Those in the waiting list group received no treatment.
  • The primary outcome was the change in neck pain at 4 weeks, measured on a 0-100 scale. A change of 10 points was considered clinically significant.
  • The secondary outcomes were neck pain and movement, quality of life, and use of pain medication over 24 weeks.

TAKEAWAY:

  • Acupuncture targeted at higher sensitive points led to a pain score reduction of 12.16 (95% CI, −14.45 to −9.87), while lower sensitive points reduced it by 10.19 (95% CI, −12.43 to −7.95).
  • Sham acupuncture reduced the score by 6.11 (95% CI, −8.31 to −3.91), and no treatment reduced it by 2.24 (95% CI, −4.10 to −0.38).
  • The higher and lower sensitive acupoint groups showed no clinically significant net differences in pain reduction or secondary outcomes compared with the sham and waiting list groups (illustrative arithmetic against the 10-point threshold follows this list).
  • Between-group differences in pain reduction all decreased by week 24.
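
To make the clinical-significance judgment concrete, the point estimates above can be set against the 10-point threshold defined in the methodology. The sketch below performs only that naive subtraction of reported point estimates, for illustration; it is not the study's adjusted between-group analysis:

    # Naive differences of the reported point estimates; not the study's
    # adjusted between-group comparisons.
    MCID = 10  # minimal clinically important change on the 0-100 pain scale

    reductions = {
        "higher sensitive acupoints": 12.16,
        "lower sensitive acupoints": 10.19,
        "sham acupuncture": 6.11,
        "waiting list": 2.24,
    }

    for group in ("higher sensitive acupoints", "lower sensitive acupoints"):
        for comparator in ("sham acupuncture", "waiting list"):
            net = reductions[group] - reductions[comparator]
            verdict = "meets" if net >= MCID else "falls short of"
            print(f"{group} vs {comparator}: net reduction {net:.2f} points "
                  f"({verdict} the {MCID}-point threshold)")

On this crude comparison, every net difference falls below 10 points, consistent with the conclusion that the improvement over sham or no treatment was not clinically meaningful.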

IN PRACTICE:

“The clinical importance of this improvement is unclear. Our results suggest that the selection of pressure pain, sensory-based objective acupoints could be considered as a treatment of CNP [chronic neck pain],” the authors wrote.

SOURCE:

This study, led by Ling Zhao, PhD, of Acupuncture and Tuina School at Chengdu University of Traditional Chinese Medicine in Chengdu, China, was published online in the Annals of Internal Medicine.

LIMITATIONS:

Blinding was not done in the waiting list group. Individuals in the higher and lower sensitive acupoint groups experienced a specific sensation after needle manipulation, which could have influenced the analysis. Additionally, the participants were middle-aged adults with moderate pain, which limited the generalizability to older individuals or those with severe pain.

DISCLOSURES:

The study was supported by grants from the National Natural Science Foundation of China and the Central Guidance on Local Science and Technology Development Fund of Sichuan Province, among others. The authors declared no conflicts of interest outside the submitted work.

This article was created using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article first appeared on Medscape.com.
