No added benefit of time-restricted eating in NAFLD

Adding time-restricted eating to a calorie-restricted diet did not provide added benefits in adults with obesity and nonalcoholic fatty liver disease (NAFLD), according to the results of a randomized controlled trial conducted in China.

With the same calorie restrictions, an 8-hour time-restricted eating (TRE) pattern was no more effective in lowering intrahepatic triglyceride content or achieving resolution of NAFLD than daily calorie restriction (DCR) without time constraints (habitual meal timing).

TRE also did not provide additional benefits over DCR for reducing body fat or metabolic risk factors.

Calorie restriction appears to explain most of the beneficial effects of TRE, a finding that underscores the importance of restricting calorie intake in any TRE regimen for adults with obesity and NAFLD, say the investigators, led by Xueyun Wei, MD, of Southern Medical University, Guangzhou, China.

The study “supports some other recent data that kind of disproves that intermittent fasting actually works that well and that it basically comes down to calorie restriction,” said Lisa Ganjhu, DO, who wasn’t involved in the research.

“It doesn’t matter when you are calorie restricting; it’s just that you are restricting calories to a certain amount. We know that works,” Dr. Ganjhu, a clinical associate professor in the division of gastroenterology and hepatology at NYU Grossman School of Medicine, told this news organization.

Results of the TREATY-FLD study were published online in JAMA Network Open.

Calorie reduction is key

NAFLD has become a major worldwide public health challenge, affecting roughly 20%-30% of adults in the general population and more than 70% of adults with obesity and diabetes.

Weight loss through lifestyle modifications has been shown to improve liver fat and metabolic disorders. TRE, a type of intermittent fasting, has garnered attention as a potential alternative to DCR for weight loss. “However, most of the reported benefits of TRE are either ‘untested or under tested’ and can’t isolate the effects of TRE itself,” Dr. Wei and colleagues note.

In the TREATY-FLD study, 88 adults (mean age, 32 years; 56% male) with obesity and NAFLD were randomly allocated to a TRE or DCR group; baseline characteristics were similar in the two groups.

All participants were instructed to maintain a diet of 1,500-1,800 kcal per day for men and 1,200-1,500 kcal per day for women for 12 months. The diets consisted of 40%-55% carbohydrate, 15%-20% protein, and 20%-30% fat. Participants were also given one protein shake per day for the first 6 months and received dietary counseling throughout the study.
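
As a back-of-the-envelope check, the prescribed percentage ranges can be converted to gram targets with the standard Atwater factors (4 kcal/g for carbohydrate and protein, 9 kcal/g for fat). The arithmetic below, applied to the 1,500 kcal/day lower bound, is illustrative only and not a calculation reported by the investigators:

```latex
% Gram targets implied by a 1,500 kcal/day prescription
% (Atwater factors: 4 kcal/g carbohydrate and protein, 9 kcal/g fat)
\begin{align*}
\text{carbohydrate (40\%--55\%):} \quad & \tfrac{0.40 \times 1500}{4} = 150\ \text{g} \;\text{to}\; \tfrac{0.55 \times 1500}{4} \approx 206\ \text{g} \\
\text{protein (15\%--20\%):} \quad & \tfrac{0.15 \times 1500}{4} \approx 56\ \text{g} \;\text{to}\; \tfrac{0.20 \times 1500}{4} = 75\ \text{g} \\
\text{fat (20\%--30\%):} \quad & \tfrac{0.20 \times 1500}{9} \approx 33\ \text{g} \;\text{to}\; \tfrac{0.30 \times 1500}{9} = 50\ \text{g}
\end{align*}
```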

Participants in the TRE group were told to eat only between 8 AM and 4 PM each day. Only noncaloric beverages were permitted outside of the daily eating window. Participants in the DCR group had no restrictions on when they could eat.

Investigators found no significant between-group differences in the change in MRI-measured intrahepatic triglyceride (IHTG) content, the primary outcome, from baseline to 6 or 12 months.

At 6 months, IHTG content was reduced by 8.3 percentage points in the TRE group and by 8.1 percentage points in the DCR group. At 12 months, IHTG content was reduced by 6.9 and 7.9 percentage points, respectively. The net change in IHTG content was not significantly different between the groups at 6 months (percentage point difference, −0.2; P = .86) or 12 months (percentage point difference, 1.0; P = .45).
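
The reported net change is simply the TRE group’s within-group reduction minus the DCR group’s; as a worked check against the values above:

```latex
% Net (between-group) change in IHTG content, TRE minus DCR
\begin{align*}
\text{6 months:} \quad & (-8.3) - (-8.1) = -0.2\ \text{percentage points} \\
\text{12 months:} \quad & (-6.9) - (-7.9) = 1.0\ \text{percentage points}
\end{align*}
```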

Liver stiffness was reduced by 2.1 kPa in the TRE group and 1.7 kPa in the DCR group at 12 months, with no significant difference between the groups (P = .33). A statistically similar percentage of participants in the TRE and DCR groups had resolution of NAFLD (defined as IHTG content less than 5%) at 12 months (33% vs. 49%; P = .31).

During the 12-month intervention, body weight was significantly reduced by 8.4 kg in the TRE group and 7.8 kg in the DCR group, with no significant between-group differences (P = .69).

In addition, waist circumference, body fat percentage, fat mass, lean mass, total abdominal fat, subcutaneous fat, visceral fat, and visceral to subcutaneous fat ratio were all significantly and comparably reduced in the two groups.

Both groups also saw significant and comparable improvement over 12 months in metabolic risk factors, including systolic and diastolic blood pressure, pulse rate, and total cholesterol, triglyceride, high-density lipoprotein cholesterol, and low-density lipoprotein cholesterol levels.

However, TRE might be more effective than DCR at improving insulin sensitivity. Both diets significantly reduced fasting plasma glucose level, hemoglobin A1c, and homeostasis model assessment of insulin resistance (HOMA-IR) at 6 months, and at 12 months the reduction in HOMA-IR was significantly greater with TRE than with DCR.
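
For reference, HOMA-IR is calculated from fasting laboratory values using the standard formula below; lower values indicate greater insulin sensitivity:

```latex
% Standard HOMA-IR formula (fasting values)
\mathrm{HOMA\text{-}IR} = \frac{\text{fasting insulin}\ (\mu\mathrm{U/mL}) \times \text{fasting glucose}\ (\mathrm{mmol/L})}{22.5}
```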

Both diets significantly reduced levels of liver enzymes, including serum alanine aminotransferase, aspartate aminotransferase, and gamma-glutamyl transferase, with no significant between-group differences.

Eat less, exercise more

Although the study found no additional benefit from TRE, it’s still good advice to skip snacking in the evening, Dr. Ganjhu said in an interview. “No one snacks on anything healthy at night. I mean, who’s chewing on celery?” she added.

Eating late at night can trigger reflux, so “not eating anything for several hours before bed or better yet going for a walk after dinner to kickstart your metabolism is good advice,” Dr. Ganjhu said.

For obesity and fatty liver disease, it really comes down to diet and exercise, she noted.

“For all the money that is going into pharmaceuticals, the long and the short of it is you just have to eat less and work out more and manage all the other factors like diabetes, high blood pressure, and metabolic syndrome. But getting people to follow that is tough,” Dr. Ganjhu said.

The study was supported by grants from the National Key Research and Development Project, Joint Funds of the National Natural Science Foundation of China, and Key-Area Clinical Research Program of Southern Medical University. Dr. Wei and Dr. Ganjhu report no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

High EHR burden for some GI providers

Working with electronic health records may be particularly burdensome for physicians who specialize in inflammatory bowel disease (IBD) or hepatology, as well as for nonphysician providers (NPPs), according to a new study.

IBD and hepatology specialists spend more time per appointment using the EHR, including for clinical review and outside of regular hours, compared with other subspecialists, the study finds. Additionally, NPPs spend more time in EHRs than physicians.

“Given the often-complicated medical histories of patients with IBD as well as the role of IBD specialists as de facto primary care providers for many patients, these findings are not surprising,” wrote authors Aman Bali, MD, and colleagues from Mayo Clinic, Jacksonville, Fla.

The finding that hepatology specialists show similar overall EHR burden as IBD specialists suggests that management of chronic disease in these patient populations may be contributing to this increased workload, they added.

The study was published online in the American Journal of Gastroenterology.

EHR burden dissected

Widespread adoption of EHRs has been shown to have significant benefits, but it’s also been identified as a “key driver” of physician burnout. However, the EHR burden specific to GI providers had not been well explored.

To investigate, Dr. Bali and colleagues analyzed data collected through an analytics tool in their EHR system for 41 outpatient GI providers. The data covered 2,803 clinic days and 16,407 appointments over the 6-month period of January to June 2021.

They compared metrics across provider gender; the subspecialties of IBD, hepatology (HEP), esophagus (ESO), advanced endoscopy (AE), and motility/irritable bowel syndrome (IBS); and training (physicians vs. NPPs).
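
At a high level, these comparisons amount to grouping appointment-level EHR-time metrics by subspecialty and by provider training. The sketch below shows that kind of aggregation with pandas; the column names and numbers are hypothetical, since the authors’ analytics export is not described in detail:

```python
import pandas as pd

# Hypothetical appointment-level export from an EHR analytics tool.
# Column names and values are illustrative, not the study's actual data.
appts = pd.DataFrame({
    "subspecialty":  ["IBD", "HEP", "ESO", "AE", "IBD", "HEP"],
    "provider_type": ["MD",  "MD",  "MD",  "MD", "NPP", "NPP"],
    "ehr_minutes":   [38.3,  34.6,  10.0,  11.2, 36.2,  35.0],
})

# Mean EHR time per appointment by subspecialty and by training,
# mirroring the comparisons the investigators describe.
print(appts.groupby("subspecialty")["ehr_minutes"].mean())
print(appts.groupby("provider_type")["ehr_minutes"].mean())
```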

Overall, 76% of providers were physicians and 24% were NPPs; 44% were women, including all the NPPs.

Overall, female and male gastroenterology providers spent a similar amount of time in the EHR per appointment (22.2 minutes and 19.4 minutes, respectively). Among the other key findings:

  • IBD and hepatology specialists spent more time in the EHR per appointment than other subspecialists (38.3 minutes and 34.6 minutes for IBD and HEP, respectively, vs. 10, 11.2, and 19 minutes for ESO, AE, and motility/IBS, respectively), including more time in clinical review. They also spent more time using the EHR outside of regularly scheduled hours per appointment.
  • IBD specialists also received more messages requesting medical advice per appointment than other providers (2.4 per appointment for IBD vs. fewer than 1-1.2 per appointment for the other specialties).
  • Junior faculty spent more time outside of scheduled hours, including “pajama time” (5:30 p.m.–7:00 a.m. and weekends), than senior faculty (21.1 minutes vs. 14.8 minutes per appointment) and had a lower percentage of visits closed the same day (20.3% vs. 57.1%), though comparison was limited by small sample size.
  • NPPs spent more overall time in the EHR per appointment than physicians (36.2 minutes vs. 20.1 minutes), owing in part to increased time in clinical review per appointment (10.2 minutes vs. 6.6 minutes).
  • NPPs received a similar number of medical advice request messages per appointment as physicians (1.4 vs. 1) but spent more time per completed message (70.9 seconds vs. 43.3 seconds).

More research needed

The findings align with a recent study that found “similar evidence of high EHR burden” for gastroenterology providers, Dr. Bali and colleagues wrote. In that study, for each hour of scheduled time, providers spent an additional 45-50 minutes on EHR-related tasks, though no statistically significant differences were identified when comparing NPPs to physicians.

Dr. Bali and colleagues said the increased EHR burden of NPPs in their study may be explained by their institution’s practice model, in which NPPs spend a significant portion of time seeing patients for follow-up visits. This allows physicians time for other tasks, such as procedures and research.

The study did not assess provider burnout and was limited to the metrics provided by the EHR system, the researchers noted. Because the findings come from a single tertiary care center using one EHR system, they may not generalize to practice settings that use different EHRs.

More data across various practice settings encompassing more providers are needed to understand the true landscape of EHR burden in gastroenterology. That knowledge will be essential to create strategies to address the problem, the researchers wrote.

The study had no external funding. The authors disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

SBRT: Alternative to surgery in early stage lung cancer?

Stereotactic body radiation therapy (SBRT) and surgery offer nearly equal overall survival rates for patients with stage I and II non–small cell lung cancer (NSCLC), according to population-based data from a German cancer registry.

“From a public health perspective, SBRT is a good therapeutic option in terms of survival, especially for elderly and inoperable patients,” noted the study authors, led by Jörg Andreas Müller, MD, department of radiation oncology, University Hospital of Halle, Germany.

The analysis was published online in the journal Strahlentherapie Und Onkologie.

Surgery remains the standard of care for early stage NSCLC. However, many patients are not eligible for surgery because of tumor location, advanced age, frailty, or comorbidities.

Before the introduction of SBRT, conventional radiation therapy was the only reasonable option for inoperable patients, with study data showing only a small survival improvement in treated vs. untreated patients.

High-precision, image-guided SBRT offers better tumor control with limited toxicity. And while many radiation oncology centers in Germany adopted SBRT as an alternative to surgery after 2000, few population-based studies have evaluated its impact on overall survival.

Using the German clinical cancer registry of Berlin-Brandenburg, Dr. Müller and colleagues assessed SBRT as an alternative to surgery in 558 patients with stage I and II NSCLC, diagnosed between 2000 and 2015.

More patients received surgery than SBRT (74% vs. 26%). Those who received SBRT were younger than those in the surgery group and had better Karnofsky performance status.

Among patients in the SBRT group, median survival was 19 months overall and 27 months in patients over age 75. In the surgery group, median survival was 22 months overall and 24 months in those over 75.

In a univariate survival model of a propensity-matched sample of 292 patients – half of whom received SBRT – survival rates were similar among those who underwent SBRT versus surgery (hazard ratio [HR], 1.2; P = .2).
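
A univariate survival model with treatment arm as the only covariate is typically a Cox proportional hazards fit. The sketch below uses the lifelines library on synthetic data whose medians merely echo the reported 19- and 22-month figures; it is not the registry analysis. An exponentiated coefficient (hazard ratio) near 1 with a nonsignificant P value indicates similar hazards in the two arms:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 292  # size of the propensity-matched sample in the study

# Synthetic matched sample: half SBRT, half surgery. Exponential times
# with medians ~19 and ~22 months -- illustrative only, not registry data.
df = pd.DataFrame({
    "sbrt": np.repeat([1, 0], n // 2),
    "months": np.concatenate([
        rng.exponential(scale=19 / np.log(2), size=n // 2),
        rng.exponential(scale=22 / np.log(2), size=n // 2),
    ]),
    "event": 1,  # assume every death was observed, for simplicity
})

# Univariate Cox model: treatment arm as the only covariate.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratio and P value
```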

Survival was also similar in the two treatment groups in a T1 subanalysis (HR, 1.12; P = .7) as well as in patients over age 75 (HR, 0.86; P = .5).

Better performance status scores were associated with improved survival, and higher histological grades and TNM stages were linked to higher mortality risk. The availability of histological data did not have a significant impact on survival outcomes.

Overall, the findings suggest that SBRT and surgery offer comparable survival outcomes in early stage NSCLC and “the availability of histological data might not be decisive for treatment planning,” Dr. Müller and colleagues said. 

Drew Moghanaki, MD, chief of the thoracic oncology service at UCLA Health Jonsson Comprehensive Cancer Center, Los Angeles, highlighted the findings on Twitter.

A thoracic surgeon from Germany responded with several concerns about the study, including the use of statistics with univariate modeling and undiagnosed lymph node (N) status.

Dr. Moghanaki replied that these “concerns summarize how we USED to think. It increasingly seems they aren’t as important as our teachers once thought they were. As we move into the future we need to reassess the data that supported these recommendations as they seem more academic than patient centered.”

The study authors reported no specific funding and no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Mindfulness-based CBT an ‘important’ option for moderate depression

A mindfulness-based cognitive therapy self-help (MBCT-SH) intervention in which patients were supported by a trained practitioner led to better clinical outcomes at lower cost than practitioner-supported cognitive-behavioral therapy self-help (CBT-SH), new research shows.

The findings suggest that “offering practitioner-supported MBCT-SH as an intervention for mild to moderate depression would improve outcomes and save money compared with practitioner-supported CBT-SH,” noted the investigators, led by Clara Strauss, PhD, DClinPsy, with the University of Sussex School of Psychology in England.

Practitioner-supported CBT-SH is recommended in U.K. national treatment guidelines for mild to moderate depression. However, some patients’ conditions don’t respond, and dropout rates are high.

The Low-Intensity Guided Help Through Mindfulness (LIGHTMind) trial tested practitioner-supported MBCT-SH as an alternative.

The findings have “important implications” for the more than 100,000 people currently offered CBT-SH for depression in the Improving Access to Psychological Therapies (IAPT) program each year and in publicly funded services elsewhere, the researchers noted.

If translated into routine practice, “this would see many more people recovering from depression while costing health services less money,” they added.

The study was published online in JAMA Psychiatry.

Practice changing?

The trial included 410 adults (mean age, 32 years; 62% women) with mild to moderate depression who were recruited from 10 publicly funded psychological therapy services in England as part of the IAPT program.

Participants were given one of two established self-help workbooks – The Mindful Way Workbook: An 8-Week Program to Free Yourself from Depression and Emotional Distress, written by the pioneers of MBCT, or Overcoming Depression and Low Mood, 3rd Edition: A Five Areas Approach, which is a CBT-SH program widely used in IAPT.

Use of the self-help books was supported by six structured phone or in-person sessions with a trained psychological well-being practitioner.

The primary outcome was depression symptom severity at 16 weeks, which was determined on the basis of Patient Health Questionnaire 9 (PHQ-9) score.

At 16 weeks following randomization, MBCT-SH led to significantly greater reductions in depression symptom severity compared with CBT-SH (mean PHQ-9 score, 7.2 vs. 8.6; between-group difference, 1.5 points; P = .009; d = −0.36).
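
For orientation, the effect size d is the between-group difference divided by a pooled standard deviation, so the reported figures imply a pooled SD of roughly 4 PHQ-9 points (the negative sign only reflects which group is subtracted from which):

```latex
% Pooled SD implied by the reported adjusted difference and effect size
d = \frac{\Delta}{SD_{\text{pooled}}}
\quad\Longrightarrow\quad
SD_{\text{pooled}} = \frac{\Delta}{|d|} = \frac{1.5}{0.36} \approx 4.2\ \text{PHQ-9 points}
```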

MBCT-SH also had superior effects on anxiety symptom severity at 16 weeks.

At the 42-week follow-up, between-group effects on depression and anxiety symptom severity remained in the hypothesized direction but were nonsignificant.

This could be due in part to the greater amount of postintervention psychological therapy accessed by participants in the CBT-SH group, the investigators noted.

Practitioner-supported MBCT-SH was more cost-effective than supported CBT-SH.

On average, the CBT-SH intervention cost health services £526 ($631) more per participant than the MBCT-SH intervention over the 42-week follow-up. The probability of MBCT-SH being cost-effective compared with CBT-SH exceeded 95%, the researchers noted.
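
A probability of cost-effectiveness is conventionally estimated by bootstrapping the incremental net monetary benefit at a chosen willingness-to-pay threshold and reporting the share of replicates in which it is positive. The sketch below illustrates that generic approach on synthetic data; the sample size, costs, effect measure, and threshold are all assumptions, not the trial’s actual analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical participants per arm

# Synthetic per-participant costs (GBP) and effects -- illustrative only.
cost_mbct = rng.normal(1000, 300, n)
cost_cbt  = rng.normal(1526, 300, n)   # ~GBP 526 more on average, as reported
eff_mbct  = rng.normal(0.60, 0.20, n)  # generic effectiveness measure
eff_cbt   = rng.normal(0.55, 0.20, n)

wtp = 20_000  # assumed willingness to pay per unit of effect

# Bootstrap the incremental net monetary benefit (arms resampled
# independently); P(cost-effective) = share of replicates with NMB > 0.
B = 5000
nmb = np.empty(B)
for b in range(B):
    i = rng.integers(0, n, n)
    j = rng.integers(0, n, n)
    d_eff = eff_mbct[i].mean() - eff_cbt[j].mean()
    d_cost = cost_mbct[i].mean() - cost_cbt[j].mean()
    nmb[b] = wtp * d_eff - d_cost
print("P(cost-effective) ~", (nmb > 0).mean())
```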
 

Useful model for the United States

Commenting for this news organization, Zindel Segal, PhD, professor of psychology, University of Toronto, Scarborough, cautioned against making too much of the differences between the groups, because CBT-SH “trended positive and had a pretty healthy effect size, it just never reached significance.

“I wouldn’t say mindfulness drastically outperformed cognitive therapy. But cognitive therapy is a robust treatment in its own right, and so doing a little bit better is significant,” Dr. Segal said.

He also noted that, appropriately, the trial enrolled adults who were experiencing moderate depression and were not acutely ill. “That’s one of the rationales for self-help compared to providing patients with a more resource-intensive group treatment.

“If you look at the needs of people with moderate depression, what you find is that for cognitive therapy to work, negative thoughts and feelings need to be pervasive in order to make use of the techniques,” Dr. Segal explained.

“With mindfulness, you don’t need to have constant negative thoughts or feelings. Anything that arises in your experience serves as grist for the mill in terms of concentration and focus,” Dr. Segal said.

He also noted that mindfulness-based intervention is “more optimized” for people who are experiencing some measure of recovery or remission.

“It’s well suited for that, as it trends towards the wellness spectrum. But for people who might have greater levels of acuity or severity, cognitive-behavioral therapy might be indicated,” said Dr. Segal.

He also said the U.K. study findings are relevant to U.S. patients with depression.

“While it’s not disseminated in the same way through any kind of national program, the self-help books that are used are widely available, and the support that people were offered, either in person, telephone, or email, could be easily delivered. This would be a very useful model,” Dr. Segal said.

The LIGHTMind trial was funded by the National Institute for Health and Care Research and the Brighton and Sussex Clinical Trials Unit. Dr. Strauss has received grants from Headspace, is research lead for the Sussex Mindfulness Centre, and has been a chief investigator on National Institute for Health and Care Research grants. Dr. Segal is one of the authors of the MBCT-SH workbooks used in the study.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

A mindfulness-based cognitive therapy self-help (MBCT-SH) intervention in which patients were supported by a trained practitioner led to better clinical outcomes at lower cost than practitioner-supported cognitive-behavioral therapy self-help (CBT-SH), new research shows.

The findings suggest that “offering practitioner-supported MBCT-SH as an intervention for mild to moderate depression would improve outcomes and save money compared with practitioner-supported CBT-SH,” noted the investigators, led by Clara Strauss, PhD, DClinPsy, with the University of Sussex School of Psychology in England.

Practitioner-supported CBT-SH is recommended in U.K. national treatment guidelines for mild to moderate depression. However, some patients’ conditions don’t respond, and dropout rates are high.

The Low-Intensity Guided Help Through Mindfulness (LIGHTMind) trial tested practitioner-supported MBCT-SH as an alternative.

The findings have “important implications” for the more than 100,000 people currently offered CBT-SH for depression in the Improving Access to Psychological Therapies (IAPT) program each year and in publicly funded services elsewhere, the researchers noted.

If translated into routine practice, “this would see many more people recovering from depression while costing health services less money,” they added.

The study was published online in JAMA Psychiatry .
 

Practice changing?

The trial included 410 adults (mean age, 32 years; 62% women) with mild to moderate depression who were recruited from 10 publicly funded psychological therapy services in England as part of the IAPT program.

Participants were given one of two established self-help workbooks – The Mindful Way Workbook: An 8-Week Program to Free Yourself from Depression and Emotional Distress, written by the pioneers of MBCT, or Overcoming Depression and Low Mood, 3rd Edition: A Five Areas Approach, which is a CBT-SH program widely used in IAPT.

Use of the self-help books was supported by six structured phone or in-person sessions with a trained psychological well-being practitioner.

The primary outcome was depression symptom severity at 16 weeks, which was determined on the basis of Patient Health Questionnaire 9 (PHQ-9) score.

At 16 weeks following randomization, MBCT-SH led to significantly greater reductions in depression symptom severity compared with CBT-SH (mean PHQ-9 score, 7.2 vs. 8.6; between-group difference, 1.5 points; P = .009; d = −0.36).

MBCT-SH also had superior effects on anxiety symptom severity at 16 weeks.

At the 42-week follow-up, between-group effects on depression and anxiety symptom severity remained in the hypothesized direction but were nonsignificant.

This could be due in part by the greater postintervention psychological therapy accessed by participants in the CBT-SH group, the investigators noted.

Practitioner-supported MBCT-SH was more cost-effective than supported CBT-SH.

On average, the CBT-SH intervention cost health services £526 ($631) more per participant than the MBCT-SH intervention over the 42-week follow-up. The probability of MBCT-SH being cost-effective compared with CBT-SH exceeded 95%, the researchers noted.
 

Useful model for the United States

Commenting for this news organization, Zindel Segal, PhD, professor of psychology, University of Toronto, Scarborough, cautioned against making too much of the differences between the groups, because CBT-SH “trended positive and had a pretty healthy effect size, it just never reached significance.

“I wouldn’t say mindfulness drastically outperformed cognitive therapy. But cognitive therapy is a robust treatment in its own right, and so doing a little bit better is significant,” Dr. Segal said.

He also noted that, appropriately, the trial enrolled adults who were experiencing moderate depression and were not acutely ill. “That’s one of the rationales for self-help compared to providing patients with a more resource-intensive group treatment.

“If you look at the needs of people with moderate depression, what you find is that for cognitive therapy to work, negative thoughts and feelings need to be pervasive in order to make use of the techniques,” Dr. Segal explained.

“With mindfulness, you don’t need any to have constant negative thoughts or feelings. Anything that arises in your experience serves as grist for mill in terms of concentration and focus,” Dr. Segal said.

He also noted that mindfulness-based intervention is “more optimized” for people who are experiencing some measure of recovery or remission.

“It’s well suited for that, as it trends towards the wellness spectrum. But for people who might have greater levels of acuity or severity, cognitive-behavioral therapy might be indicated,” said Dr. Segal.

He also said the U.K. study findings are relevant to U.S. patients with depression.

“While it’s not disseminated in the same way through any kind of national program, the self-help books that are used are widely available, and the support that people were offered, either in person, telephone, or email, could be easily delivered. This would be a very useful model,” Dr. Segal said.

The LIGHTMind trial was funded by the National Institute for Health and Care Research and the Brighton and Sussex Clinical Trials Unit. Dr. Strauss has received grants from Headspace, is research lead for Sussex Mindfulness Centre, and has been chief investigator on National Institute for Health and Care Research. Dr. Segal is one of the authors of the MBCT-SH workbooks used in the study.
 

A version of this article first appeared on Medscape.com.

 



New AHA statement on pediatric primary hypertension issued

Article Type
Changed
Tue, 04/04/2023 - 13:57

 

Amplified by the childhood obesity epidemic, primary hypertension is now the leading type of pediatric hypertension, especially in adolescence, yet the condition is “underrecognized,” the American Heart Association said in a new scientific statement.

“Children can have secondary hypertension that is caused by an underlying condition such as chronic kidney disease, endocrine disorders, cardiac anomalies, and some syndromes. However, primary hypertension is now recognized as the most common type of hypertension in childhood,” Bonita Falkner, MD, chair of the writing group and emeritus professor of medicine and pediatrics, Thomas Jefferson University, Philadelphia, said in an interview.

And hypertensive children are “highly likely” to become hypertensive adults and to have measurable target organ injury, particularly left ventricular hypertrophy and vascular stiffening, the writing group noted. 

The AHA statement on primary pediatric hypertension was published online in Hypertension.

Primary or essential hypertension occurs in up to 5% of children and adolescents in the United States and other countries.

The American Academy of Pediatrics (AAP), the European Society of Hypertension, and Hypertension Canada all define hypertension as repeated BP readings at or above the 95th percentile for children, but the thresholds differ by age.

The AAP adopts 130/80 mm Hg starting at age 13 years; the European Society of Hypertension adopts 140/90 mm Hg starting at age 16 years; and Hypertension Canada adopts 120/80 mm Hg for those aged 6-11 years and 130/85 mm Hg for those aged 12-17 years.
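As a rough illustration of how these static cutoffs diverge, the hedged sketch below encodes only the thresholds quoted above. It is not a clinical tool: the guideline names and age bands are taken from the preceding paragraph, and the percentile-based definitions that apply to younger children are deliberately omitted.

```python
def static_htn_threshold(age_years: int, guideline: str):
    """Return the static hypertension cutoff (systolic, diastolic, mm Hg)
    quoted above, or None where the percentile-based definition applies
    instead (percentile tables are not reproduced here)."""
    if guideline == "AAP" and age_years >= 13:
        return (130, 80)
    if guideline == "ESH" and age_years >= 16:
        return (140, 90)
    if guideline == "HypertensionCanada":
        if 6 <= age_years <= 11:
            return (120, 80)
        if 12 <= age_years <= 17:
            return (130, 85)
    return None  # fall back to the 95th-percentile-for-age definition

print(static_htn_threshold(14, "AAP"))  # -> (130, 80)
print(static_htn_threshold(14, "ESH"))  # -> None (ESH cutoff starts at 16)
```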

The writing group advised that an optimal goal is for adolescents to enter adulthood with a BP below 120/80 mm Hg.

They recommend that health care professionals be trained in evidence-based techniques for obtaining accurate and reliable BP values with either auscultatory or oscillometric devices.

When the initial BP measurement is abnormal, repeat measurement by auscultation is recommended – within the same visit if possible, then within weeks if the screening BP is in the hypertensive range or within months if it is elevated.

Because BP levels are variable, even within a single visit, “best practice” is to obtain up to three BP measurements and to record the average of the latter two measurements unless the first measurement is normal, the writing group said. Further confirmation of diagnosis of hypertension can be obtained with 24-hour ambulatory BP monitoring (ABPM).
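That rule translates directly into a small procedure. The sketch below is a hypothetical implementation of the “up to three measurements” practice described above; whether the first reading counts as normal is left to the caller, since that judgment depends on the age-specific thresholds discussed earlier.

```python
def record_visit_bp(readings, first_is_normal):
    """Apply the best-practice rule described above: if the first reading
    is normal, record it; otherwise take three readings and record the
    average of the latter two."""
    if first_is_normal:
        return readings[0]
    assert len(readings) == 3, "rule assumes three readings were obtained"
    systolic = (readings[1][0] + readings[2][0]) / 2
    diastolic = (readings[1][1] + readings[2][1]) / 2
    return (systolic, diastolic)

# Example: first reading is high, so readings 2 and 3 are averaged.
print(record_visit_bp([(134, 86), (128, 82), (126, 80)], first_is_normal=False))
# -> (127.0, 81.0)
```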

“Primary hypertension in youth is difficult to recognize in asymptomatic, otherwise healthy youth. There is now evidence that children and adolescents with primary hypertension may also have cardiac and vascular injury due to the hypertension,” Dr. Falkner told this news organization.

“If not identified and treated, the condition can progress to hypertension in young adulthood with heightened risk of premature cardiovascular events,” Dr. Falkner said.

The writing group said “primordial prevention” is an important public health goal because a population with lower BP will have fewer comorbidities related to hypertension and CVD.

Modifiable risk factors for primary hypertension in childhood include obesity, physical inactivity and poor diet/nutrition, disturbed sleep patterns, and environmental stress.

A healthy lifestyle in childhood – including eating healthy food, encouraging physical activity that leads to improved physical fitness and healthy sleep, and avoiding the development of obesity – may help mitigate the risk of hypertension in childhood, the writing group noted.  

Looking ahead, they said that efforts to improve the recognition and diagnosis of high BP in children, clinical trials to evaluate medical treatments, and public health initiatives are all vital to combat rising rates of primary hypertension in children.

This scientific statement was prepared by the volunteer writing group on behalf of the American Heart Association’s Council on Hypertension, the Council on Lifelong Congenital Heart Disease and Heart Health in the Young, the Council on Kidney in Cardiovascular Disease, the Council on Lifestyle and Cardiometabolic Health, and the Council on Cardiovascular and Stroke Nursing.
 

A version of this article first appeared on Medscape.com.


Cancer risk elevated after stroke in younger people

Article Type
Changed
Tue, 04/04/2023 - 17:43

 

Younger people who experience stroke or intracerebral hemorrhage have about a three- to fivefold increased risk of being diagnosed with cancer in the next few years, new research shows.

In young people, stroke might be the first manifestation of an underlying cancer, according to the investigators, led by Jamie Verhoeven, MD, PhD, with the department of neurology, Radboud University Medical Centre, Nijmegen, the Netherlands.

The new study can be viewed as a “stepping stone for future studies investigating the usefulness of screening for cancer after stroke,” the researchers say.

The study was published online in JAMA Network Open.

Currently, the diagnostic workup for young people with stroke includes searching for rare clotting disorders, although screening for cancer is not regularly performed.

Some research suggests that stroke and cancer are linked, but the literature is limited. In prior studies among people of all ages, cancer incidence after stroke has been variable – from 1% to 5% at 1 year and from 11% to 30% after 10 years.

To the team’s knowledge, only two studies have described the incidence of cancer after stroke among younger patients. One put the risk at 0.5% for people aged 18-50 years in the first year after stroke; the other described a cumulative risk of 17.3% in the 10 years after stroke for patients aged 18-55 years.

Using Dutch data, Dr. Verhoeven and colleagues identified 27,616 young stroke patients (age, 15-49 years; median age, 45 years) and 362,782 older stroke patients (median age, 76 years).

The cumulative incidence of any new cancer at 10 years was 3.7% among the younger stroke patients and 8.5% among the older stroke patients.

The incidence of a new cancer after stroke among younger patients was higher among women than men, while the opposite was true for older stroke patients.

Compared with the general population, younger stroke patients had a more than 2.5-fold greater likelihood of being diagnosed with a new cancer in the first year after ischemic stroke (standardized incidence ratio, 2.6). The risk was highest for lung cancer (SIR, 6.9), followed by hematologic cancers (SIR, 5.2).

Compared with the general population, younger stroke patients had nearly a 5.5-fold greater likelihood of being diagnosed with a new cancer in the first year after intracerebral hemorrhage (SIR, 5.4), and the risk was highest for hematologic cancers (SIR, 14.2).
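A standardized incidence ratio is simply the observed number of cancers divided by the number expected if general-population rates applied to the cohort’s person-years. The sketch below works through that arithmetic with made-up counts – not the study’s data – and adds an exact Poisson confidence interval.

```python
from scipy import stats

# Illustrative counts only (not from the study): cancers observed in a
# stroke cohort versus the number expected from general-population rates
# applied to the cohort's person-years of follow-up.
observed = 26
expected = 10.0

sir = observed / expected  # standardized incidence ratio

# Exact (Garwood) 95% CI for a Poisson count, scaled by the expected count.
lower = stats.chi2.ppf(0.025, 2 * observed) / 2 / expected
upper = stats.chi2.ppf(0.975, 2 * (observed + 1)) / 2 / expected
print(f"SIR = {sir:.1f} (95% CI, {lower:.1f}-{upper:.1f})")
```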

In younger patients, the cumulative incidence of any cancer decreased over the years but remained significantly higher for 8 years following a stroke.

For patients aged 50 years or older, the 1-year risk for any new cancer after either ischemic stroke or intracerebral hemorrhage was 1.2 times that of the general population.

“We typically think of occult cancer as being a cause of stroke in an older population, given that the incidence of cancer increases over time [but] what this study shows is that we probably do need to consider occult cancer as an underlying cause of stroke even in a younger population,” said Laura Gioia, MD, stroke neurologist at the University of Montreal, who was not involved in the research.

Dr. Verhoeven and colleagues conclude that their finding supports the hypothesis of a causal link between cancer and stroke. Given the timing between stroke and cancer diagnosis, cancer may have been present when the stroke occurred and possibly played a role in causing it, the authors note. However, conclusions on causal mechanisms cannot be drawn from the current study.

The question of whether young stroke patients should be screened for cancer is a tough one, Dr. Gioia noted. “Cancer represents a small percentage of causes of stroke. That means you would have to screen a lot of people with a benefit that is still uncertain for the moment,” Dr. Gioia said in an interview.

“I think we need to keep cancer in mind as a cause of stroke in our young patients, and that should probably guide our history-taking with the patient and consider imaging when it’s appropriate and when we think that there could be an underlying occult cancer,” Dr. Gioia suggested.

The study was funded in part through unrestricted funding by Stryker, Medtronic, and Cerenovus. Dr. Verhoeven and Dr. Gioia have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Cluster, migraine headache strongly linked to circadian rhythm

Article Type
Changed
Mon, 04/03/2023 - 14:18

 

Cluster headache and migraine have strong ties to the circadian system at multiple levels, say new findings that could have significant treatment implications.

A meta-analysis of 16 studies showed a circadian pattern in 71% of cluster headache attacks (3,490 of 4,953), with a clear circadian peak between 9:00 p.m. and 3:00 a.m.

Migraine was also associated with a circadian pattern in 50% of cases (2,698 of 5,385) across eight studies, with a clear circadian trough between 11:00 p.m. and 7:00 a.m.
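Estimating a peak or trough from attack clock-times calls for circular statistics, because 11 p.m. and 1 a.m. should average to midnight, not noon. The sketch below illustrates the idea on synthetic times; it is an assumption-laden toy, not the meta-analysis’s method, which pooled attack-time distributions reported across studies.

```python
import numpy as np

# Synthetic attack clock-times in hours (0-24); illustrative only.
hours = np.array([22.5, 23.0, 0.5, 1.2, 2.0, 21.8, 23.9, 1.5])

# Map times onto the unit circle so values straddling midnight average
# correctly, then convert the mean angle back to an hour of day.
angles = hours / 24.0 * 2.0 * np.pi
mean_angle = np.arctan2(np.sin(angles).mean(), np.cos(angles).mean())
peak_hour = (mean_angle * 24.0 / (2.0 * np.pi)) % 24.0
print(f"estimated circadian peak ~ {peak_hour:.1f} h")  # close to midnight
```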

Seasonal peaks were also evident for cluster headache (spring and autumn) and migraine (April to October).

“In the short term, these findings help us explain the timing to patients – for example, it is possible that a headache at 8 a.m. is due to their internal body clock instead of their pillow, or breakfast food, or morning medications,” lead investigator Mark Burish, MD, PhD, associate professor, department of neurosurgery, at University of Texas Health Houston, told this news organization.

“In the long term, these findings do suggest that medications that target the circadian system could be effective in migraine and headache patients,” Dr. Burish added.

The study was published online in Neurology.


 

Treatment implications?

Across studies, chronotype was “highly variable” for both cluster headache and migraine, the investigators report.

Cluster headache was associated with lower melatonin and higher cortisol levels, compared with non–cluster headache controls.

On a genetic level, cluster headache was associated with two core circadian genes (CLOCK and REV-ERB–alpha), and five of the nine genes that increase the likelihood of having cluster headache are genes with a circadian pattern of expression.

Migraine was associated with lower urinary melatonin levels and with two core circadian genes (CK1-delta and ROR-alpha); 110 of the 168 genes associated with migraine were clock-controlled genes.
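Whether 110 of 168 reflects genuine over-representation depends on how common clock-controlled genes are genome-wide. One standard check is a hypergeometric enrichment test, sketched below; the background counts are assumptions for illustration, not figures from the paper.

```python
from scipy.stats import hypergeom

N_total = 20_000  # assumed genes in the background set (illustrative)
K_clock = 8_000   # assumed clock-controlled genes genome-wide (illustrative)
n_migraine = 168  # migraine-associated genes (quoted above)
k_overlap = 110   # of those, clock-controlled (quoted above)

# P(observing >= k_overlap clock-controlled genes among n_migraine draws
# by chance alone), i.e., a one-sided enrichment p-value.
p_value = hypergeom.sf(k_overlap - 1, N_total, K_clock, n_migraine)
print(f"enrichment p-value = {p_value:.2e}")
```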

“The data suggest that both of these headache disorders are highly circadian at multiple levels, especially cluster headache,” Dr. Burish said in a release.

“This reinforces the importance of the hypothalamus – the area of the brain that houses the primary biological clock – and its role in cluster headache and migraine. It also raises the question of the genetics of triggers such as sleep changes that are known triggers for migraine and are cues for the body’s circadian rhythm,” Dr. Burish said.

“We hope that future research will look into circadian medications as a new treatment option for migraine and cluster headache patients,” Dr. Burish told this news organization.
 

Importance of sleep regulation

The authors of an accompanying editorial note that even though the study doesn’t have immediate clinical implications, it offers a better understanding of the way chronobiologic factors may influence treatment.

“At a minimum, interventions known to regulate and improve sleep (e.g., melatonin, cognitive behavioral therapy), and which are safe and straightforward to introduce, may be useful in some individuals susceptible to circadian misalignment or sleep disorders,” write Heidi Sutherland, PhD, and Lyn Griffiths, PhD, with Queensland University of Technology, Brisbane, Australia.

“Treatment of comorbidities (e.g., insomnia) that result in sleep disturbances may also help headache management. Furthermore, chronobiological aspects of any pharmacological interventions should be considered, as some frequently used headache and migraine drugs can modulate circadian cycles and influence the expression of circadian genes (e.g., verapamil), or have sleep-related side effects,” they add.

A limitation of the study was the lack of information on factors that could influence the circadian cycle, such as medications; other disorders, such as bipolar disorder; or circadian rhythm issues, such as night-shift work.

The study was supported by grants from the Japan Society for the Promotion of Science, the National Institutes of Health, The Welch Foundation, and The Will Erwin Headache Research Foundation. Dr. Burish is an unpaid member of the medical advisory board of Clusterbusters, and a site investigator for a cluster headache clinical trial funded by Lundbeck. Dr. Sutherland has received grant funding from the U.S. Migraine Research Foundation, and received institute support from Queensland University of Technology for genetics research. Dr. Griffiths has received grant funding from the Australian NHMRC, U.S. Department of Defense, and the U.S. Migraine Research Foundation, and consultancy funding from TEVA.

A version of this article first appeared on Medscape.com.


‘Startling’ cost barriers after abnormal screening mammogram

Article Type
Changed
Mon, 04/03/2023 - 14:22

 

Despite federal legislation doing away with cost-sharing for initial breast cancer screening, out-of-pocket costs for needed follow-up tests remain significant financial barriers for many women.

An analysis of claims data found that women with higher cost-sharing undergo fewer subsequent breast diagnostic tests after an abnormal screening mammogram, compared with peers with lower cost-sharing.

“The chief clinical implication is that women with abnormal mammograms – that is, potentially at risk for cancer – are deciding not to follow up on diagnostic imaging because of high out-of-pocket costs,” Danny Hughes, PhD, professor, College of Health Solutions, Arizona State University in Phoenix, told this news organization.

One course of action for radiologists is to “strongly communicate the importance of adhering to recommended follow-on testing,” Dr. Hughes said.

Another is to “work to pass national and state legislation, such as recently passed [legislation] in Connecticut, that removes out-of-pocket costs for follow-on diagnostic breast imaging and biopsy in the same way that these patient costs are prohibited for screening mammography,” he suggested.

The study was published online in JAMA Network Open.


 

‘Worrisome’ findings

The Affordable Care Act removed out-of-pocket costs for preventive health care, such as screening mammograms in women aged 40 and over.

However, cost barriers remain for some individuals who have a positive initial screening mammogram and need follow-up tests. For instance, research shows that women in high-deductible plans, which often have higher out-of-pocket costs than other plans, may experience delays in follow-on care, including diagnostic breast imaging.

Dr. Hughes and colleagues examined the association between the degree of patient cost-sharing across different health plans – those dominated by copays, coinsurance, or deductibles as well as those classified as balanced across the three categories – and the use of diagnostic breast cancer imaging after a screening mammogram.

The data came from Optum’s database of administrative health claims for members of large commercial and Medicare Advantage health plans. The team used a machine learning algorithm to rank patient insurance plans by type of cost-sharing.

The sample included 230,845 mostly White (71%) women 40 years and older with no prior history of breast cancer who underwent screening mammography. These women were covered by 22,828 distinct insurance plans associated with roughly 6 million enrollees and nearly 45 million distinct medical claims.

Plans dominated by coinsurance had the lowest average out-of-pocket costs ($945), followed by plans balanced across the three cost-sharing categories ($1,017), plans dominated by copays ($1,020), and plans dominated by deductibles ($1,186).

Compared with women with coinsurance plans, those with copay- and deductible-dominated plans underwent significantly fewer subsequent breast-imaging procedures – 24 and 16 fewer procedures per 1,000 women, respectively.

Use of follow-on breast MRI was nearly 24% lower among women in plans with the highest cost-sharing versus those in plans with the lowest cost-sharing.

The team found no statistically significant difference in breast biopsy use between plan types.

Considering the risks posed by an unconfirmed positive mammogram result, these findings are “startling” and call into question the efficacy of legislation that eliminated cost-sharing for many preventive services, including screening mammograms, Dr. Hughes and colleagues write.

“Additional policy changes, such as removing cost-sharing for subsequent tests after abnormal screening results or bundling all breast cancer diagnostic testing into a single reimbursement, may provide avenues to mitigate these financial barriers to care,” the authors add.

The authors of an accompanying editorial write that the study’s main finding – that some women who have an abnormal result on a mammogram may not get appropriate follow-up because of cost – is “worrisome.”

“From a population health perspective, failure to complete the screening process limits the program’s effectiveness and likely exacerbates health disparities,” write Ilana Richman, MD, with Yale University, New Haven, Conn., and A. Mark Fendrick, MD, with the University of Michigan, Ann Arbor.

“On an individual level, high out-of-pocket costs may directly contribute to worse health outcomes or require individuals to use scarce financial resources that may otherwise be used for critical items such as food or rent,” Dr. Richman and Dr. Fendrick add. And “the removal of financial barriers for the entire breast cancer screening process has potential to improve total screening uptake and follow-up rates.”

Support for the study was provided by the Harvey L. Neiman Health Policy Institute. Dr. Hughes has reported no relevant financial relationships. Dr. Richman has reported receiving salary support from the Centers for Medicare & Medicaid Services to develop health care quality measures outside the submitted work. Dr. Fendrick has reported serving as a consultant for AbbVie, Amgen, Bayer, CareFirst, BlueCross BlueShield, Centivo, Community Oncology Association, Covered California, EmblemHealth, Exact Sciences, GRAIL, Harvard University, HealthCorum, Hygieia, Johnson & Johnson, MedZed, Merck, Mercer, Montana Health Cooperative, Phathom Pharmaceuticals, Proton Intelligence, RA Capital, Teladoc Health, U.S. Department of Defense, Virginia Center for Health Innovation, Washington Health Benefit Exchange, Wildflower Health, and Yale-New Haven Health System; and serving as a consultant for and holding equity in Health at Scale Technologies, Pair Team, Sempre Health, Silver Fern Health, and Wellth.

A version of this article originally appeared on Medscape.com.

Publications
Topics
Sections

 

Despite federal legislation doing away with cost-sharing for initial breast cancer screening, out-of-pocket costs for needed follow-up tests remain significant financial barriers for many women.

An analysis of claims data found that women with higher cost-sharing undergo fewer subsequent breast diagnostic tests after an abnormal screening mammogram, compared with peers with lower cost-sharing.

“The chief clinical implication is that women with abnormal mammograms – that is, potentially at risk for cancer – are deciding not to follow-up on diagnostic imaging because of high out-of-pocket costs,” Danny Hughes, PhD, professor, College of Health Solutions, Arizona State University in Phoenix, told this news organization.

One course of action for radiologists is to “strongly communicate the importance of adhering to recommended follow-on testing,” Dr. Hughes said.

Another is to “work to pass national and state legislation, such as recently passed [legislation] in Connecticut, that removes out-of-pocket costs for follow-on diagnostic breast imaging and biopsy in the same way that these patient costs are prohibited for screening mammography,” he suggested.

The study was published online in JAMA Network Open.


 

‘Worrisome’ findings

The Affordable Care Act removed out-of-pocket costs for preventive health care, such as screening mammograms in women aged 40 and over.

However, lingering cost barriers remain for some individuals who have a positive initial screening mammogram and need follow-up tests. For instance, research shows that women in high-deductible plans, which often have higher out-of-pocket costs than other plans, may experience delays in follow-on care, including diagnostic breast imaging.

Dr. Hughes and colleagues examined the association between the degree of patient cost-sharing across different health plans – those dominated by copays, coinsurance, or deductibles as well as those classified as balanced across the three categories – and the use of diagnostic breast cancer imaging after a screening mammogram.

The data came from Optum’s database of administrative health claims for members of large commercial and Medicare Advantage health plans. The team used a machine learning algorithm to rank patient insurance plans by type of cost-sharing.

The sample included 230,845 mostly White (71%) women 40 years and older with no prior history of breast cancer who underwent screening mammography. These women were covered by 22,828 distinct insurance plans associated with roughly 6 million enrollees and nearly 45 million distinct medical claims.

Plans dominated by coinsurance had the lowest average out-of-pocket costs ($945), followed by plans balanced across the three cost-sharing categories ($1,017), plans dominated by copays ($1,020), and plans dominated by deductibles ($1,186).

Compared with women with coinsurance plans, those with copay- and deductible-dominated plans underwent significantly fewer subsequent breast-imaging procedures – 24 and 16 fewer procedures per 1,000 women, respectively.

Use of follow-on breast MRI was nearly 24% lower among women in plans with the highest cost-sharing versus those in plans with the lowest cost-sharing.

The team found no statistically significant difference in breast biopsy use between plan types.

Considering the risks posed by an unconfirmed positive mammogram result, these findings are “startling” and question the efficacy of legislation that eliminated cost-sharing from many preventive services, including screening mammograms, Dr. Hughes and colleagues write.

“Additional policy changes, such as removing cost-sharing for subsequent tests after abnormal screening results or bundling all breast cancer diagnostic testing into a single reimbursement, may provide avenues to mitigate these financial barriers to care,” the authors add.

The authors of an accompanying editorial found the study’s main finding – that some women who have an abnormal result on a mammogram may not get appropriate follow-up because of cost – is “worrisome.” 

“From a population health perspective, failure to complete the screening process limits the program’s effectiveness and likely exacerbates health disparities,” write Ilana Richman, MD, with Yale University, New Haven, Conn., and A. Mark Fendrick, MD, with the University of Michigan, Ann Arbor.

“On an individual level, high out-of-pocket costs may directly contribute to worse health outcomes or require individuals to use scarce financial resources that may otherwise be used for critical items such as food or rent,” Dr. Richman and Dr. Fendrick add. And “the removal of financial barriers for the entire breast cancer screening process has potential to improve total screening uptake and follow-up rates.”

Support for the study was provided by the Harvey L. Neiman Health Policy Institute. Dr. Hughes has reported no relevant financial relationships. Dr. Richman has reported receiving salary support from the Centers for Medicare & Medicaid Services to develop health care quality measures outside the submitted work. Dr. Fendrick has reported serving as a consultant for AbbVie, Amgen, Bayer, CareFirst, BlueCross BlueShield, Centivo, Community Oncology Association, Covered California, EmblemHealth, Exact Sciences, GRAIL, Harvard University, HealthCorum, Hygieia, Johnson & Johnson, MedZed, Merck, Mercer, Montana Health Cooperative, Phathom Pharmaceuticals, Proton Intelligence, RA Capital, Teladoc Health, U.S. Department of Defense, Virginia Center for Health Innovation, Washington Health Benefit Exchange, Wildflower Health, and Yale-New Haven Health System; and serving as a consultant for and holding equity in Health at Scale Technologies, Pair Team, Sempre Health, Silver Fern Health, and Wellth.

A version of this article originally appeared on Medscape.com.

Some diets better than others for heart protection

Article Type
Changed
Mon, 04/03/2023 - 14:24

 

In an analysis of randomized trials, the Mediterranean diet and low-fat diets were linked to reduced risks of all-cause mortality and nonfatal MI over 3 years in adults at increased risk for cardiovascular disease (CVD), and the Mediterranean diet was also linked to a lower risk of stroke.

Five other popular diets appeared to have little or no benefit with regard to these outcomes.

“These findings with data presentations are extremely important for patients who are skeptical about the desirability of diet change,” wrote the authors, led by Giorgio Karam, a medical student at the University of Manitoba, Winnipeg.

The results were published online in The BMJ.

Dietary guidelines recommend various diets along with physical activity or other cointerventions for adults at increased CVD risk, but these recommendations are often based on low-certainty evidence from nonrandomized studies and on surrogate outcomes.

Several meta-analyses of randomized controlled trials with mortality and major CV outcomes have reported benefits of some dietary programs, but those studies did not use network meta-analysis to give absolute estimates and certainty of estimates for adults at intermediate and high risk, the authors noted.

For this study, Mr. Karam and colleagues conducted a comprehensive systematic review and network meta-analysis in which they compared the effects of seven popular structured diets on mortality and CVD events for adults with CVD or CVD risk factors.

The seven diet plans were the Mediterranean, low fat, very low fat, modified fat, combined low fat and low sodium, Ornish, and Pritikin diets. Data for the analysis came from 40 randomized controlled trials that involved 35,548 participants who were followed for an average of 3 years.

There was evidence of “moderate” certainty that the Mediterranean diet was superior to minimal intervention for all-cause mortality (odds ratio [OR], 0.72), CV mortality (OR, 0.55), stroke (OR, 0.65), and nonfatal MI (OR, 0.48).

On an absolute basis (per 1,000 over 5 years), the Mediterranean diet led to 17 fewer deaths from any cause, 13 fewer CV deaths, seven fewer strokes, and 17 fewer nonfatal MIs.
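
How an odds ratio translates into an absolute effect depends on the baseline event risk, which the paper takes from the trial data. Here is a minimal worked example of the conversion, using a hypothetical 6.4% baseline 5-year mortality risk chosen purely so the arithmetic is visible, not a figure from the paper:

```python
# Convert an odds ratio into an absolute difference per 1,000 people.
# The 6.4% baseline 5-year risk is hypothetical, chosen for illustration;
# the paper derives its absolute estimates from the trial data themselves.
def fewer_events_per_1000(odds_ratio: float, baseline_risk: float) -> float:
    odds0 = baseline_risk / (1 - baseline_risk)  # baseline odds
    odds1 = odds_ratio * odds0                   # odds under the intervention
    risk1 = odds1 / (1 + odds1)                  # back to a probability
    return (baseline_risk - risk1) * 1000        # absolute difference per 1,000

# Mediterranean diet vs. minimal intervention, all-cause mortality: OR 0.72
print(round(fewer_events_per_1000(0.72, 0.064)))  # ~17 fewer deaths per 1,000
```

Because the same odds ratio implies a larger absolute difference at a higher baseline risk, the absolute benefits are expected to be greater in higher-risk adults, as the results below show.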

There was evidence of moderate certainty that a low-fat diet was superior to minimal intervention for prevention of all-cause mortality (OR, 0.84; nine fewer deaths per 1,000) and nonfatal MI (OR, 0.77; seven fewer nonfatal MIs per 1,000). The low-fat diet had little to no benefit with regard to stroke reduction.

The Mediterranean diet was not “convincingly” superior to a low-fat diet for mortality or nonfatal MI, the authors noted.

The absolute effects for the Mediterranean and low-fat diets were more pronounced in adults at high CVD risk. With the Mediterranean diet, there were 36 fewer all-cause deaths and 39 fewer CV deaths per 1,000 over 5 years.

The five other dietary programs generally had “little or no benefit” compared with minimal intervention. The evidence was of low to moderate certainty.

The studies did not provide enough data to gauge the impact of the diets on angina, heart failure, peripheral vascular events, and atrial fibrillation.

The researchers say that strengths of their analysis include a comprehensive and thorough literature search and a rigorous assessment of study bias. In addition, they adhered to recognized GRADE methods for assessing the certainty of estimates.

Limitations of their work include not being able to measure adherence to dietary programs and the possibility that some of the benefits may have been due to other factors, such as drug treatment and support for quitting smoking.

The study had no specific funding. The authors have disclosed no relevant financial relationships.

A version of this article originally appeared on Medscape.com.

A new way to gauge suicide risk?

Article Type
Changed
Mon, 04/03/2023 - 09:54

It’s possible to flag suicide risk by automatically extracting information on social determinants of health (SDOH) from the clinical notes in a patient’s electronic health record (EHR) using natural language processing (NLP), a form of artificial intelligence, new research shows.

Researchers found that SDOH are risk factors for suicide among U.S. veterans and that NLP can be leveraged to extract SDOH information from unstructured data in the EHR.

“Since SDOH is overwhelmingly described in EHR notes, the importance of NLP-extracted SDOH can be very significant, meaning that NLP can be used as an effective method for epidemiological and public health study,” senior investigator Hong Yu, PhD, from Miner School of Information and Computer Sciences, University of Massachusetts Lowell, told this news organization.

Although the study was conducted among U.S. veterans, the results likely hold for the general population as well.

“The NLP methods are generalizable. The SDOH categories are generalizable. There may be some variations in terms of the strength of associations in NLP-extracted SDOH and suicide death, but the overall findings are generalizable,” Dr. Yu said.

The study was published online in JAMA Network Open.
 

Improved risk assessment

SDOH, which include factors such as socioeconomic status, access to healthy food, education, housing, and physical environment, are strong predictors of suicidal behaviors.

Several studies have identified a range of common risk factors for suicide using International Classification of Diseases (ICD) codes and other “structured” data from the EHR. However, the use of unstructured EHR data from clinician notes has received little attention in investigating potential associations between suicide and SDOH.

Using the large Veterans Health Administration EHR system, the researchers determined associations between veterans’ death by suicide and recent SDOH, identified using both structured data (ICD-10 codes and Veterans Health Administration stop codes) and unstructured data (NLP-processed clinical notes).

Participants included 8,821 veterans who died by suicide and 35,284 matched controls. The cohort was mostly male (96%) and White (79%), with a mean age of 58 years.

The NLP-extracted SDOH were social isolation, job or financial insecurity, housing instability, legal problems, violence, barriers to care, transition of care, and food insecurity.
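
The study identified these categories with a deep-learning NLP system. Purely to make the extraction task concrete – this keyword-matching toy is far cruder than the authors’ approach, and the trigger phrases are invented – a baseline extractor might look like this:

```python
# Toy baseline for illustration only; the study used a deep-learning NLP system,
# not keyword matching. Trigger phrases are invented, not the study's lexicon.
import re

SDOH_TRIGGERS = {
    "social isolation": [r"\blives alone\b", r"\bno social support\b", r"\bisolated\b"],
    "housing instability": [r"\bhomeless\b", r"\beviction\b", r"\bunstable housing\b"],
    "legal problems": [r"\barrest(ed)?\b", r"\bincarcerat\w*\b", r"\bpending charges\b"],
    "food insecurity": [r"\bfood insecurity\b", r"\bskipping meals\b"],
}

def extract_sdoh(note: str) -> set:
    """Return the SDOH categories whose trigger phrases appear in a clinical note."""
    text = note.lower()
    return {cat for cat, pats in SDOH_TRIGGERS.items()
            if any(re.search(p, text) for p in pats)}

note = "Veteran lives alone, reports skipping meals since eviction last month."
print(extract_sdoh(note))
# -> social isolation, food insecurity, housing instability (set order varies)
```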

All of these NLP-extracted SDOH were significantly associated with an increased risk for death by suicide.

Legal problems had the largest estimated effect size, conferring more than twice the risk seen in veterans without such problems (adjusted odds ratio [aOR], 2.62; 95% confidence interval [CI], 2.38-2.89), followed by violence (aOR, 2.34; 95% CI, 2.17-2.52) and social isolation (aOR, 1.94; 95% CI, 1.83-2.06).

Similarly, all of the structured SDOH – social or family problems, employment or financial problems, housing instability, legal problems, violence, and nonspecific psychosocial needs – also showed significant associations with increased risk for suicide death, once again, with legal problems linked to the highest risk (aOR, 2.63; 95% CI, 2.37-2.91).

When combining the structured and NLP-extracted unstructured data, the top three risk factors for death by suicide were legal problems (aOR, 2.66; 95% CI 2.46-2.89), violence (aOR, 2.12; 95% CI, 1.98-2.27), and nonspecific psychosocial needs (aOR, 2.07; 95% CI, 1.92-2.23).
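
The article doesn’t detail the underlying model. As a generic sketch of how adjusted odds ratios with 95% CIs of this kind are typically estimated – shown here with statsmodels on simulated data, with invented effect sizes and covariates rather than the study’s variables:

```python
# Generic sketch on simulated data; effect sizes and covariates are invented
# and do not reproduce the study's model or results.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "legal_problems": rng.binomial(1, 0.10, n),
    "social_isolation": rng.binomial(1, 0.20, n),
    "age": rng.normal(58, 10, n),
})
# Simulate the outcome with built-in log-odds effects (demonstration only)
logit = -3 + 0.95 * df["legal_problems"] + 0.65 * df["social_isolation"]
df["suicide_death"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["legal_problems", "social_isolation", "age"]])
fit = sm.Logit(df["suicide_death"], X).fit(disp=0)

# Exponentiate coefficients and CI bounds to report aORs with 95% CIs
summary = pd.concat(
    [np.exp(fit.params).rename("aOR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1,
)
print(summary)
```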

“To our knowledge, this is the first large-scale study to implement and use an NLP system to extract SDOH information from unstructured EHR data,” the researchers write.

“We strongly believe that analyzing all available SDOH information, including those contained in clinical notes, can help develop a better system for risk assessment and suicide prevention. However, more studies are required to investigate ways of seamlessly incorporating SDOHs into existing health care systems,” they conclude.

Dr. Yu said it’s also important to note that their NLP system is built upon “the most advanced deep-learning technologies and therefore is more generalizable than most existing work that mainly used rule-based approaches or traditional machine learning for identifying social determinants of health.”

In an accompanying editorial, Ishanu Chattopadhyay, PhD, of the University of Chicago, said this suggests that unstructured clinical notes “may efficiently identify at-risk individuals even when structured data on the relevant variables are missing or incomplete.”

This work may provide “the foundation for addressing the key hurdles in enacting efficient universal assessment for suicide risk among the veterans and perhaps in the general population,” Dr. Chattopadhyay added.

This research was funded by a grant from the National Institute of Mental Health. The study authors and editorialist report no relevant financial relationships.

A version of this article originally appeared on Medscape.com.