More than one-third of adults in the US could have NAFLD by 2050

More than one out of three adults in the United States could have nonalcoholic fatty liver disease (NAFLD) by 2050, substantially increasing the national clinical burden, according to investigators.

These findings suggest that health care systems should prepare for “large increases” in cases of hepatocellular carcinoma (HCC) and need for liver transplants, reported lead author Phuc Le, PhD, MPH, of the Cleveland Clinic, and colleagues.

“Following the alarming rise in prevalence of obesity and diabetes, NAFLD is projected to become the leading indication for liver transplant in the United States in the next decade,” Dr. Le and colleagues wrote in their abstract for the annual meeting of the American Association for the Study of Liver Diseases. “A better understanding of the clinical burden associated with NAFLD will enable health systems to prepare to meet this imminent demand from patients.”

To this end, Dr. Le and colleagues developed an agent-based state transition model to predict future prevalence of NAFLD and associated outcomes.

In the first part of the model, the investigators simulated population growth in the United States using Census Bureau data, including new births and immigration, from the year 2000 onward. The second part of the model simulated natural progression of NAFLD in adults via 14 associated conditions and events, including steatosis, nonalcoholic steatohepatitis (NASH), HCC, liver transplants, liver-related mortality, and others.

By first comparing simulated findings with actual findings between 2000 and 2018, the investigators confirmed that their model could reliably predict the intended epidemiological parameters.
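
To make that structure concrete, here is a minimal sketch of how a two-part simulation of this kind can be organized: a population module that adds new adults each year, and a disease module that advances every simulated person through health states using annual transition probabilities. The states shown are a subset of the 14 conditions the authors modeled, and all probabilities are invented placeholders, not the calibrated values from the study.

```python
import random

# Hypothetical annual transition probabilities (state -> {next state: prob}).
# A real model would calibrate these against observed 2000-2018 data; the
# actual study tracked 14 NAFLD-related conditions and events.
TRANSITIONS = {
    "healthy":   {"healthy": 0.97, "steatosis": 0.03},
    "steatosis": {"steatosis": 0.93, "NASH": 0.05, "healthy": 0.02},
    "NASH":      {"NASH": 0.93, "cirrhosis": 0.05, "steatosis": 0.02},
    "cirrhosis": {"cirrhosis": 0.93, "HCC": 0.04, "dead": 0.03},
    "HCC":       {"HCC": 0.70, "dead": 0.30},
    "dead":      {"dead": 1.0},
}

def step(state: str) -> str:
    """Advance one agent by one simulated year."""
    next_states, probs = zip(*TRANSITIONS[state].items())
    return random.choices(next_states, weights=probs)[0]

def simulate(n_agents: int = 10_000, n_years: int = 30,
             new_adults_per_year: int = 200) -> float:
    """Part 1 grows the population; part 2 progresses disease states."""
    population = ["healthy"] * n_agents
    for _ in range(n_years):
        # Population module: new births reaching adulthood plus immigration.
        population += ["healthy"] * new_adults_per_year
        # Disease module: one annual transition per agent.
        population = [step(s) for s in population]
    alive = [s for s in population if s != "dead"]
    return sum(s != "healthy" for s in alive) / len(alive)  # NAFLD prevalence

print(f"simulated NAFLD prevalence: {simulate():.1%}")
```

Validating such a model, as the authors did, amounts to running it over 2000-2018 and checking the simulated trajectory against observed estimates before projecting forward.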

Next, they turned their model toward the future.

It predicted that the prevalence of NAFLD among US adults will rise from 27.8% in 2020 to 34.3% in 2050. Over the same timeframe, the prevalence of NASH is predicted to increase from 20.0% to 21.8%, the proportion of NAFLD cases developing cirrhosis is expected to increase from 1.9% to 3.1%, and liver-related mortality is estimated to rise from 0.4% to 1% of all deaths.

The model also predicted that the burden of HCC will increase from 10,400 to 19,300 new cases per year, while liver transplant burden will more than double, from 1,700 to 4,200 transplants per year.

“Our model forecasts substantial clinical burden of NAFLD over the next three decades,” Dr. Le said in a virtual press conference. “And in the absence of effective treatments, health systems should plan for large increases in the number of liver cancer cases and the need for liver transplant.”

During the press conference, AASLD president Norah Terrault, MD, of the University of Southern California, Los Angeles, noted that all of the reported outcomes, including increasing rates of liver cancer, cirrhosis, and transplants, are “potentially preventable.”

Dr. Terrault went on to suggest ways of combating this increasing burden of NAFLD, which she referred to as metabolic dysfunction–associated steatotic liver disease (MASLD), the name now recommended by the AASLD.

“There’s no way we’re going to be able to transplant our way out of this,” Dr. Terrault said. “We need to be bringing greater awareness both to patients, as well as to providers about how we seek out the diagnosis. And we need to bring greater awareness to the population around the things that contribute to MASLD.”

Rates of obesity and diabetes continue to rise, Dr. Terrault said, explaining why MASLD is more common than ever. To counteract these trends, she called for greater awareness of driving factors, such as dietary choices and sedentary lifestyle.

“These are all really important messages that we want to get out to the population, and are really the cornerstones for how we approach the management of patients who have MASLD,” Dr. Terrault said.

In discussion with Dr. Terrault, Dr. Le agreed that increased education may help stem the rising tide of disease, while treatment advances could also increase the odds of a brighter future.

“If we improve our management of NAFLD, or NAFLD-related comorbidities, and if we can develop an effective treatment for NAFLD, then obviously the future would not be so dark,” Dr. Le said, noting promising phase 3 data that would be presented at the meeting. “We are hopeful that the future of disease burden will not be as bad as our model predicts.”

The study was funded by the Agency for Healthcare Research and Quality. The investigators disclosed no conflicts of interest.

Food insecurity increases risk of adolescent MASLD

Adolescents facing food insecurity have a significantly increased risk of metabolic dysfunction-associated steatotic liver disease (MASLD), likely due to overconsumption of low-cost, ultra-processed, unbalanced diets, according to a recent study.

These findings suggest that more work is needed to ensure that eligible adolescents can access Supplemental Nutrition Assistance Program (SNAP) benefits and have opportunities to engage in physical activities through school-associated programs, reported principal investigator Zobair M. Younossi, MD, MPH, professor and chairman of the Beatty Liver and Obesity Research Program, Inova Health System, Falls Church, Virginia, and colleagues.

Dr. Younossi presented the findings in November during a press conference at the annual meeting of the American Association for the Study of Liver Diseases.

“Food insecurity among children is about 10.2% in the United States,” Dr. Younossi said. “[Food insecurity has] been shown to be a risk factor for MASLD among adults, but the data [in] children and adolescents are really lacking at the moment.”

To address this knowledge gap, Dr. Younossi and colleagues analyzed data from 771 adolescents aged 12-18 years in the National Health and Nutrition Examination Survey (2017-2018). Among these participants, 9.8% reported food insecurity and 10.8% had MASLD. Rates of obesity and central obesity were 22.5% and 45.4%, respectively, while 1.0% had diabetes and 20.9% had prediabetes.

Among adolescents facing food insecurity, more than half (51.5%) did not eat enough food, a vast majority (93.2%) could not access a balanced meal, and almost all (98.9%) relied upon low-cost food for daily sustenance.

The prevalence of MASLD in the food insecure group was almost twice as high as in the food secure group (18.7% vs. 9.9%), and advanced fibrosis was about 9 times more common (2.8% vs. 0.3%). Food insecure participants were also more likely to come from a low-income household (70.4% vs. 25.7%) and participate in SNAP (62.4% vs. 25.1%).

Adjusting for SNAP participation, demographic factors, and metabolic disease showed that food insecurity independently increased risk of MASLD by more than twofold (odds ratio [OR], 2.62; 95% CI, 1.07–6.41). The negative effect of food insecurity was almost twice as strong in participants living in a low-income household (OR, 4.79; 95% CI, 1.44–15.86).
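
As a concrete illustration of how adjusted odds ratios like these are obtained, the sketch below fits a logistic regression on synthetic data with statsmodels; the variable names and all values are invented for illustration and do not reproduce the NHANES analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 771  # same size as the adolescent sample; the data here are synthetic

# Synthetic covariates loosely mirroring the study's adjustment set.
df = pd.DataFrame({
    "food_insecure": rng.binomial(1, 0.10, n),
    "low_income":    rng.binomial(1, 0.30, n),
    "snap":          rng.binomial(1, 0.29, n),
    "age":           rng.integers(12, 19, n),
    "obesity":       rng.binomial(1, 0.22, n),
})
# Synthetic outcome with a built-in food-insecurity effect.
lin = -2.5 + 0.9 * df.food_insecure + 1.2 * df.obesity + 0.3 * df.low_income
df["masld"] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

fit = smf.logit("masld ~ food_insecure + low_income + snap + age + obesity",
                data=df).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios; same for CI bounds.
table = pd.concat(
    [np.exp(fit.params).rename("OR"),
     np.exp(fit.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})],
    axis=1)
print(table)
```

The stronger effect reported for low-income households would correspond to refitting the same model within that subgroup, or to adding an interaction term.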

“The association between food insecurity and MASLD/NAFLD is most likely the result of not being able to eat a balanced meal and more likely having to purchase low-cost food,” Dr. Younossi said. “Together, these factors may lead to a cycle of overeating along with the overconsumption of ultra-processed foods and sugar-sweetened food and beverages.”

He went on to suggest that more work is needed to remove “systemic and structural barriers” that prevent eligible adolescents from participating in SNAP, while offering support so they can participate in “more physical activity in school and in after-school programs.”

Elliot Benjamin Tapper, MD, associate professor of medicine at the University of Michigan, Ann Arbor, recently published a similar study in the Journal of Clinical Gastroenterology linking food scarcity and MASLD in adults.

In an interview, Dr. Tapper praised this new study by Dr. Younossi and colleagues because it “identifies a serious unmet need” among younger individuals, who may stand to benefit most from early intervention.

“The goal [of screening] is to prevent the development of progressive disease,” Dr. Tapper said. “Our current guidelines for screening for advanced liver disease and people with risk factors focus exclusively on adults. If you waited longer, then there’s a risk that these [younger] people [in the study] would have progressed to a later stage of disease.”

Dr. Tapper predicted increased enthusiasm for MASLD screening among adolescents in response to these findings, but he cautioned that conventional educational intervention is unlikely to yield significant benefit.

“If you’re food insecure, you can’t go out and buy salmon and olive oil to follow the Mediterranean diet,” Dr. Tapper said. “In this era, where the people who are at risk tomorrow are young and food insecure, we have to come up with a way of tailoring our interventions to the means that are available to these patients.”

To this end, health care providers need to collaborate with individuals who have personally dealt with food scarcity to implement practicable interventions.

“Referral to social work has to be paired with some kind of standard teaching,” Dr. Tapper said. “How would I use social and nutritional assistance programs to eat in a liver-healthy way? What can I avoid? [Educational materials] should be written by and edited by people with lived experience; i.e., people who have food insecurity or have walked a mile in those shoes.”

Dr. Younossi disclosed relationships with Merck, Abbott, AstraZeneca, and others. Dr. Tapper disclosed relationships with Takeda, Novo Nordisk, Madrigal, and others.

Pancreatic cystic neoplasms rarely turn cancerous, study shows

Individuals with intraductal papillary mucinous neoplasms (IPMNs) that lack “worrisome or high-risk features” have no greater risk of pancreatic cancer than individuals without IPMNs, based on a retrospective cohort study from Mayo Clinic.

These findings, if validated in a larger population, could challenge current surveillance practices for IPMNs, reported researchers led by Shounak Majumder, MD, a gastroenterologist in the pancreas clinic at Mayo Clinic, Rochester, Minn. The study was published in JAMA Network Open.

“Among intraductal papillary mucinous neoplasms (IPMNs) that were Fukuoka negative at baseline, fewer than 10% developed worrisome or high-risk features on follow-up. Pancreatic cancer development in IPMN was a rare event overall,” the authors wrote.

“Current international consensus guidelines for the management of IPMNs recommend image-based surveillance with the aim to detect clinical and imaging features of advanced neoplasia,” the authors wrote. Yet “there are no population-based estimates of the burden of pancreatic cancer in individuals with IPMNs or the proportion of pancreatic cancers that develop from or adjacent to an IPMN.”

Researchers aimed to address this knowledge gap with a population-based cohort study. Drawing data from the Rochester Epidemiology Project, which includes longitudinal medical records from residents of Olmsted County, Minn., investigators identified two cohorts. The first group comprised 2,114 patients 50 years old or older who had undergone abdominal CT scans between 2000 and 2015, among whom 231 (10.9%) had IPMNs. The second cohort included 320 patients diagnosed with pancreatic cancer between 2000 and 2019, among whom 31 (9.8%) had IPMNs.

Further analysis showed that 81% of the patients with IPMNs in the first cohort lacked Fukuoka high-risk or worrisome features. Within this subgroup, the incidence rate of pancreatic cancer was not significantly different from that among individuals without IPMNs.
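
For context on what such a comparison involves, incidence in each group is the number of events divided by accumulated person-time, and the two rates are compared with a rate ratio whose confidence interval is computed on the log scale. The sketch below runs that standard calculation on invented counts; the numbers are placeholders, not data from the Mayo study.

```python
import math

def rate_ratio(events_a: int, py_a: float, events_b: int, py_b: float):
    """Incidence rates and their ratio with a 95% CI.

    Uses the standard large-sample approximation: the standard error of
    log(IRR) is sqrt(1/events_a + 1/events_b).
    """
    rate_a = events_a / py_a
    rate_b = events_b / py_b
    irr = rate_a / rate_b
    se = math.sqrt(1 / events_a + 1 / events_b)
    lower = math.exp(math.log(irr) - 1.96 * se)
    upper = math.exp(math.log(irr) + 1.96 * se)
    return rate_a, rate_b, irr, (lower, upper)

# Placeholder counts: 4 cancers over 1,500 person-years in one group
# vs. 40 cancers over 18,000 person-years in the other.
ra, rb, irr, ci = rate_ratio(4, 1_500, 40, 18_000)
print(f"rates per 100 person-years: {100*ra:.2f} vs {100*rb:.2f}; "
      f"IRR {irr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```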

“Although the risk of IPMN-PC has been extensively described, our population-based study further demonstrates that most IPMNs did not progress in Fukuoka stage and did not transform into pancreatic cancer; a similar message was expressed by the current American Gastroenterological Association pancreatic cyst guidelines, published in 2015, and studies published in 2022 and 2016,” the investigators wrote.

Analyzing the cohort of 320 patients with pancreatic cancer showed those with IPMNs had significantly better outcomes than those without IPMNs, including longer survival and lower rate of metastatic disease upon diagnosis. These findings align with previous research, the investigators wrote.

In an accompanying editorial, Stefano Crippa, MD, PhD, of Istituto di Ricovero e Cura a Carattere Scientifico San Raffaele Scientific Institute, Milan, and colleagues offered their perspective on the findings.

“Although results of this study should be validated in larger cohorts, they represent useful clinical data from an unselected population-based cohort that helps challenge current IPMN surveillance policies that recommend lifetime active surveillance for all fit individuals,” they wrote. “Currently, we can use follow-up data from studies like this one to identify patients with IPMNs who are not at risk of progression based on clinical-radiological parameters. We can furthermore start selecting subgroups of patients with limited life expectancy due to age or comorbidities to be considered for surveillance discontinuation.”

Timothy Louis Frankel, MD, a gastrointestinal surgeon specializing in malignancies at the University of Michigan, Ann Arbor, said the findings are most useful for reassuring patients who have been diagnosed with an IPMN.

“The real take-home message is that in the absence of worrisome features people [with an IPMN] should feel comfortable that their risk is no higher than the general population for developing pancreatic cancer,” Dr. Frankel said in an interview.

Before any changes to surveillance can be considered, however, Dr. Frankel echoed the investigators’ call for a larger study, noting the relatively small population, most of whom (92%) were White.

“We do know that pancreas cancer and pancreas diseases vary significantly by race,” Dr. Frankel said. “So we do need to be a little bit cautious about changing the way that we manage patients based on a fairly homogeneous subset.”

He also pointed out that two patients had IPMNs that developed increased risk over time.

“They actually went from no risk features to having features that put them at risk,” Dr. Frankel said. “Those are patients who were saved by surveillance. So I’m not sure that this study was necessarily designed to let us know if and when we can stop following these lesions.”

Study authors had no relevant disclosures. The editorial writers reported no conflicts of interest.

U.S. kids are taking melatonin for sleep, despite evidence gap

Melatonin usage has become increasingly common among children in the United States, with almost one in five kids over the age of 5 having taken the sleep aid in the past 30 days, according to a recent study.

These findings should prompt clinicians to discuss with parents the various factors that could be driving sleep disturbances, and potential safety issues associated with melatonin usage, lead author Lauren E. Hartstein, PhD, a postdoctoral fellow in the Sleep and Development Lab at the University of Colorado, Boulder, and colleagues reported.

Writing in JAMA Pediatrics, the investigators noted that melatonin products are notorious for mislabeling, with active ingredient quantities as much as three times higher than the labeled amount. This issue is particularly concerning, they added, as calls to poison control for melatonin ingestion jumped more than fivefold from 2012 to 2021, with most cases involving children younger than 5 years. Meanwhile, scant evidence is available to characterize intentional usage in the same population.

“Current data are lacking on the prevalence of melatonin use and the frequency, dosing, and timing of melatonin administration in U.S. youth,” Dr. Hartstein and colleagues wrote.

To address this knowledge gap, the investigators conducted an online survey of parents with children and adolescents aged 1.0-13.9 years. The survey asked parents to report any melatonin usage in their children in the past 30 days.

Parents reporting melatonin usage were asked about frequency, dose, timing of administration before bedtime, and duration of use.

Findings were reported within three age groups: preschool (1-4 years), school aged (5-9 years), and preteen (10-13 years).

The survey revealed that almost one in five children in the older age groups were using melatonin, with a rate of 18.5% in the school-aged group and 19.4% in the preteen group. In comparison, 5.6% of preschool children had received melatonin for sleep in the past 30 days.

A significant uptick in usage

These findings point to a significant uptick in usage, according to Dr. Hartstein and colleagues, who cited a 2017-2018 study that found just 1.3% of U.S. children had taken melatonin in the past 30 days.

In the present study, melatonin was typically administered 30 minutes before bedtime, most often as a gummy (64.3%) or chewable tablet (27.0%).

Frequency of administration was similar between age groups and trended toward a bimodal pattern, with melatonin often given either 1 day per week or 7 days per week.

Median dose increased significantly with age, from 0.5 mg in the preschool group to 1.0 mg in the school-aged group and 2.0 mg in the preteen group. Median duration also showed a significant upward trend, with 12-month, 18-month, and 21-month durations, respectively, for ascending age groups.
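
As a small illustration of how survey summaries of this kind are tabulated, the sketch below computes per-age-group medians and a frequency distribution on synthetic responses; the data are invented, and only the medians quoted above reflect the actual study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
groups = ["preschool", "school-aged", "preteen"]

# Synthetic survey responses: age group, dose (mg), duration (months),
# and days of administration per week.
df = pd.DataFrame({
    "age_group":       rng.choice(groups, size=300),
    "dose_mg":         rng.choice([0.25, 0.5, 1.0, 2.0, 3.0, 5.0], size=300),
    "duration_months": rng.integers(1, 36, size=300),
    "days_per_week":   rng.choice([1, 2, 3, 7], size=300),
})

# Median dose and duration per age group (cf. the 0.5/1.0/2.0 mg and
# 12/18/21 month medians reported in the study).
print(df.groupby("age_group")[["dose_mg", "duration_months"]].median())

# Frequency distribution; a bimodal pattern shows peaks at 1 and 7 days.
print(df["days_per_week"].value_counts().sort_index())
```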

The investigators concluded that melatonin usage among U.S. adolescents and children is “exceedingly common,” despite a lack of evidence to support long-term safety or guide optimal dosing.

Is melatonin use masking other sleep issues?

“Widespread melatonin use across developmental stages may suggest a high prevalence of sleep disruption, which deserves accurate diagnosis and effective treatment,” Dr. Hartstein and colleagues wrote. “Dissemination of information regarding safety concerns, such as overdose and supplement mislabeling, is necessary. Clinicians should discuss with parents the factors associated with sleep difficulties and effective behavioral strategies.”

Large-scale, long-term studies are needed, they added, to generate relevant safety and efficacy data, and to characterize the factors driving melatonin administration by parents.

“Studies like these add to our knowledge base and give us insight into what patients or parents may be doing that can impact overall health,” said Alfonso J. Padilla, MD, assistant clinical professor of sleep medicine at the University of California, Los Angeles, in a written comment. “Often, in normal encounters with our patients we may not be able to gather this information easily. It may help open conversations about sleep issues that are not being addressed.”

Dr. Padilla suggested that parents may believe that melatonin is safe because it is not regulated by the Food and Drug Administration, when in fact they could be negatively impacting their children’s sleep. He noted that short-term risks include altered circadian rhythm and vivid dreams or nightmares, while long-term safety remains unclear.

“As a sleep physician, I use melatonin for specific indications only,” Dr. Padilla said. “I may use it in small children that are having difficulty falling asleep, especially in children with autism or special needs. I also use it for help in adjustment in circadian rhythm, especially in adolescents.”

He recommends melatonin, he added, only when he has a complete case history and melatonin is suitable for that patient.

Typically, it’s not.

“Most often a medication is not the answer for the sleep concern that parents are having about their child,” he said.

The investigators disclosed grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development and the Colorado Clinical and Translational Science Award Program of the National Center for Advancing Translational Sciences of the National Institutes of Health. They reported no conflicts of interest.

Publications
Topics
Sections

Melatonin usage has become increasingly common among children in the United States, with almost one in five kids over the age of 5 having taken the sleep aid in the past 30 days, according to a recent study.

These findings should prompt clinicians to discuss with parents the various factors that could be driving sleep disturbances, and potential safety issues associated with melatonin usage, lead author Lauren E. Hartstein, PhD, a postdoctoral fellow in the Sleep and Development Lab at the University of Colorado, Boulder, and colleagues reported.

Lauren E. Hartstein, PhD, a postdoctoral fellow in the Sleep and Development Lab at the University of Colorado, Boulder
Dr. Lauren E. Hartstein

Writing in JAMA Pediatrics, the investigators noted that melatonin products are notorious for mislabeling, with active ingredient quantities as much as three times higher than the labeled amount. This issue is particularly concerning, they added, as calls to poison control for melatonin ingestion jumped more than fivefold from 2012 to 2021, with most cases involving children younger than 5 years. Meanwhile, scant evidence is available to characterize intentional usage in the same population.

“Current data are lacking on the prevalence of melatonin use and the frequency, dosing, and timing of melatonin administration in U.S. youth,” Dr. Hartstein and colleagues wrote.

To address this knowledge gap, the investigators conducted an online survey of parents with children and adolescents aged 1.0-13.9 years. The survey asked parents to report any melatonin usage in their children in the past 30 days.

Parents reporting melatonin usage were asked about frequency, dose, timing of administration before bedtime, and duration of use.

Findings were reported within three age groups: preschool (1-4 years), school aged (5-9 years), and preteen (10-13 years).

The survey revealed that almost one in five children in the older age groups were using melatonin, with a rate of 18.5% in the school-aged group and 19.4% in the preteen group. In comparison, 5.6% of preschool children had received melatonin for sleep in the past 30 days.
 

A significant uptick in usage

These findings point to a significant uptick in usage, according to Dr. Hartstein and colleagues, who cited a 2017-2018 study that found just 1.3% of U.S. children had taken melatonin in the past 30 days.

In the present study, melatonin was typically administered 30 minutes before bedtime, most often as a gummy (64.3%) or chewable tablet (27.0%).

Frequency of administration was similar between age groups and trended toward a bimodal pattern, with melatonin often given either 1 day per week or 7 days per week.

Median dose increased significantly with age, from 0.5 mg in the preschool group to 1.0 mg in the school-aged group and 2.0 mg in the preteen group. Median duration also showed a significant upward trend, with 12-month, 18-month, and 21-month durations, respectively, for ascending age groups.

The investigators concluded that melatonin usage among U.S. adolescents and children is “exceedingly common,” despite a lack of evidence to support long-term safety or guide optimal dosing.
 

Is melatonin use masking other sleep issues?

“Widespread melatonin use across developmental stages may suggest a high prevalence of sleep disruption, which deserves accurate diagnosis and effective treatment,” Dr. Hartstein and colleagues wrote. “Dissemination of information regarding safety concerns, such as overdose and supplement mislabeling, is necessary. Clinicians should discuss with parents the factors associated with sleep difficulties and effective behavioral strategies.”

Large-scale, long-term studies are needed, they added, to generate relevant safety and efficacy data, and to characterize the factors driving melatonin administration by parents.

Dr. Alfonso J. Padilla, assistant clinical professor of sleep medicine at the David Geffen School of Medicine at UCLA
courtesy UCLA
Dr. Alfonso J. Padilla

“Studies like these add to our knowledge base and give us insight into what patients or parents may be doing that can impact overall health,” said Alfonso J. Padilla, MD, assistant clinical professor of sleep medicine at the University of California, Los Angeles, in a written comment. “Often, in normal encounters with our patients we may not be able to gather this information easily. It may help open conversations about sleep issues that are not being addressed.”

Dr. Padilla suggested that parents may assume melatonin is safe even though it is not regulated by the Food and Drug Administration, when in fact giving it could be negatively affecting their children’s sleep. He noted that short-term risks include an altered circadian rhythm and vivid dreams or nightmares, while long-term safety remains unclear.

“As a sleep physician, I use melatonin for specific indications only,” Dr. Padilla said. “I may use it in small children that are having difficulty falling asleep, especially in children with autism or special needs. I also use it for help in adjustment in circadian rhythm, especially in adolescents.”

He recommends melatonin, he added, only after taking a complete case history and confirming that melatonin is suitable for that patient.

Typically, it’s not.

“Most often a medication is not the answer for the sleep concern that parents are having about their child,” he said.

The investigators disclosed grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development and the Colorado Clinical and Translational Science Award Program of the National Center for Advancing Translational Sciences of the National Institutes of Health. They reported no conflicts of interest.


FROM JAMA PEDIATRICS

Liver-resident T cells provide early protection against Listeria infection


Liver-resident gamma delta T cells that produce interleukin (IL)-17 coordinate with hepatic macrophages to offer early protection against Listeria monocytogenes infection, according to investigators.

These findings suggest that gamma delta T17 cells could be a target for novel cell-based therapies against liver diseases, reported lead author Yanan Wang, PhD, of Shandong University, Jinan, China, and colleagues.

“Gamma delta T cells are located in mucosal tissues and other peripheral lymphoid tissues and are considered to act as the first line of defense within the immune system,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology. “Several studies have reported that IL-17A produced by gamma delta T cells plays a critical role in host defense after Listeria monocytogenes [infection] in the liver. However, in those studies, the details of the phenotypes, dynamic changes, proliferation activity, and cytokine production of the responding gamma delta T cell populations in the overall process of hepatic infection are unclear, and how they accumulated into the infection sites has not been elucidated.”

To address this knowledge gap, Dr. Wang and colleagues conducted a series of experiments involving gamma delta T cells from murine liver samples.

First, using single-cell RNA-sequencing (scRNA-seq), the investigators identified six clusters of hepatic gamma delta T cells.

“[This first step] revealed the unique gene expression characteristics and indicated the possible important roles in immune responses of hepatic gamma delta T17 cells,” they noted.
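For readers unfamiliar with how such clusters are derived, the sketch below shows a generic scRNA-seq clustering pass using the scanpy library. The study's actual pipeline and parameters are not described in this report, so the input file name and all settings here are assumptions, not the authors' method.

```python
import scanpy as sc

# Generic scRNA-seq clustering sketch; file name and parameters are hypothetical.
adata = sc.read_h5ad("hepatic_gd_tcells.h5ad")  # assumed input: cells x genes

sc.pp.normalize_total(adata, target_sum=1e4)    # library-size normalization
sc.pp.log1p(adata)                              # log-transform counts
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
adata = adata[:, adata.var.highly_variable]     # keep informative genes
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata)                          # kNN graph over cells
sc.tl.leiden(adata, resolution=0.5)             # graph-based clustering

print(adata.obs["leiden"].value_counts())       # cluster sizes at this resolution
```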

Next, the investigators measured expression of CD44 and CD27 in liver gamma delta T cells.

“Expression of CD44 and CD27 has been used to distinguish IL-17A–, interferon gamma–producing, and other subsets of gamma delta T cells in the thymus, lymph nodes, lungs, and other peripheral lymphoid tissues,” they wrote.

These efforts revealed three subsets of hepatic gamma delta T cells, of which CD44hiCD27– gamma delta T cells were most abundant. Further analysis revealed expression profiles consistent with liver residency.
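As a rough illustration of how such marker-based subsetting works in practice, the sketch below gates a per-cell expression table on CD44 and CD27. The data, column names, and thresholds are all invented for illustration and are not taken from the study.

```python
import pandas as pd

# Hypothetical per-cell surface-marker table; values and gates are invented.
cells = pd.DataFrame({
    "CD44": [9.1, 2.0, 8.7, 0.4, 7.9, 8.2],
    "CD27": [0.2, 5.3, 0.1, 4.8, 6.0, 0.3],
})

CD44_HI = 5.0   # assumed cutoff for "CD44-high"
CD27_NEG = 1.0  # assumed cutoff for "CD27-negative"

# Assign each cell to one of three subsets analogous to those described.
cells["subset"] = "CD44lo"
cells.loc[(cells.CD44 >= CD44_HI) & (cells.CD27 < CD27_NEG), "subset"] = "CD44hiCD27-"
cells.loc[(cells.CD44 >= CD44_HI) & (cells.CD27 >= CD27_NEG), "subset"] = "CD44hiCD27+"

print(cells["subset"].value_counts())  # CD44hiCD27- is most abundant in this toy table
```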

The next phases of the study characterized the immune roles of hepatic gamma delta T cells.

A comparison of Listeria monocytogenes infection in wild-type versus T-cell antigen receptor knockout mice, for example, showed that knockout mice had significantly more weight loss than did wild-type mice, greater bacterial load in the liver, and shorter survival times.

“As expected, the proportion and absolute numbers of gamma delta T cells in the liver of wild-type mice increased at day 3 and reached a peak at day 7 after infection,” the investigators wrote. “These data suggested that hepatic gamma delta T cells proliferated after infection and contributed to Lm clearance.”

Parabiosis experiments showed that the increased number of CD44hiCD27– gamma delta T cells in the livers of Listeria monocytogenes–infected mice was due to migration and proliferation of liver-resident gamma delta T cells rather than circulating gamma delta T cells. A transwell assay revealed that Kupffer cells and monocyte-derived macrophages promoted migration of CD44hiCD27– gamma delta T cells upon infection.

“Our study provides additional insight into liver-resident lymphocytes and will aid in targeting such tissue-resident lymphocyte populations to promote local immune surveillance,” the investigators concluded.

The study was supported by grants from the National Natural Science Foundation of China and the Shandong Provincial Natural Science Foundation. The investigators disclosed no conflicts of interest.


FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY

AGA publishes CPU for AI in colon polyp diagnosis and management


The American Gastroenterological Association has published a Clinical Practice Update (CPU) on artificial intelligence (AI) for diagnosing and managing colorectal polyps.

The CPU, authored by Jason Samarasena, MD, of UCI Health, Orange, Calif., and colleagues, draws on recent studies and clinical experience to discuss ways that AI is already reshaping colonoscopy, and what opportunities may lie ahead.

Dr. Jason Samarasena of the University of California, Irvine
Dr. Jason Samarasena

“As with any emerging technology, there are important questions and challenges that need to be addressed to ensure that AI tools are introduced safely and effectively into clinical endoscopic practice,” they wrote in Gastroenterology.

With advances in processing speed and deep-learning technology, AI “computer vision” can now analyze live video of a colonoscopy in progress, enabling computer-aided detection (CADe) and computer-aided diagnosis (CADx), which the panelists described as the two most important developments in the area.
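To make that work flow concrete, here is a schematic of how real-time detection and diagnosis hook into the video stream. It is a sketch only: detect_polyps and classify_histology are hypothetical stand-ins for trained vendor models, not any real product's API.

```python
# Schematic of CADe/CADx inference running frame by frame on live video.
# detect_polyps() and classify_histology() are hypothetical stand-ins for
# vendor models; no real product API is implied.

def detect_polyps(frame):
    """CADe stand-in: return candidate lesion bounding boxes as (x, y, w, h)."""
    return [(40, 60, 32, 32)] if frame.get("suspicious") else []

def classify_histology(frame, box):
    """CADx stand-in: predict histology for one detected lesion."""
    return ("likely benign", 0.97)

def process_stream(frames):
    for i, frame in enumerate(frames):                    # real-time, per-frame
        for box in detect_polyps(frame):                  # CADe: detection
            label, conf = classify_histology(frame, box)  # CADx: diagnosis
            print(f"frame {i}: lesion at {box} -> {label} ({conf:.0%})")

process_stream([{"suspicious": False}, {"suspicious": True}])
```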
 

CADe

“In the last several years, numerous prospective, multicenter studies have found that real-time use of AI CADe tools during colonoscopy leads to improvements in adenoma detection and other related performance metrics,” Dr. Samarasena and colleagues wrote.

CADe has yielded mixed success in real-world practice, however, with some studies reporting worse detection metrics after implementing the new technology. Dr. Samarasena and colleagues offered a variety of possible explanations for these findings, including a “ceiling effect” among highly adept endoscopists, reduced operator vigilance caused by false confidence in the technology, and potential confounding inherent to unblinded trials.

CADe may also increase health care costs and burden, they suggested, as the technology tends to catch small benign polyps, prompting unnecessary resections and shortened colonoscopy surveillance intervals.
 

CADx

These unintended consequences of CADe may be counteracted by CADx, which uses computer vision to predict which lesions have benign histology, enabling “resect-and-discard” or “diagnose-and-leave” strategies.

Such approaches could significantly reduce rates of polypectomy and/or histopathology, saving an estimated $33 million–150 million per year, according to the update.

Results of real-time CADx clinical trials have been “encouraging,” Dr. Samarasena and colleagues wrote, noting that emerging tools compatible with standard white-light endoscopy can achieve a negative predictive value of almost 98% for lesions less than 5 mm in diameter, potentially cutting the polypectomy rate almost in half.
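As a quick refresher on what that figure means, negative predictive value is the fraction of “leave it in situ” calls that turn out to be truly benign. The worked example below uses invented counts chosen only to reproduce a 98% NPV.

```python
# Negative predictive value (NPV) for optical diagnose-and-leave decisions.
# Counts are invented, chosen only to illustrate the ~98% figure.
true_negatives = 490   # called benign optically, benign on histology
false_negatives = 10   # called benign optically, neoplastic on histology

npv = true_negatives / (true_negatives + false_negatives)
print(f"NPV = {npv:.1%}")  # 98.0%: of 500 lesions left in situ, ~10 harbor neoplasia
```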

“Increasing endoscopist confidence in optical diagnosis may be an important step toward broader implementation of leave in situ and resect-and-discard strategies, but successful implementation will also require CADx tools that seamlessly integrate the endoscopic work flow, without the need for image enhancement or magnification,” the panelists wrote.

Reimbursement models may also need to be reworked, they suggested, as many GI practices depend on a steady stream of revenue from pathology services.
 

Computer-aided quality assessment systems

Beyond optical detection and diagnosis, AI tools are also being developed to improve colonoscopy technique.

Investigators are studying quality assessment systems that use AI to offer feedback on a range of endoscopist skills, including colonic-fold evaluation, level of mucosal exposure, and withdrawal time, the latter of which is visualized by a “speedometer” that “paints” the mucosa with “a graphical representation of the colon.”

“In the future, these types of AI-based systems may support trainees and lower-performing endoscopists to reduce exposure errors and, more broadly, may empower physician practices and hospital systems with more nuanced and actionable data on an array of factors that contribute to colonoscopy quality,” the panelists wrote.

Looking ahead

Dr. Samarasena and colleagues concluded by suggesting that the AI tools now in use and in development are just the beginning of a wave of technology that will revolutionize how colonoscopies are performed.

“Eventually, we predict an AI suite of tools for colonoscopy will seem indispensable, as a powerful adjunct to support safe and efficient clinical practice,” they wrote. “As technological innovation progresses, we can expect that the future for AI in endoscopy will be a hybrid model, where the unique capabilities of physicians and our AI tools will be seamlessly intertwined to optimize patient care.”

This CPU was commissioned and approved by the AGA Institute Clinical Practice Updates Committee and the AGA Governing Board. The investigators disclosed relationships with Olympus, Neptune Medical, Conmed, and others.


FROM GASTROENTEROLOGY

Even a short course of opioids could jeopardize IBD patient health


Short- or long-term use of opioids may increase risk of poor outcomes in patients with inflammatory bowel disease (IBD), according to investigators.

These findings amplify the safety signal from previous inpatient studies by showing that even a short course of opioids in an outpatient setting may increase risks of corticosteroid use and emergency department utilization, prompting caution among prescribers, reported Laura Telfer, MS, of Penn State College of Medicine, Hershey, Pa., and colleagues.

“Opioids are frequently prescribed to treat pain associated with IBD,” the investigators wrote in Gastro Hep Advances. “Unfortunately, they are associated with many problems in IBD, including increased risk of emergency room visits, hospitalization, surgery, and mortality. Chronic opioid use may also exacerbate symptoms and induce IBD flares, prompting discontinuation, thus increasing the risk of opioid withdrawal syndrome. Ironically, there is no published evidence that opioids even help to improve abdominal pain in IBD, particularly in the long term. Notably, most studies investigating opioid use in IBD have been limited to hospitalized patients, and few have directly evaluated the impact of opioid prescription length.”

Laura Telfer, MS, of Penn State College of Medicine, Hershey, Pennsylvania
Penn State College of Medicine
Laura Telfer

To address this knowledge gap, Ms. Telfer and colleagues conducted a retrospective, population-based cohort study involving patients with IBD who were classified as either long-term opioid users, short-term opioid users, or nonusers. Drawing data from more than 80,000 patients in the TriNetX Diamond Network, the investigators evaluated relative, intergroup risks for corticosteroid use, emergency department utilization, mortality, and IBD-related surgery.

Comparing short-term opioid users and nonusers revealed that short-term use more than doubled the risk of corticosteroid prescription (relative risk [RR], 2.517; P < .001) and increased the risk of an emergency department visit by approximately 32% (RR, 1.315; P < .001). Long-term use was associated with a similar doubling in risk of corticosteroid prescription (RR, 2.383; P < .001) and an even greater risk of emergency department utilization (RR, 2.083; P < .001). Risks of death or IBD-related surgery did not differ for either of these comparisons.
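For readers less familiar with the metric, relative risk is simply the event rate in the exposed group divided by the rate in the unexposed group. The minimal worked example below uses invented counts, not the TriNetX data.

```python
# Relative risk (RR) from a 2x2 table; counts are invented for illustration.
exposed_events, exposed_total = 252, 1000  # opioid users with a corticosteroid Rx
control_events, control_total = 100, 1000  # nonusers with a corticosteroid Rx

rr = (exposed_events / exposed_total) / (control_events / control_total)
print(f"RR = {rr:.3f}")  # 2.520: exposed group ~2.5x as likely to get corticosteroids
```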

Next, the investigators compared long-term versus short-term opioid use. This comparison suggested a duration-related effect: long-term users were 57% more likely than short-term users to utilize emergency department services (RR, 1.572; P < .001). No significant differences were detected for the other outcomes.

“Unlike previous studies, we did not find an association between opioid use and IBD-related surgery or death,” the investigators wrote. “Notably, these [previously reported] associations utilized opioid dosage (e.g., morphine equivalent or number of prescriptions), rather than length of opioid prescription (as we did). We also focused on IBD outpatients, while prior studies evaluated (in part or completely) inpatient populations, who typically present with more severe illness.”

Still, they added, the present findings should serve as a warning to prescribers considering even a short course of opioids for patients with IBD.

“This study demonstrates that prescribing opioids to IBD outpatients carries significant, specific risks, regardless of prescription length,” Ms. Telfer and colleagues wrote. “Healthcare professionals should exercise caution before prescribing these agents.”

The study was supported by the Peter and Marshia Carlino Early Career Professorship in Inflammatory Bowel Disease, the Margot E. Walrath Career Development Professorship in Gastroenterology, and the National Institutes of Health. The investigators disclosed no conflicts of interest.


Given that objective control of inflammation does not always correlate with improvement in abdominal pain scores, the use of opioids in patients with inflammatory bowel diseases (IBD) remains a difficult area of clinical practice and research. In this study, Telfer and colleagues performed a retrospective analysis using the TriNetX Diamond Network to assess the impact of opioid use on health-associated outcomes and evaluate for a differential impact on outcomes depending on the length of opioid prescription. When compared to non–opioid users, both short- and long-term opioid users were more likely to utilize corticosteroids and emergency department services. However, in contrast to prior studies, there was no increased risk for mortality demonstrated among those patients with short- or long-term opioid use.

Edward L. Barnes, MD, MPH, University of North Carolina at Chapel Hill
Jennifer Layton, MBA
Dr. Edward L. Barnes
In addition to demonstrating the potential risks associated with both short- and long-term opioid use among patients with IBD, this study reemphasizes the need to appropriately address the drivers of pain in IBD and to identify appropriate methods of treating that underlying pain. Despite the well-constructed data source, the retrospective nature of this study makes it difficult to untangle causation from association in the link between opioid use and increased corticosteroid use. However, the recognition that there is an underlying driver of pain in patients with IBD that must be addressed should prompt continued analysis of the best methods of pain control, the reasons for chronic opioid use in this population, and early treatment approaches that avoid opioid use and the related adverse IBD outcomes demonstrated in this study.
 

Edward L. Barnes, MD, MPH, is assistant professor of medicine at the University of North Carolina at Chapel Hill. He disclosed having served as a consultant for Target RWE (not relevant to this commentary).


FROM GASTRO HEP ADVANCES

Microsimulation model identifies 4-year window for pancreatic cancer screening


It takes an average of 4 years for a pancreatic lesion to progress from high-grade dysplasia (HGD) to cancer, suggesting a window of opportunity for screening, based on a microsimulation model.

To seize this opportunity, however, a greater understanding of natural disease course is needed, along with more sensitive screening tools, reported Brechtje D. M. Koopmann, MD, of Erasmus Medical Center, Rotterdam, the Netherlands, and colleagues.

Previous studies have suggested that the window of opportunity for pancreatic cancer screening may span decades, with estimates ranging from 12 to 50 years, the investigators wrote. Their report was published in Gastroenterology.

“Unfortunately, the poor results of pancreatic cancer screening do not align with this assumption, leaving unanswered whether this large window of opportunity truly exists,” they noted. “Microsimulation modeling, combined with available, if limited data, can provide new information on the natural disease course.”

For the present study, the investigators used the Microsimulation Screening Analysis (MISCAN) model, which has guided development of screening programs around the world for cervical, breast, and colorectal cancer. The model incorporates natural disease course, screening, and demographic data, then uses observable inputs such as precursor lesion prevalence and cancer incidence to estimate unobservable outcomes like stage durations and precursor lesion onset.
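The underlying idea, simulating many individual disease histories forward and tuning hidden transition rates until the simulated prevalence and incidence match observed data, can be sketched in a few lines. The toy below is not MISCAN; its two-state structure and all rates are invented purely to show the mechanics.

```python
import random

# Toy microsimulation: healthy -> precursor lesion -> cancer. All parameters
# are invented; MISCAN's actual natural-history model is far richer.
ONSET_RATE = 0.004        # assumed annual probability a precursor appears
PROGRESSION_RATE = 0.01   # assumed annual probability a precursor turns cancerous

def simulate_life(max_age=90):
    """Return (age at precursor onset, age at cancer); either may be None."""
    onset = cancer = None
    for age in range(max_age):
        if onset is None and random.random() < ONSET_RATE:
            onset = age
        elif onset is not None and random.random() < PROGRESSION_RATE:
            cancer = age
            break
    return onset, cancer

random.seed(0)
lives = [simulate_life() for _ in range(100_000)]

# Observable output (to calibrate against real data) and unobservable output
# (the quantity of interest: dwell time from lesion onset to cancer).
prevalence_by_80 = sum(o is not None and o <= 80 for o, _ in lives) / len(lives)
dwell_times = [c - o for o, c in lives if c is not None]
print(f"precursor onset by age 80: {prevalence_by_80:.1%}")
print(f"mean onset-to-cancer dwell time: {sum(dwell_times) / len(dwell_times):.1f} y")
```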

Dr. Koopmann and colleagues programmed this model with Dutch pancreatic cancer incidence data and findings from Japanese autopsy cases without pancreatic cancer.

First, the model offered insights into precursor lesion prevalence.

The estimated prevalence of any cystic lesion in the pancreas was 6.1% for individuals 50 years of age and 29.6% for those 80 years of age. Solid precursor lesions (pancreatic intraepithelial neoplasms [PanINs]) were estimated to be mainly multifocal (three or more lesions) in individuals older than 80 years. By this age, almost 12% had at least two PanINs. For those lesions that eventually became cancerous, the mean time since cyst onset was estimated to be 8.8 years, and mean time since PanIN onset was 9.0 years.

However, less than 10% of cystic and PanIN lesions progress to become cancers. PanIN lesions are not visible on imaging, and therefore current screening focuses on finding cystic precursor lesions, although these represent only about 10% of pancreatic cancers.

“Given the low pancreatic cancer progression risk of cysts, evaluation of the efficiency of current surveillance guidelines is necessary,” the investigators noted.

Screening should instead focus on identifying high-grade dysplastic lesions, they suggested. While these lesions may have a very low estimated prevalence, at just 0.3% among individuals 90 years of age, they present the greatest risk of pancreatic cancer.

For precursor cysts exhibiting HGD that progressed to pancreatic cancer, the mean interval between dysplasia and cancer was just 4 years. In 13.7% of individuals, the interval was less than 1 year, suggesting an even shorter window of opportunity for screening.

Beyond this brief timeframe, low test sensitivity explains why screening efforts to date have fallen short, the investigators wrote.

Better tests are “urgently needed,” they added, while acknowledging the challenges inherent to this endeavor. Previous research has shown that precursor lesions in the pancreas are often less than 5 mm in diameter, making them extremely challenging to detect. An effective tool would need to identify solid precursor lesions (PanINs) and simultaneously determine the grade of dysplasia.

“Biomarkers could be the future in this matter,” the investigators suggested.

Dr. Koopmann and colleagues concluded by noting that more research is needed to characterize the pathophysiology of pancreatic cancer. For their part, “the current model will be validated, adjusted, and improved whenever new data from autopsy or prospective surveillance studies become available.”

The study was funded in part by Maag Lever Darm Stichting. The investigators disclosed no conflicts of interest.


We continue to search for a way to effectively screen for and prevent pancreatic cancer. Most pancreatic cancers come from pancreatic intraepithelial neoplasms (PanINs), which are essentially invisible on imaging. Pancreatic cysts are relatively common, and only a small number will progress to cancer. Screening via MRI or EUS can look for high-risk features of visible cysts or find early-stage cancers, but whom to screen, how often, and what to do with the results remains unclear. Many of the steps from development of the initial cyst or PanIN to the transformation to cancer cannot be observed, and as such this is a perfect application for disease modeling that allows us to fill in the gaps of what can be observed and estimate what we cannot see.

Mary Linton B. Peters, MD, Beth Israel Deaconess Medical Center, Boston.
Beth Israel Deaconess Medical Center
Dr. Mary Linton B. Peters
In this study, the Dutch Pancreatic Cancer Group has developed a model of the behavior of pancreatic precursor lesions (cysts and PanINs) that helps us understand the timeline of cancer development. This model substantiates that although cysts and PanINs are common and increase with age, most (about 90%) will not transform into cancer. It also shows that high-grade dysplasia exists on average for 4 years before transformation, which could be a window of opportunity for screening and intervention. The challenge is how to detect these lesions. This model illustrates that biology is giving us a window of opportunity, but that we need to find the biomarkers to take advantage of that window.

Mary Linton B. Peters, MD, MS, is a medical oncologist specializing in hepatic and pancreatobiliary cancers at Beth Israel Deaconess Medical Center, Boston, an assistant professor at Harvard Medical School, and a senior scientist at the Institute for Technology Assessment of Massachusetts General Hospital. She reports unrelated institutional research funding from NuCana and Helsinn.

FROM GASTROENTEROLOGY

Many preoperative EAC biopsies fail to predict true tumor grade, leading to unnecessary esophagectomy

Article Type
Changed
Thu, 11/09/2023 - 10:06

Preoperative biopsy results in patients with esophageal adenocarcinoma (EAC) often misrepresent true tumor grade, according to a recent retrospective study.

Inaccurate preoperative biopsy findings could mean that patients who are candidates for endoscopic resection (ER) are unnecessarily undergoing esophagectomy, a procedure with greater risks of morbidity and mortality, reported Ravi S. Shah, MD, of Cleveland Clinic, and colleagues.

Dr. Ravi S. Shah of the Cleveland Clinic
Cleveland Clinic
Dr. Ravi S. Shah

“It is unclear how accurate tumor differentiation on endoscopic biopsies is and if it can be used for clinical decision-making,” the investigators wrote in Techniques and Innovations in Gastrointestinal Endoscopy. “Given that tumors may be considerably heterogeneous in gland formation, the limited amount of tissue obtained from endoscopic forceps biopsies may not be representative of the entire tumor for pathologic grading, which may result in discrepant tumor grading between biopsy and resection specimens.”

While previous studies have compared esophagogastroduodenoscopy-guided biopsy results with histological findings after surgical resection, scant evidence is available to compare biopsy findings with both surgically and endoscopically resected tissue.

Despite this potential knowledge gap, “many patients with poorly differentiated EAC on preresection biopsy do not undergo ER, with the belief that the final resection pathology would be noncurative,” the investigators noted.

To help clarify how closely preoperative biopsy grades agree with final resection pathology for both resection methods, Dr. Shah and colleagues conducted a retrospective study involving 346 EAC lesions. Samples were drawn from 121 ERs and 225 esophagectomies performed at two tertiary referral centers. Preoperative and postoperative findings were compared for accuracy and for level of agreement via Gwet’s AC2 interrater analysis.

For all evaluable lesions, preoperative biopsy had an accuracy of 68%, with a “substantial” agreement coefficient (Gwet’s AC2, 0.70; P < .001). Accuracy in the esophagectomy group was similar, at 72%, again with “substantial” agreement (Gwet’s AC2, 0.74; P < .001). For the ER group, however, accuracy was just 56%, with a “moderate” level of agreement (Gwet’s AC2, 0.60; P < .001).
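
Gwet’s AC2 is a chance-corrected agreement coefficient that behaves better than Cohen’s kappa when category prevalences are unbalanced, as they are here; the AC2 variant additionally credits partial agreement between ordered grades. As a rough illustration of the construction, here is the simpler unweighted variant, AC1, for two “raters” (biopsy grade and resection grade). The sample data are hypothetical, and this is not the authors’ analysis code.

```python
from collections import Counter

def gwet_ac1(ratings_a, ratings_b):
    """Gwet's AC1 (unweighted) for two raters.

    The study used the weighted variant, AC2, which additionally credits
    near-misses between ordered categories; this sketch shows only the
    simpler unweighted construction.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    categories = sorted(set(ratings_a) | set(ratings_b))
    q = len(categories)

    # Observed agreement: fraction of subjects both raters grade identically.
    pa = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n

    # Chance agreement a la Gwet: based on the average marginal proportion
    # of each category across both raters.
    counts = Counter(ratings_a) + Counter(ratings_b)
    pe = sum(
        (counts[c] / (2 * n)) * (1 - counts[c] / (2 * n)) for c in categories
    ) / (q - 1)

    return (pa - pe) / (1 - pe)

# Hypothetical data: biopsy vs. resection grade for 10 lesions.
biopsy    = ["mod", "mod", "poor", "mod", "poor", "mod", "mod", "poor", "mod", "mod"]
resection = ["mod", "poor", "poor", "mod", "mod", "mod", "mod", "poor", "mod", "poor"]
print(f"AC1 = {gwet_ac1(biopsy, resection):.2f}")
```

Unlike kappa, Gwet’s family of coefficients keeps chance agreement bounded even when one grade dominates, which is why it is often preferred for skewed pathology panels.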

“We speculate that the discrepancy of tumor differentiation on endoscopic forceps biopsies and resection specimens is due to nonrepresentative sampling of tumors to accurately determine the percentage of gland formation and thus tumor grade,” the investigators noted.

Further analysis showed that 22.7% of moderately differentiated tumors were upgraded to poorly differentiated upon final histology. Conversely, 19.6% of poorly differentiated tumors were downgraded to moderately differentiated. Downgrading was even more common among T1a tumors, 40% of which were changed from poorly to moderately differentiated between pre- and postprocedural histology.

These latter findings concerning downgrading are particularly relevant for clinical decision-making, the investigators noted, as patients with poorly differentiated EAC on preoperative biopsy are typically sent for esophagectomy – a more invasive and riskier procedure – out of concern that ER will be noncurative.

“If poor differentiation was the only high-risk feature, these patients may have unnecessarily undergone esophagectomy,” Dr. Shah and colleagues wrote. “Especially in marginal surgical candidates, staging ER should be considered in patients with early esophageal cancer with preoperative biopsies showing poorly differentiated cancer.”

The investigators disclosed relationships with Medtronic, Lucid Diagnostics, Lumendi, and others.

FROM TECHNIQUES AND INNOVATIONS IN GASTROINTESTINAL ENDOSCOPY

Report cards, additional observer improve adenoma detection rate

Article Type
Changed
Wed, 11/08/2023 - 16:14

Endoscopy centers may be able to improve their adenoma detection rate (ADR) by employing report cards and ensuring that each procedure is attended by an additional observer, according to results of a recent meta-analysis.

Although multimodal interventions like extra training with periodic feedback showed some signs of improving ADR, withdrawal time monitoring was not significantly associated with a better detection rate, reported Anshul Arora, MD, of Western University, London, Ont., and colleagues.

“Given the increased risk of postcolonoscopy colorectal cancer associated with low ADR, improving [this performance metric] has become a major focus for quality improvement,” the investigators wrote in Clinical Gastroenterology and Hepatology.

They noted that “numerous strategies” have been evaluated for this purpose, which may be sorted into three groups: endoscopy unit–level interventions (i.e., system changes), procedure-targeted interventions (i.e., technique changes), and technology-based interventions.

“Of these categories, endoscopy unit–level interventions are perhaps the easiest to implement widely because they generally require fewer changes in the technical aspect of how a colonoscopy is performed,” the investigators wrote. “Thus, the objective of this study was to conduct a systematic review and meta-analysis to identify endoscopy unit–level interventions aimed at improving ADRs and their effectiveness.”

To this end, Dr. Arora and colleagues analyzed data from 34 randomized controlled trials and observational studies involving 1,501 endoscopists and 371,041 procedures. They evaluated the relationship between ADR and implementation of four interventions: a performance report card, a multimodal intervention (e.g., training sessions with periodic feedback), presence of an additional observer, and withdrawal time monitoring.

Provision of report cards was associated with the greatest improvement in ADR, at 28% (odds ratio, 1.28; 95% confidence interval, 1.13-1.45; P < .001), followed by the presence of an additional observer, which increased ADR by 25% (OR, 1.25; 95% CI, 1.09-1.43; P = .002). The impact of multimodal interventions was “borderline significant,” the investigators wrote, with an 18% improvement in ADR (OR, 1.18; 95% CI, 1.00-1.40; P = .05). In contrast, withdrawal time monitoring showed no significant benefit (OR, 1.35; 95% CI, 0.93-1.96; P = .11).
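
Pooled odds ratios of this kind are conventionally computed on the log-odds scale with inverse-variance weights, often with a DerSimonian-Laird random-effects adjustment for between-study heterogeneity. The abstract does not specify the authors’ exact method, so the sketch below is generic, with invented study inputs rather than the trials in this meta-analysis.

```python
import math

# Minimal DerSimonian-Laird random-effects pooling on the log-odds scale.
# The study ORs and standard errors below are invented for illustration;
# they are not the trials included in this meta-analysis.
studies = [  # (odds ratio, standard error of log OR)
    (1.45, 0.20),
    (1.10, 0.15),
    (1.35, 0.25),
    (1.22, 0.18),
]

y = [math.log(or_) for or_, _ in studies]  # log odds ratios
w = [1 / se**2 for _, se in studies]       # fixed-effect (inverse-variance) weights

# Cochran's Q and the DerSimonian-Laird between-study variance tau^2.
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
df = len(studies) - 1
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights and pooled estimate with 95% CI.
w_re = [1 / (se**2 + tau2) for _, se in studies]
y_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
lo, hi = y_re - 1.96 * se_re, y_re + 1.96 * se_re
print(f"pooled OR {math.exp(y_re):.2f} (95% CI {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```

The random-effects adjustment widens the confidence interval when studies disagree more than sampling error alone would explain, which matters in a meta-analysis mixing randomized trials with observational data.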

In their discussion, Dr. Arora and colleagues offered guidance on the use of report cards, which were associated with the greatest improvement in ADR.

“We found that benchmarking individual endoscopists against their peers was important for improving ADR performance because this was the common thread among all report card–based interventions,” they wrote. “In terms of the method of delivery for feedback, only one study used public reporting of colonoscopy quality indicators, whereas the rest delivered report cards privately to physicians. This suggests that confidential feedback did not impede self-improvement, which is desirable to avoid stigmatization of low ADR performers.”

The findings also suggest that additional observers can boost ADR without specialized training.

“[The benefit of an additional observer] may be explained by the presence of a second set of eyes to identify polyps or, more pragmatically, by the Hawthorne effect, whereby endoscopists may be more careful because they know someone else is watching the screen,” the investigators wrote. “Regardless, extra training for the observer does not seem to be necessary because the three RCTs [evaluating this intervention] all used endoscopy nurses who did not receive any additional polyp detection training. Thus, endoscopy unit nurses should be encouraged to speak up should they see a polyp the endoscopist missed.”

The investigators disclosed no conflicts of interest.

Body

The effectiveness of colonoscopy in preventing colorectal cancer depends on the quality of the exam. Adenoma detection rate (ADR) is a validated quality indicator associated with lower risk of postcolonoscopy colorectal cancer. There are multiple interventions that can improve endoscopists’ ADR, but it is unclear which are higher yield than others. This study summarizes the existing studies of various interventions and finds the largest increase in ADR with the use of physician report cards. This is not surprising, as report cards both provide measurement and are themselves an intervention for improvement.

Interestingly, the included studies mostly used individual confidential report cards and demonstrated an improvement in ADR. Having a second set of eyes on the monitor was also associated with an increase in ADR; whether that reflects the observer picking up missed polyps or the endoscopist performing a more thorough exam because someone else is watching the screen is unclear. This is the same principle that current computer-assisted detection (CADe) devices rely on. While having a second observer may not be practical or cost-effective, and CADe is expensive, the takeaway is that there are multiple ways to improve ADR, and at the very least every physician should be receiving report cards or feedback on their quality indicators and working toward achieving and exceeding the minimum benchmarks.

Aasma Shaukat, MD, MPH, is the Robert M. and Mary H. Glickman Professor of Medicine at New York University Grossman School of Medicine, where she also holds a professorship in population health. She serves as director of outcomes research in the division of gastroenterology and hepatology and codirector of Translational Research Education and Careers (TREC). She disclosed serving as an adviser for Motus-GI and Iterative Health.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY
