New mRNA Vaccine May Shield Against C difficile Infections

Article Type
Changed
Tue, 10/29/2024 - 16:30
Display Headline
New mRNA Vaccine May Shield Against C difficile Infections

A group of researchers from the University of Pennsylvania, Philadelphia, has developed a messenger RNA (mRNA) vaccine, delivered via lipid nanoparticles (LNPs) — the same platform as the COVID-19 vaccines produced by Moderna and Pfizer — targeting Clostridioides difficile (formerly Clostridium difficile). According to the authors, the results of their preclinical study, published in Science, establish this technology as a promising platform for C difficile vaccine development and could be the starting point for curbing intestinal infections that, in their most severe forms (pseudomembranous colitis, toxic megacolon), can be fatal.

An Increasingly Pressing Issue

C difficile is the leading cause of infectious diarrhea acquired in healthcare settings. In recent years, community-acquired C difficile infections have also become more frequent. The increase in infections has been attributed to the emergence of highly virulent and antibiotic-resistant strains.

A 2019 study reported a global incidence of C difficile infections of 2.2 per 1000 hospital admissions per year and 3.5 per 10,000 patient-days per year.
 

The Vaccine Candidate

Vaccine candidates tested so far have used toxoids or recombinant proteins targeting the combined repetitive oligopeptide (CROP) or receptor-binding domain (RBD) of the two primary C difficile toxins, TcdA and TcdB. The US researchers are now exploring the mRNA-LNP vaccine approach to target multiple antigens simultaneously. They developed a bivalent vaccine (including the CROP and RBD domains of both toxins) and a trivalent vaccine (with an additional virulence factor, the metalloprotease Pro-Pro endopeptidase-1).

Mice vaccinated with the bivalent and trivalent vaccines produced immunoglobulin G antibody titers two to four times higher than those elicited by recombinant protein with an adjuvant. The vaccination stimulated the proliferation of follicular T helper cells and the antigen-specific response of B lymphocytes, laying the foundation for a strong and long-lasting humoral response. The vaccines were also immunogenic in hamsters.

Vaccinated mice not only survived a toxin dose five times higher than the 100% lethal dose but also demonstrated the vaccine’s protective effect through serum transfer; unvaccinated mice given serum from vaccinated mice survived the lethal challenge. More importantly, when exposed to a lethal dose of the bacterium itself, all vaccinated mice survived.

To demonstrate the vaccine’s efficacy in patients with a history of C difficile infection and high recurrence risk — ideal candidates for vaccination — the researchers vaccinated mice that had previously survived a sublethal infection. Six months after the initial infection and vaccination, these mice remained protected against mortality when reexposed to the bacterium.

Additionally, a quadrivalent vaccine that included an immunogen targeting C difficile spores — key agents in transmission — also proved effective. Low levels of bacteria and toxins in the feces of mice vaccinated in this way suggested that spore vaccination could limit initial colonization.

In tests with nonhuman primates, two doses of the vaccines targeting either the vegetative form or the spores elicited strong immune responses against bacterial toxins and virulence factors. Human trials may indeed be on the horizon.
 

This story was translated from Univadis Italy using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


How Has the RSV Season Changed Since the Pandemic Began?

Article Type
Changed
Wed, 07/10/2024 - 11:54

A recent study published in JAMA Network Open described the epidemiological characteristics of respiratory syncytial virus (RSV) infection in Ontario, Canada, after the onset of the COVID-19 pandemic. It is the latest in a series of studies suggesting that virus circulation dynamics and hospitalizations have changed over time. These are crucial pieces of information for managing the seasonal epidemic.
 

News From Canada

The Canadian study compared hospitalization rates and characteristics of children aged < 5 years who were admitted to the hospital for RSV infection during three prepandemic seasons (2017-2020) and two “postpandemic” seasons (2021-2023).

Compared with the prepandemic period, the 2021-2022 RSV season peaked a little earlier (early December instead of mid-December) but had comparable hospitalization rates. The 2022-2023 season, on the other hand, peaked a month earlier with a more than doubled hospitalization rate. Hospitalizations increased from about 2000 to 4977. In 2022, hospitalizations also occurred in spring and summer. In 2022-2023, more hospitalizations than expected were observed, especially in the 24-59–month-old group.

The percentage of patients hospitalized in intensive care units (ICUs) increased (11.4% in 2021-2022 and 13.9% in 2022-2023 compared with 9.8% in 2017-2018), and the ICU hospitalization rate tripled compared with the prepandemic period. No differences were observed in ICU length of stay or severe outcomes (such as use of extracorporeal membrane oxygenation or hospital mortality). The use of mechanical ventilation increased, however.
 

News From the USA

Another recent study, published in Pediatrics, provides an overview of RSV epidemiology in the United States based on data collected from seven pediatric hospitals across the country. Data from 2021 and 2022 were compared with those from four prepandemic seasons (2016-2020).

Most observations agree with what was reported in the Canadian study. In the four prepandemic years, the peak of RSV-associated hospitalizations was recorded in December-January. In 2021, it was in July, and in 2022, it was in November. Hospitalization rates of RSV-positive patients in 2021 and 2022 were higher than those in the prepandemic period. In 2022, compared with 2021, the hospitalization rate of children aged < 2 years did not change, while that of children aged 24-59 months increased significantly.

In 2022, the percentage of children requiring oxygen therapy was higher. But unlike in the other study, the percentage of children undergoing mechanical ventilation or those hospitalized in ICUs was not significantly different from the past. It is worth noting that in 2022, multiple respiratory coinfections were more frequently found in RSV-positive hospitalized children.
 

News From Italy

“In our experience, as well, the epidemiology of RSV has shown changes following the pandemic,” Marta Luisa Ciofi degli Atti, MD, head of the Epidemiology, Clinical Pathways, and Clinical Risk Complex Operating Unit at the Bambino Gesù Pediatric Hospital in Rome, Italy, told Univadis Italy. “Before the pandemic, RSV infection peaks were regularly in late December-January. The pandemic, with its containment measures, interrupted the typical seasonality of RSV: A season was skipped, and in 2021, there was a season unlike all previous ones because it came early, with a peak in October-November and a much higher incidence. In 2022, we also had a higher autumn incidence compared with the past, with a peak in November. However, the number of confirmed infections approached prepandemic levels. The 2023 season also started early, so prepandemic epidemiology does not seem to have stabilized yet.”

Like Canada and the United States, Italy saw an increase in incidence among older children in 2022. “Cases of children aged 1-4 years increased from 24% in 2018 to 30%, and those of children aged 5-9 years from 5.4% to 8.7%,” said Dr. Ciofi degli Atti. “Children in the first year of life were similarly affected in the pre- and postpandemic periods, while cases increased among older children. It is as if there has been an accumulation of susceptible patients: Children who did not get sick in the first year of life during the pandemic got sick later in the postpandemic period.”

Predicting (and Preventing) Chaos

As described in an article recently published in the Italian Journal of Pediatrics, Dr. Ciofi degli Atti worked on a model to predict the peak of RSV infections. “It is a mathematical predictive model that, based on observations in a certain number of seasons, allows the estimation of expectations,” she explained. It is challenging to develop a model when there are highly disruptive events such as a pandemic, she added, but these situations make predictive tools of the utmost interest. “The predictive capacity for the 2023 season was good: We had predicted that the peak would be reached in week 49, and indeed, the peak was observed in December.”

The study’s authors noted that in the years considered, the seasonal peak of RSV infections always occurred 4-5 weeks after the week in which the number of hospitalizations doubled or tripled. “It is a curve that rises very rapidly,” said the epidemiologist.
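The timing rule the authors describe — the seasonal peak arriving 4-5 weeks after the week in which weekly hospitalizations double or triple — can be sketched as a simple early-warning check on weekly count data. The function below is an illustrative sketch of that heuristic only, not the authors' actual predictive model; the function name and parameters are assumptions for the example.

```python
def predict_peak_week(weekly_counts, factor=2.0, lead_weeks=4):
    """Estimate the peak week of an RSV season from weekly hospitalization
    counts, assuming the peak falls `lead_weeks` after the first week whose
    count is at least `factor` times the previous week's count.
    Returns the 1-indexed predicted peak week, or None if no such jump."""
    pairs = zip(weekly_counts, weekly_counts[1:])
    for week, (prev, curr) in enumerate(pairs, start=1):
        if prev > 0 and curr >= factor * prev:
            return week + lead_weeks
    return None

# Example: counts first double in week 3 (15 -> 34), so with a 4-week
# lead the predicted peak is week 7.
counts = [10, 12, 15, 34, 70, 130, 180, 210, 190, 150]
print(predict_peak_week(counts))  # → 7
```

In practice such a rule would be applied to surveillance data as it accumulates during the season, flagging the rapid-rise week in real time rather than retrospectively.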

“RSV infection causes severe clinical conditions that affect young children who may need hospitalization and sometimes respiratory assistance. The epidemic peaks within a few weeks and has a disruptive effect on healthcare organization,” said Dr. Ciofi degli Atti. “Preventive vaccination is a huge opportunity in terms of health benefits for young children, who are directly involved, and also to reduce the impact that seasonal RSV epidemics have on hospital pathways. At the national and regional levels, work is therefore underway to start vaccination to prevent the circulation of this virus.”
 

This story was translated from Univadis Italy, which is part of the Medscape Professional Network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


Why Lung Cancer Screening Is Not for Everyone

Article Type
Changed
Wed, 04/24/2024 - 12:29

 

A study conducted in the United States showed that many individuals undergo lung cancer screening despite having a higher likelihood of experiencing harm rather than benefit. Why does this happen? Could it also occur in Italy?

Reasons in Favor

The authors of the study, which was published in Annals of Family Medicine, interviewed 40 former military personnel with a significant history of smoking. Though the patients had various comorbidities and a limited life expectancy, the Veterans Health Administration had offered them lung cancer screening.

Of the 40 respondents, 26 had accepted the screening test. When asked why, they responded, “to take care of my health and achieve my life goals,” “because screening is an opportunity to identify potential issues,” “because it was recommended by a doctor I trust,” and “because I don’t want to regret not accepting it.” Notably, when deciding about lung cancer screening, the respondents did not consider their poor health or limited life expectancy.
 

Potential Harms 

The screening was also welcomed because low-dose computed tomography (LDCT) is a noninvasive test. However, many participants were unaware that the screening needed to be repeated annually and that further imaging or other types of tests could follow LDCT, such as biopsies and bronchoscopies.

Many did not recall discussing with the doctor the potential harms of screening, including overdiagnosis, stress due to false positives, and complications and risks associated with investigations and treatments. Informed about this, several patients stated that they would not necessarily undergo further tests or antitumor treatments, especially if intensive or invasive.

The authors of the article emphasized the importance of shared decision-making with patients who have a marginal expected benefit from screening. But is it correct to offer screening under these conditions? Guidelines advise against screening individuals with limited life expectancy and multiple comorbidities because the risk-benefit ratio is not favorable.
 

Screening in Italy

Italy has no organized public program for lung screening. However, in 2022, the Rete Italiana Screening Polmonare (RISP) program for early lung cancer diagnosis was launched. Supported by European funds, it is coordinated by the National Cancer Institute (INT) in Milan and aims to recruit 10,000 high-risk candidates for free screening at 18 hospitals across Italy.

Optimizing participant selection is important in any screening, but in a program like RISP, it is essential, said Alessandro Pardolesi, MD, a thoracic surgeon at INT. “Subjects with multiple comorbidities would create a limit to the study, because there would be too many confounding factors. By maintaining correct inclusion criteria, we can build a reproducible model to demonstrate that screening has a clear social and economic impact. Only after proving its effectiveness can we consider extending it to patients with pre-existing issues or who are very elderly,” he said. The RISP project is limited to participants aged 55-75 years. Participants must be smokers or have quit smoking no more than 15 years ago, with an average consumption of 20 cigarettes per day for 30 years.
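The inclusion criteria above amount to a simple eligibility rule. Here is a minimal sketch in Python, assuming the "20 cigarettes per day for 30 years" criterion is equivalent to 30 pack-years; the function name and signature are illustrative, not part of the RISP program:

```python
def risp_eligible(age: int, current_smoker: bool, years_since_quit: float,
                  cigs_per_day: float, years_smoked: float) -> bool:
    """Hypothetical sketch of the RISP inclusion criteria described above."""
    # 1 pack = 20 cigarettes, so 20/day for 30 years ~= 30 pack-years
    pack_years = (cigs_per_day / 20) * years_smoked
    if not (55 <= age <= 75):
        return False
    # Former smokers qualify only if they quit no more than 15 years ago
    if not current_smoker and years_since_quit > 15:
        return False
    return pack_years >= 30
```

For example, a 60-year-old current smoker with a 30 pack-year history would qualify, while the same history in a 50-year-old would not.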

Participant selection for the RISP program is also dictated by the costs to be incurred. “If something emerges from the CT scan, whether oncologic or not, it needs to be investigated, triggering mechanisms that consume time, space, and resources,” said Dr. Pardolesi. The economic aspect is crucial for determining the effectiveness of screening. “We need to demonstrate that in addition to increasing the patient’s life expectancy, healthcare costs are reduced. By anticipating the diagnosis, the intervention is less expensive, the patient is discharged in three days, and there’s no need for therapy, so there’s a saving. This is important, given the increasingly evident economic problems of the Italian public health system,” said Dr. Pardolesi.

This story was translated from Univadis Italy, which is part of the Medscape Professional Network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


Robotic Mastectomy Is Feasible, But Is It Safe?

Article Type
Changed
Thu, 02/08/2024 - 13:50

A study from the University of Texas demonstrates the feasibility of using robotic single-port laparoscopy in nipple-sparing mastectomy (NSM), a type of conservative mastectomy that preserves the skin and nipple-areola complex. The new findings potentially expand the application of robotic surgery to a larger patient population, but doubts about the safety of this approach linger.

Robotic Mastectomy

The first breast removal surgeries using the da Vinci surgical robot date to 2015. Multiport robotic surgery faces significant obstacles in this field, however. Feasibility studies have primarily focused on women with small breasts, corresponding to cup size C or smaller.

In the study that was published in JAMA Surgery, surgeons used the more cost-effective single-port platform for bilateral NSM procedures. Among the 20 patients included in the analysis (age, 29-63 years), 11 underwent prophylactic mastectomy (for a high risk for cancer) and 9 had mastectomy for breast tumors. Breast sizes ranged from A cup to D cup.

The duration of the procedure, from skin incision to suture for both breasts, ranged from 205 to 351 minutes (median, 277 minutes). No immediate operative complications (eg, hematoma) occurred, and there was no need for conversion to open surgery in any case. Over the 36-month follow-up, there were no recurrences. About 95% of patients retained skin sensitivity and 55% retained nipple sensitivity.
 

Unanswered Questions

In an accompanying article, Monica Morrow, MD, director of surgical breast oncology at the Memorial Sloan-Kettering Cancer Center in New York, acknowledged that the new evidence confirms the surgical approach’s feasibility but deems it insufficient to adopt it lightly. “At this point, the issue is not whether robotic mastectomy can be done but whether there is sufficient information about its oncologic safety that it should be done,” she wrote.

In a 2019 statement that was updated in 2021, the US Food and Drug Administration stated, “The safety and effectiveness of using robotically assisted surgical devices in mastectomy procedures or in the prevention or treatment of breast cancer have not been established.” The significance of this caution is underscored by the experience with laparoscopic and robotic radical hysterectomies. These procedures were widely adopted until a randomized prospective study demonstrated lower disease-free and overall survival for the minimally invasive approach compared with open surgery.

The University of Texas surgeons stated that acceptable safety and oncological outcomes for robotic NSM compared with conventional NSM had been demonstrated. They cited two trials with 238 cases and a median follow-up of less than 3 years. Dr. Morrow wrote, “While these reports provide reassurance that gross residual tumor is not being left behind, they do not address the issue of failure to remove all of the breast tissue due to thick skin flaps, with the potential for development of late recurrence or new cancers.” It is worth noting that even with the traditional surgical approach, the 5-year local recurrence rate after NSM is approximately double that observed with shorter follow-ups.

According to Dr. Morrow, the high rate of sensory preservation observed with robotic surgery, a desirable outcome for patients, is also a cause for concern. “While this may be due to incision placement or minimal skin flap retraction, as suggested by the authors, it is equally plausible that this could be due to thick skin flaps with preservation of the terminal branches of the fourth intercostal nerve.”

Therefore, more information on long-term oncological outcomes in a large number of patients will be necessary to confirm the safety of the procedure. In addition, measuring patient-reported outcomes will be useful in demonstrating that the benefits of the procedure outweigh increased operating times and costs. 

This article was translated from Univadis Italy, which is part of the Medscape Professional Network. A version of this article appeared on Medscape.com.

FROM JAMA SURGERY


Toothbrushing in Hospital Reduces Infections and Death

Article Type
Changed
Tue, 01/16/2024 - 16:18

Daily toothbrushing is associated with a reduced incidence of hospital-acquired pneumonia (HAP), especially in patients on mechanical ventilation. This practice also is associated with lower intensive care unit (ICU) mortality, shorter ICU admissions, and shorter ventilator dependency. These are the findings of a meta-analysis published in JAMA Internal Medicine. Hospital policies must reassess the importance of oral hygiene even, or perhaps especially, in situations in which attention is focused elsewhere.

Oral Microbiota and Lungs

HAP largely results from the aspiration of microorganisms present in the oral cavity. In fact, the oral microbiota comprises an estimated 700 species of bacteria, fungi, viruses, and protozoa. There is a known link between oral health and the development of pneumonia, and rigorous oral hygiene is part of the recommendations for preventing HAP. But the methods that should be used for ensuring good hygiene haven’t been determined. The use of chlorhexidine-based mouthwash is debated because there is no evidence that it prevents pneumonia and because some studies have suggested a link between chlorhexidine and higher mortality rates.

Toothbrushing is potentially more effective than antiseptic at reducing the oral microbiota because the mechanical action breaks up plaque and other biofilms. Yet, guidelines have focused very little on brushing as a measure for preventing hospital-acquired infections, meaning that every hospital has its own way of doing things.
 

What Data Show

Selina Ehrenzeller, MD, and Michael Klompas, MD, MPH, of the department of population medicine at Harvard Medical School, Boston, conducted a systematic literature analysis to identify randomized clinical studies in which daily toothbrushing was shown to affect the risk for HAP in adult hospital inpatients. Fifteen studies met the inclusion criteria and were used for the meta-analysis. The effective population size was 2786 patients.

Daily toothbrushing was associated with a 33% lower risk for HAP (relative risk [RR], 0.67) and a 19% lower risk for ICU mortality (RR, 0.81). The reduction in pneumonia incidence was significant for patients receiving invasive mechanical ventilation (RR, 0.68) but not for patients who were not. Toothbrushing for patients in the ICU was associated with fewer days of mechanical ventilation (mean difference, −1.24 days) and a shorter ICU length of stay (mean difference, −1.78 days). Brushing twice a day vs more frequently was associated with similar effect estimates. No differences linked to daily toothbrushing were seen in length of stay in ICU subdepartments or in antibiotic use.
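As a quick check on the figures above, a relative risk converts to a percent risk reduction as (1 − RR) × 100. A minimal Python sketch, using only the relative risks reported by the meta-analysis (no patient-level data):

```python
def percent_risk_reduction(rr: float) -> int:
    """Convert a relative risk into a rounded percent risk reduction."""
    return round((1 - rr) * 100)

# Relative risks reported in the meta-analysis
print(percent_risk_reduction(0.67))  # HAP: 33% lower risk
print(percent_risk_reduction(0.81))  # ICU mortality: 19% lower risk
```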
 

Expert Opinion

“This study represents an exciting contribution to infection prevention and reinforces the notion that routine toothbrushing is an essential component of standard of care in ventilated patients,” Rupak Datta, MD, PhD, assistant professor of infectious diseases at Yale University in New Haven, Connecticut, and specialist in antimicrobial resistance in hospital settings, wrote in a commentary on the study. According to Dr. Datta, there is still uncertainty about the importance of this practice in preventing nonventilator HAP, as the investigators could identify only two studies of nonventilated patients that met the inclusion criteria. Further studies will be needed to standardize toothbrushing for hospital inpatients in general. “As the literature on HAP evolves,” concluded Dr. Datta, “oral hygiene may take on an indispensable role, similar to hand washing, in preventing and controlling hospital-acquired infections.”

This article was translated from Univadis Italy, which is part of the Medscape Professional Network. A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Daily toothbrushing is associated with a reduced incidence of hospital-acquired pneumonia (HAP), especially in patients on mechanical ventilation. This practice also is associated with lower intensive care unit (ICU) mortality, shorter ICU admissions, and shorter ventilator dependency. These are the findings of a meta-analysis published in JAMA Internal Medicine. Hospital policies must reassess the importance of oral hygiene even, or perhaps especially, in situations in which attention is focused elsewhere.

Oral Microbiota and Lungs

HAP largely results from the aspiration of microorganisms present in the oral cavity. In fact, the oral microbiota comprises an estimated 700 species of bacteria, fungi, viruses, and protozoa. There is a known link between oral health and the development of pneumonia, and rigorous oral hygiene is part of the recommendations for preventing HAP. But the methods that should be used for ensuring good hygiene haven’t been determined. The use of chlorhexidine-based mouthwash is debated because there is no evidence that it prevents pneumonia and because some studies have suggested a link between chlorhexidine and higher mortality rates.

Toothbrushing is potentially more effective than antiseptic at reducing the oral microbiota because the mechanical action breaks up plaque and other biofilms. Yet, guidelines have focused very little on brushing as a measure for preventing hospital-acquired infections, meaning that every hospital has its own way of doing things.
 

What Data Show

Daily toothbrushing is associated with a reduced incidence of hospital-acquired pneumonia (HAP), especially in patients on mechanical ventilation. This practice also is associated with lower intensive care unit (ICU) mortality, shorter ICU admissions, and shorter ventilator dependency. These are the findings of a meta-analysis published in JAMA Internal Medicine. Hospital policies must reassess the importance of oral hygiene even, or perhaps especially, in situations in which attention is focused elsewhere.

Oral Microbiota and Lungs

HAP largely results from the aspiration of microorganisms present in the oral cavity. In fact, the oral microbiota comprises an estimated 700 species of bacteria, fungi, viruses, and protozoa. There is a known link between oral health and the development of pneumonia, and rigorous oral hygiene is part of the recommendations for preventing HAP. But the methods that should be used for ensuring good hygiene haven’t been determined. The use of chlorhexidine-based mouthwash is debated because there is no evidence that it prevents pneumonia and because some studies have suggested a link between chlorhexidine and higher mortality rates.

Toothbrushing is potentially more effective than antiseptics at reducing the oral microbiota because its mechanical action breaks up plaque and other biofilms. Yet guidelines have paid little attention to brushing as a measure for preventing hospital-acquired infections, meaning that every hospital has its own way of doing things.
 

What Data Show

Selina Ehrenzeller, MD, and Michael Klompas, MD, MPH, of the department of population medicine at Harvard Medical School, Boston, conducted a systematic literature analysis to identify randomized clinical studies in which daily toothbrushing was shown to affect the risk for HAP in adult hospital inpatients. Fifteen studies met the inclusion criteria and were used for the meta-analysis. The effective population size was 2786 patients.

Daily toothbrushing was associated with a 33% lower risk for HAP (relative risk [RR], 0.67) and a 19% lower risk for ICU mortality (RR, 0.81). The reduction in pneumonia incidence was significant for patients receiving invasive mechanical ventilation (RR, 0.68) but not for patients who were not receiving invasive mechanical ventilation. Toothbrushing for patients in the ICU was associated with fewer days of mechanical ventilation (mean difference, −1.24 days) and a shorter ICU length of stay (mean difference, −1.78 days). Brushing twice a day and brushing at more frequent intervals yielded similar effect estimates. Daily toothbrushing was not associated with differences in length of stay in non-ICU wards or in antibiotic use.
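For readers converting between the two ways the results are reported: the percentage risk reductions follow directly from the relative risks (relative risk reduction = 1 − RR). A minimal sketch, not taken from the study's analysis code:

```python
# Sketch: converting a reported relative risk (RR) into the
# relative risk reduction quoted in prose, as a whole percentage.
def risk_reduction(rr: float) -> int:
    """Relative risk reduction, rounded to a whole percentage."""
    return round((1 - rr) * 100)

print(risk_reduction(0.67))  # HAP: prints 33
print(risk_reduction(0.81))  # ICU mortality: prints 19
```

For example, the HAP relative risk of 0.67 corresponds to the 33% relative reduction quoted above.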
 

Expert Opinion

“This study represents an exciting contribution to infection prevention and reinforces the notion that routine toothbrushing is an essential component of standard of care in ventilated patients,” Rupak Datta, MD, PhD, assistant professor of infectious diseases at Yale University in New Haven, Connecticut, and a specialist in antimicrobial resistance in hospital settings, wrote in a commentary on the study. According to Dr. Datta, there is still uncertainty regarding the importance of this practice in preventing non-ventilator HAP, as the investigators could identify only two studies of nonventilated patients that met the inclusion criteria. Further studies will be needed to help standardize toothbrushing for hospitalized patients in general. “As the literature on HAP evolves,” concluded Dr. Datta, “oral hygiene may take on an indispensable role, similar to hand washing, in preventing and controlling hospital-acquired infections.”

This article was translated from Univadis Italy, which is part of the Medscape Professional Network. A version of this article appeared on Medscape.com.


AI Aids in Monitoring Asthma in Young Children

Article Type
Changed
Wed, 01/10/2024 - 14:21

Can asthma symptoms be monitored reliably at home? Until now, the answer would have been yes, but not in preschool-age patients. Recent findings in Annals of Family Medicine suggest that this limitation can be overcome with the assistance of artificial intelligence (AI). The use of an AI-assisted stethoscope can generate reliable data, even in young children, thus providing caregivers with information about asthma exacerbations.

Objectivity Challenge

A timely diagnosis of asthma exacerbations, which is crucial for proper disease management, requires effective home monitoring. While some lung function parameters, like peak expiratory flow (PEF), can be measured by patients at home, tools for this purpose are not designed for very young children.

“To achieve effective asthma management, patients should be given the necessary tools to allow them to recognize and respond to worsening asthma,” wrote the study authors. Despite the Global Initiative for Asthma identifying respiratory sounds as a fundamental parameter for exacerbation recognition, these are almost exclusively evaluated during doctor visits. Recognizing respiratory sounds and judging whether there has been a change can be challenging for those outside the medical profession.

To enhance home monitoring, researchers from the Department of Pediatric Pneumology and Rheumatology at the University of Lublin, Poland, experimented with the StethoMe stethoscope, which enables the recognition of pathologic signs, including continuous and transient noises. This AI-assisted stethoscope, trained on over 10,000 respiratory sound recordings, is certified as a Class IIa medical device in Europe.
 

The ‘Smart’ Stethoscope

The 6-month study enlisted 149 patients with asthma (90 children and 59 adults). Participants self-monitored (but parents or caregivers managed for children) once daily in the first 2 weeks and at least once weekly thereafter using three tools. The first was the StethoMe stethoscope, which was used for detecting respiratory sounds, respiratory rate (RR), heart rate (HR), and inspiration/expiration ratio (I/E). Patients were provided a “map” of chest points at which to position the stethoscope. The second was a pulse oximeter, which was used to measure oxygen saturation. The third was a peak flow meter for quantifying PEF. Simultaneously, a health questionnaire was completed.

Data from 6029 completed self-monitoring sessions were used to determine the most effective parameters for exacerbation recognition, quantified by the area under the receiver operating characteristic curve (AUC). The researchers concluded that the best-performing parameters were wheeze intensity in young children (AUC, 84%; 95% CI, 82%-85%), wheeze intensity in older children (AUC, 81%; 95% CI, 79%-84%), and questionnaire responses for adults (AUC, 92%; 95% CI, 89%-95%). Combining multiple parameters increased effectiveness.
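For readers unfamiliar with the metric, the AUC values above have a concrete interpretation: the probability that a randomly chosen session with an exacerbation receives a higher score than a randomly chosen session without one (the Mann-Whitney formulation). A toy illustration with hypothetical wheeze-intensity scores, not data from the study:

```python
# Rank-based (Mann-Whitney) AUC: the fraction of positive/negative
# pairs in which the positive session outscores the negative one,
# with ties counted as half a win.
def auc(pos_scores, neg_scores):
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos_scores for n in neg_scores
    )
    return wins / (len(pos_scores) * len(neg_scores))

exacerbation = [0.9, 0.8, 0.7, 0.4]     # hypothetical scores, sessions with exacerbation
no_exacerbation = [0.3, 0.2, 0.6, 0.1]  # hypothetical scores, sessions without
print(auc(exacerbation, no_exacerbation))  # prints 0.9375
```

An AUC of 50% corresponds to chance-level discrimination; the 84% reported for wheeze intensity in young children means the score separates exacerbation from non-exacerbation sessions well above chance.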

“The present results clearly show that a set of parameters (wheezes, rhonchi, coarse and fine crackles, HR, RR, and I/E) measured by a device such as an AI-aided home stethoscope allows for the detection of exacerbations without the need for performing PEF measurements, which can be equivocal,” the study authors concluded. “In addition, in the case of younger children (age, < 5 years), when introduced on a large scale, the analyzed home stethoscope appears to be a promising tool that might make asthma diagnosis more straightforward and substantially facilitate asthma monitoring.”

A version of this article first appeared on Medscape.com. This article was translated from Univadis Italy, which is part of the Medscape professional network.



Article Source

FROM ANNALS OF FAMILY MEDICINE


Defining difficult-to-treat inflammatory bowel disease

Article Type
Changed
Fri, 08/11/2023 - 16:44

Despite advances in treatment, a large proportion of patients with inflammatory bowel disease (IBD) do not achieve or remain in remission, even after successive lines of treatment. Until now, one major obstacle has impeded the interpretation of studies focusing on patients with this chronic condition: the lack of standard criteria and terminology among authors.

Under the guidance of the endpoints cluster of the International Organization for the Study of Inflammatory Bowel Disease (IOIBD), a group of experts held a consensus meeting to propose a common operative definition for “difficult-to-treat IBD.” It’s the first step to better understanding this condition and designing targeted studies and interventions.
 

The definition

After the meeting, the experts agreed that “difficult-to-treat IBD” is defined by these characteristics:

  • The failure of biologics and advanced small molecules with at least two different mechanisms of action.
  • Postoperative recurrence of Crohn’s disease after two surgical resections in adults or one in children.
  • Chronic antibiotic-refractory pouchitis (inflammation of the ileal pouch-anal anastomosis [J-pouch] created in patients with ulcerative colitis who have had total colectomy surgery).
  • Complex perianal disease (difficult-to-treat Crohn’s disease).
  • Comorbid psychosocial complications that impair disease management (for example, comorbid disorders that obstruct treatment compliance, participation in follow-up visits, or objective assessment of symptoms by clinicians).

The path here

The starting point was the IOIBD-sponsored 2022 global survey in which doctors treating patients with IBD were asked what they thought contributed to difficult-to-treat IBD. Using the responses from that survey, a series of statements were drawn up covering these three main areas: failure of medical and surgical treatments, disease phenotypes, and specific complaints from patients (not limited to bowel disease).

The statements were scrutinized by a 16-person task force made up of experts from eight European countries, Canada, Japan, Israel, and the United States. The project and its findings were published in the journal The Lancet Gastroenterology & Hepatology.

Using a modified Delphi technique, the experts argued for or against the 20 proposed statements. Consensus was achieved for five of them (meaning that at least 75% of voters were in agreement).
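The 75% threshold is a simple majority rule over the voting panel; a hypothetical tally (the task force's actual voting procedure is not detailed here):

```python
# Illustrative sketch of the consensus rule described above:
# a statement reaches consensus when at least 75% of voters agree.
def reaches_consensus(votes_for: int, votes_total: int, threshold: float = 0.75) -> bool:
    return votes_for / votes_total >= threshold

print(reaches_consensus(12, 16))  # 12 of a 16-person panel is exactly 75%: True
print(reaches_consensus(11, 16))  # 68.75% falls short: False
```

On a 16-person task force such as this one, 12 concurring votes is the minimum that clears the bar.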
 

What does it mean?

“The scope of this consensus initiative was twofold,” explain the authors. “First, we wanted to help standardize study reporting and promote clinical study designs that include patients with difficult-to-treat IBD by proposing common terminology. Second, we hoped to identify, within clinical practice, a group of patients requiring specific treatment or referral to a specialist unit. For patients with conditions resistant to two or more advanced drug types (what is referred to as difficult-to-treat IBD), more aggressive treatment strategies, such as combined therapies or multidisciplinary approaches, should be taken into consideration.

“In the field of rheumatology, the creation of common criteria for difficult-to-treat rheumatoid arthritis has allowed researchers to concentrate their efforts on identifying progressive disease markers, assessing drug efficacy, mechanisms of inefficacy, personalized management strategies, and analyzing the use of health care resources and costs. Similar advances could be achieved in the area of inflammatory bowel disease.”

This article was translated from Univadis Italy. A version appeared on Medscape.com.




AI efforts make strides in predicting progression to RA

Article Type
Changed
Tue, 06/13/2023 - 15:05

MILAN – Two independent efforts to use artificial intelligence (AI) to predict the development of early rheumatoid arthritis (RA) from patients with signs and symptoms not meeting full disease criteria showed good, near expert-level accuracy, according to findings from two studies presented at the annual European Congress of Rheumatology.

In one study, researchers from Leiden University Medical Center in the Netherlands developed an AI-based method to automatically analyze MR scans of extremities in order to predict early rheumatoid arthritis. The second study involved a Japanese research team that used machine learning to create a model capable of predicting progression from undifferentiated arthritis (UA) to RA. Both approaches would facilitate early diagnosis of RA, enabling timely treatment and improved clinical outcomes.


Lennart Jans, MD, PhD, who was not involved in either study but works with AI-assisted imaging analysis daily as head of clinics in musculoskeletal radiology at Ghent University Hospital and a professor of radiology at Ghent University in Belgium, said that integrating AI into health care poses several challenges that need to be addressed. “There are three main challenges associated with the development and implementation of AI-based tools in clinical practice,” he said. “Firstly, obtaining heterogeneous datasets from different image hardware vendors, diverse racial and ethnic backgrounds, and various ages and genders is crucial for training and testing the AI algorithms. Secondly, AI algorithms need to achieve a predetermined performance level depending on the specific use case. Finally, a regulatory pathway must be followed to obtain the necessary FDA or MDR [medical devices regulation] certification before applying an AI use case in clinical practice.”
 

RA prediction

Yanli Li, the first author of the study and a member of the division of image processing at Leiden University Medical Center, explained the potential benefits of early RA prediction. “If we could determine whether a patient presenting with clinically suspected arthralgia (CSA) or early onset arthritis (EAC) is likely to develop RA in the near future, physicians could initiate treatment earlier, reducing the risk of disease progression.”

Currently, rheumatologists estimate the likelihood of developing RA by visually scoring MR scans using the RAMRIS scoring system. “We decided to explore the use of AI,” Dr. Li explained, “because it could save time, reduce costs and labor, eliminate the need for scoring training, and allow for hypothesis-free discoveries.”

The research team collected MR scans of the hands and feet from Leiden University Medical Center’s radiology department. The dataset consisted of images from 177 healthy individuals, 692 subjects with CSA (including 113 who developed RA), and 969 with EAC (including 447 who developed RA). The images underwent automated preprocessing to remove artifacts and standardize the input for the computer. Subsequently, a deep learning model was trained to predict RA development within a 2-year time frame.

The training process involved several steps. Initially, the researchers pretrained the model to learn anatomy by masking parts of the images and tasking the computer with reconstructing them. Subsequently, the AI was trained to differentiate between the groups (EAC vs. healthy and CSA vs. healthy), then between RA and other disorders. Finally, the AI model was trained to predict RA.

The model's performance was evaluated using the area under the receiver operating characteristic curve (AUROC). The model trained on MR scans of the hands (including the wrist and metacarpophalangeal joints) achieved a mean AUROC of 0.84 for distinguishing EAC from healthy subjects and 0.83 for distinguishing CSA from healthy subjects. The model trained on MR scans of both the hands and feet achieved a mean AUROC of 0.71 for distinguishing RA from non-RA cases in EAC. The model's AUROC for predicting RA from MR scans of the hands was 0.73, closely matching the reported performance of visual scoring by human experts (0.74). Importantly, heat-map analysis suggested that the deep learning model predicts RA on the basis of known inflammatory signals.

“Automatic RA prediction using AI interpretation of MR scans is feasible,” Dr. Li said. “Incorporating additional clinical data will likely further enhance the AI prediction, and the heat maps may contribute to the discovery of new MRI biomarkers for RA development.”

“AI models and engines have achieved near-expertise levels for various use cases, including the early detection of RA on MRI scans of the hands,” said Dr. Jans, the Ghent University radiologist. “We are observing the same progress in AI detection of rheumatic diseases in other imaging modalities, such as radiography, CT, and ultrasound. However, it is important to note that the reported performances often apply to selected cohorts with standardized imaging protocols. The next challenge [for Dr. Li and colleagues, and others] will be to train and test these algorithms using more heterogeneous datasets to make them applicable in real-world settings.”
 

 

 

A ‘transitional phase’ of applying AI techniques

“In a medical setting, as computer scientists, we face unique challenges,” pointed out Berend C. Stoel, MSc, PhD, the senior author of the Leiden study. “Our team consists of approximately 30-35 researchers, primarily electrical engineers or computer scientists, situated within the radiology department of Leiden University Medical Center. Our focus is on image processing, seeking AI-based solutions for image analysis, particularly utilizing deep learning techniques.”

Their objective is to validate this method more broadly, and to achieve that, they require collaboration with other hospitals. Up until now, they have primarily worked with a specific type of MR images, extremity MR scans. These scans are conducted in only a few centers equipped with extremity MR scanners, which can accommodate only hands or feet.

“We are currently in a transitional phase, aiming to apply our methods to standard MR scans, which are more widely available,” Dr. Stoel informed this news organization. “We are engaged in various projects. One project, nearing completion, involves the scoring of early RA, where we train the computer to imitate the actions of rheumatologists or radiologists. We started with a relatively straightforward approach, but AI offers a multitude of possibilities. In the project presented at EULAR, we manipulated the images in a different manner, attempting to predict future events. We also have a parallel project where we employ AI to detect inflammatory changes over time by analyzing sequences of images (MR scans). Furthermore, we have developed AI models to distinguish between treatment and placebo groups. Once the neural network has been trained for this task, we can inquire about the location and timing of changes, thereby gaining insights into the therapy’s response.

“When considering the history of AI, it has experienced both ups and downs. We are currently in a promising phase, but if certain projects fail, expectations might diminish. My hope is that we will indeed revolutionize and enhance disease diagnosis, monitoring, and prediction. Additionally, AI may provide us with additional information that we, as humans, may not be able to extract from these images. However, it is difficult to predict where we will stand in 5-10 years,” he concluded.
 

Predicting disease progression

The second study, which explored the application of AI in predicting the progression of undifferentiated arthritis (UA) to RA, was presented by Takayuki Fujii, MD, PhD, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University’s Graduate School of Medicine in Japan. “Predicting the progression of RA from UA remains an unmet medical need,” he reminded the audience.

Dr. Takayuki Fujii, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University's Graduate School of Medicine in Japan
Dr. Takayuki Fujii
Dr. Takayuki Fujii

Dr. Fujii’s team used data from the KURAMA cohort, a large observational RA cohort from a single center, to develop a machine learning model. The study included a total of 322 patients initially diagnosed with UA. The deep neural network (DNN) model was trained using 24 clinical features that are easily obtainable in routine clinical practice, such as age, sex, C-reactive protein (CRP) levels, and disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The DNN model achieved a prediction accuracy of 85.1% in the training cohort. When the model was applied to validation data from an external dataset consisting of 88 patients from the ANSWER cohort, a large multicenter observational RA cohort, the prediction accuracy was 80%.

“We have developed a machine learning model that can predict the progression of RA from UA using clinical parameters,” Dr. Fujii concluded. “This model has the potential to assist rheumatologists in providing appropriate care and timely intervention for patients with UA.”

“Dr. Fujii presented a fascinating study,” Dr. Jans said. “They achieved an accuracy of 80% when applying a DNN model to predict progression from UA to RA. This level of accuracy is relatively high and certainly promising. However, it is important to consider that a pre-test probability of 30% [for progressing from UA to RA]  is also relatively high, which partially explains the high accuracy. Nonetheless, this study represents a significant step forward in the clinical management of patients with UA, as it helps identify those who may benefit the most from regular clinical follow-up.”

Dr. Li and Dr. Stoel report no relevant financial relationships with industry. Dr. Fujii has received speaking fees from Asahi Kasei, AbbVie, Chugai, and Tanabe Mitsubishi Pharma. Dr. Jans has received speaking fees from AbbVie, UCB, Lilly, and Novartis; he is cofounder of RheumaFinder. The Leiden study was funded by the Dutch Research Council and the China Scholarship Council. The study by Dr. Fujii and colleagues had no outside funding.

A version of this article first appeared on Medscape.com.

Meeting/Event
Publications
Topics
Sections
Meeting/Event
Meeting/Event

MILAN – Two independent efforts to use artificial intelligence (AI) to predict the development of early rheumatoid arthritis (RA) in patients whose signs and symptoms do not meet full disease criteria showed good, near expert-level accuracy, according to findings from two studies presented at the annual European Congress of Rheumatology.

In one study, researchers from Leiden University Medical Center in the Netherlands developed an AI-based method to automatically analyze MR scans of extremities in order to predict early rheumatoid arthritis. The second study involved a Japanese research team that used machine learning to create a model capable of predicting progression from undifferentiated arthritis (UA) to RA. Both approaches would facilitate early diagnosis of RA, enabling timely treatment and improved clinical outcomes.

Lennart Jans, MD, PhD, who was not involved in either study but works with AI-assisted imaging analysis on a daily basis as head of clinics in musculoskeletal radiology at Ghent University Hospital and a professor of radiology at Ghent University in Belgium, said that integrating AI into health care poses several challenges that must be addressed. “There are three main challenges associated with the development and implementation of AI-based tools in clinical practice,” he said. “Firstly, obtaining heterogeneous datasets from different image hardware vendors, diverse racial and ethnic backgrounds, and various ages and genders is crucial for training and testing the AI algorithms. Secondly, AI algorithms need to achieve a predetermined performance level depending on the specific use case. Finally, a regulatory pathway must be followed to obtain the necessary FDA or MDR [medical devices regulation] certification before applying an AI use case in clinical practice.”

RA prediction

Yanli Li, the first author of the study and a member of the division of image processing at Leiden University Medical Center, explained the potential benefits of early RA prediction. “If we could determine whether a patient presenting with clinically suspected arthralgia (CSA) or early onset arthritis (EAC) is likely to develop RA in the near future, physicians could initiate treatment earlier, reducing the risk of disease progression.”

Currently, rheumatologists estimate the likelihood of developing RA by visually scoring MR scans using the RAMRIS scoring system. “We decided to explore the use of AI,” Dr. Li explained, “because it could save time, reduce costs and labor, eliminate the need for scoring training, and allow for hypothesis-free discoveries.”

The research team collected MR scans of the hands and feet from Leiden University Medical Center’s radiology department. The dataset consisted of images from 177 healthy individuals, 692 subjects with CSA (including 113 who developed RA), and 969 with EAC (including 447 who developed RA). The images underwent automated preprocessing to remove artifacts and standardize the input for the computer. Subsequently, a deep learning model was trained to predict RA development within a 2-year time frame.

The training process involved several steps. Initially, the researchers pretrained the model to learn anatomy by masking parts of the images and tasking the computer with reconstructing them. Subsequently, the AI was trained to differentiate between the groups (EAC vs. healthy and CSA vs. healthy), then between RA and other disorders. Finally, the AI model was trained to predict RA.
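The masked-reconstruction pretraining step described above can be sketched in a few lines. This is a minimal, illustrative version of the general technique only; the patch size, masking fraction, and function names are assumptions, not details taken from the Leiden study:

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(image, patch=8, mask_fraction=0.5, rng=rng):
    """Zero out a random subset of non-overlapping patches.

    Returns the masked image plus a boolean map of the hidden pixels;
    a reconstruction model is then trained to predict the original
    values at exactly those positions, forcing it to learn anatomy.
    """
    h, w = image.shape
    masked = image.copy()
    hidden = np.zeros_like(image, dtype=bool)
    rows, cols = h // patch, w // patch
    n_mask = int(rows * cols * mask_fraction)
    for idx in rng.choice(rows * cols, size=n_mask, replace=False):
        r, c = divmod(idx, cols)
        sl = (slice(r * patch, (r + 1) * patch),
              slice(c * patch, (c + 1) * patch))
        masked[sl] = 0.0
        hidden[sl] = True
    return masked, hidden

def reconstruction_loss(predicted, original, hidden):
    """Mean squared error evaluated only on the masked pixels."""
    return float(np.mean((predicted[hidden] - original[hidden]) ** 2))

scan = rng.random((64, 64))  # stand-in for a preprocessed MR slice
masked, hidden = mask_patches(scan)
# A perfect reconstruction drives the loss to zero:
assert reconstruction_loss(scan, scan, hidden) == 0.0
```

After this self-supervised stage, the pretrained weights are reused for the supervised classification steps (group vs. healthy, then RA prediction), which is the staged fine-tuning the researchers describe.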

The accuracy of the model was evaluated using the area under the receiver operator characteristic curve (AUROC). The model that was trained using MR scans of the hands (including the wrist and metacarpophalangeal joints) achieved a mean AUROC of 0.84 for distinguishing EAC from healthy subjects and 0.83 for distinguishing CSA from healthy subjects. The model trained using MR scans of both the hands and feet achieved a mean AUROC of 0.71 for distinguishing RA from non-RA cases in EAC. The accuracy of the model in predicting RA using MR scans of the hands was 0.73, which closely matches the reported accuracy of visual scoring by human experts (0.74). Importantly, the generation and analysis of heat maps suggested that the deep learning model predicts RA based on known inflammatory signals.
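The AUROC values quoted above can be computed directly from a model's scores. A small self-contained sketch using the rank-statistic (Mann-Whitney) formulation, not the study's actual evaluation code, illustrates the metric:

```python
import numpy as np

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive case receives a
    higher score than a randomly chosen negative case (ties count half).
    """
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    if len(pos) == 0 or len(neg) == 0:
        raise ValueError("need both positive and negative cases")
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# A perfectly separating score gives 1.0; an uninformative one gives 0.5.
assert auroc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9]) == 1.0
```

An AUROC of 0.84, as reported for distinguishing EAC from healthy subjects, therefore means the model ranks a randomly chosen case above a randomly chosen control about 84% of the time.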

“Automatic RA prediction using AI interpretation of MR scans is feasible,” Dr. Li said. “Incorporating additional clinical data will likely further enhance the AI prediction, and the heat maps may contribute to the discovery of new MRI biomarkers for RA development.”

“AI models and engines have achieved near-expertise levels for various use cases, including the early detection of RA on MRI scans of the hands,” said Dr. Jans, the Ghent University radiologist. “We are observing the same progress in AI detection of rheumatic diseases in other imaging modalities, such as radiography, CT, and ultrasound. However, it is important to note that the reported performances often apply to selected cohorts with standardized imaging protocols. The next challenge [for Dr. Li and colleagues, and others] will be to train and test these algorithms using more heterogeneous datasets to make them applicable in real-world settings.”

A ‘transitional phase’ of applying AI techniques

“In a medical setting, as computer scientists, we face unique challenges,” pointed out Berend C. Stoel, MSc, PhD, the senior author of the Leiden study. “Our team consists of approximately 30-35 researchers, primarily electrical engineers or computer scientists, situated within the radiology department of Leiden University Medical Center. Our focus is on image processing, seeking AI-based solutions for image analysis, particularly utilizing deep learning techniques.”

Their objective is to validate this method more broadly, which will require collaboration with other hospitals. Until now, they have worked primarily with one type of MR image: extremity MR scans, which are acquired in only a few centers equipped with dedicated extremity MR scanners that can accommodate only the hands or feet.

“We are currently in a transitional phase, aiming to apply our methods to standard MR scans, which are more widely available,” Dr. Stoel informed this news organization. “We are engaged in various projects. One project, nearing completion, involves the scoring of early RA, where we train the computer to imitate the actions of rheumatologists or radiologists. We started with a relatively straightforward approach, but AI offers a multitude of possibilities. In the project presented at EULAR, we manipulated the images in a different manner, attempting to predict future events. We also have a parallel project where we employ AI to detect inflammatory changes over time by analyzing sequences of images (MR scans). Furthermore, we have developed AI models to distinguish between treatment and placebo groups. Once the neural network has been trained for this task, we can inquire about the location and timing of changes, thereby gaining insights into the therapy’s response.

“When considering the history of AI, it has experienced both ups and downs. We are currently in a promising phase, but if certain projects fail, expectations might diminish. My hope is that we will indeed revolutionize and enhance disease diagnosis, monitoring, and prediction. Additionally, AI may provide us with additional information that we, as humans, may not be able to extract from these images. However, it is difficult to predict where we will stand in 5-10 years,” he concluded.

Predicting disease progression

The second study, which explored the application of AI in predicting the progression of undifferentiated arthritis (UA) to RA, was presented by Takayuki Fujii, MD, PhD, assistant professor in the department of advanced medicine for rheumatic diseases at Kyoto University’s Graduate School of Medicine in Japan. “Predicting the progression of RA from UA remains an unmet medical need,” he reminded the audience.

Dr. Fujii’s team used data from the KURAMA cohort, a large observational RA cohort from a single center, to develop a machine learning model. The study included a total of 322 patients initially diagnosed with UA. The deep neural network (DNN) model was trained using 24 clinical features that are easily obtainable in routine clinical practice, such as age, sex, C-reactive protein (CRP) levels, and disease activity score in 28 joints using erythrocyte sedimentation rate (DAS28-ESR). The DNN model achieved a prediction accuracy of 85.1% in the training cohort. When the model was applied to validation data from an external dataset consisting of 88 patients from the ANSWER cohort, a large multicenter observational RA cohort, the prediction accuracy was 80%.
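The accuracies reported here are the fraction of patients whose predicted outcome matches the observed one. A minimal illustration follows; the data are hypothetical, not taken from the KURAMA or ANSWER cohorts:

```python
import numpy as np

def prediction_accuracy(predicted, actual):
    """Fraction of patients whose predicted outcome (progression to RA
    vs. no progression) matches the observed outcome."""
    predicted = np.asarray(predicted, dtype=bool)
    actual = np.asarray(actual, dtype=bool)
    return float((predicted == actual).mean())

# Example: 8 of 10 hypothetical UA patients classified correctly -> 0.80,
# mirroring the 80% reported on the external ANSWER validation set.
pred = [1, 1, 0, 0, 1, 0, 0, 1, 1, 0]
true = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]
assert prediction_accuracy(pred, true) == 0.8
```

Note that, as Dr. Jans points out below in his comments, raw accuracy must be read against the pre-test probability of progression in the cohort.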

“We have developed a machine learning model that can predict the progression of RA from UA using clinical parameters,” Dr. Fujii concluded. “This model has the potential to assist rheumatologists in providing appropriate care and timely intervention for patients with UA.”

“Dr. Fujii presented a fascinating study,” Dr. Jans said. “They achieved an accuracy of 80% when applying a DNN model to predict progression from UA to RA. This level of accuracy is relatively high and certainly promising. However, it is important to consider that a pre-test probability of 30% [for progressing from UA to RA] is also relatively high, which partially explains the high accuracy. Nonetheless, this study represents a significant step forward in the clinical management of patients with UA, as it helps identify those who may benefit the most from regular clinical follow-up.”

Dr. Li and Dr. Stoel report no relevant financial relationships with industry. Dr. Fujii has received speaking fees from Asahi Kasei, AbbVie, Chugai, and Tanabe Mitsubishi Pharma. Dr. Jans has received speaking fees from AbbVie, UCB, Lilly, and Novartis; he is cofounder of RheumaFinder. The Leiden study was funded by the Dutch Research Council and the China Scholarship Council. The study by Dr. Fujii and colleagues had no outside funding.

A version of this article first appeared on Medscape.com.


Article Source

AT EULAR 2023


Two biologics show no difference in axial spondyloarthritis radiographic progression over 2 years


Secukinumab (Cosentyx) and the biosimilar adalimumab-adaz (Hyrimoz) injection proved to have similar efficacy for limiting spinal radiographic progression over a 2-year period in patients with radiographic axial spondyloarthritis (r-axSpA) in the SURPASS study, a phase 3b, randomized controlled trial.

The study, presented at the annual European Congress of Rheumatology, represents the first head-to-head trial comparing the effects of two biologic disease-modifying antirheumatic drugs (bDMARDs) in axSpA. Notably, secukinumab and adalimumab-adaz target different pathways as an interleukin-17A inhibitor and a tumor necrosis factor (TNF) inhibitor, respectively.

Both TNF and IL-17A have been implicated in the pathogenesis of axSpA. Anti-TNF agents and the IL-17A inhibitor secukinumab have demonstrated effectiveness in improving symptoms, signs, and physical function in patients with axSpA and are approved therapies for the disease. However, limited data exist regarding the effect of bDMARDs in slowing radiographic progression, which is a key therapeutic goal in axSpA to prevent irreversible structural damage.

The SURPASS trial, funded by Novartis, enrolled 859 biologic-naive adult patients with moderate to severe r-axSpA. Participants were randomly assigned (1:1:1) to receive secukinumab 150 mg (n = 287), secukinumab 300 mg (n = 286), or adalimumab-adaz 40 mg (n = 286). The primary endpoint was the proportion of patients with no radiographic progression at the 2-year mark (week 104). Radiographic progression was defined as a change from baseline in modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS; range, 0-72) of 0.5 or less. The radiographic assessments were conducted by three independent evaluators who were blinded to treatment and the chronology of images.
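The trial's non-progression criterion can be expressed compactly. The sketch below assumes each patient's mSASSS is averaged across the three blinded readers before the change from baseline is compared with the 0.5 threshold; the trial's exact handling of reader scores is an assumption here:

```python
import numpy as np

def non_progressors(baseline, week104, threshold=0.5):
    """Classify patients as radiographic non-progressors.

    `baseline` and `week104` are (patients x readers) arrays of mSASSS
    values (range 0-72). Each patient's score is averaged across the
    blinded readers, and "no progression" means the change from
    baseline is `threshold` (0.5) or less, per the SURPASS definition.
    Averaging across readers is an illustrative assumption.
    """
    baseline = np.asarray(baseline, dtype=float)
    week104 = np.asarray(week104, dtype=float)
    change = week104.mean(axis=1) - baseline.mean(axis=1)
    return change <= threshold

base = [[10.0, 11.0, 12.0], [20.0, 20.0, 20.0]]   # three readers per patient
follow = [[10.5, 11.0, 12.0], [22.0, 21.5, 21.0]]
flags = non_progressors(base, follow)
# patient 1: mean change ~0.17 -> non-progressor; patient 2: +1.5 -> progressor
assert list(flags) == [True, False]
```

Applied across each study arm, the fraction of `True` flags corresponds to the 65.6%-66.9% non-progression rates reported at week 104.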

Baseline characteristics indicated that the study population (78.5% male; mean age, 42.1 years) had a high risk of radiographic progression. The proportion of patients with no radiographic progression at week 104 was 66.1% in the secukinumab 150-mg arm, 66.9% in the secukinumab 300-mg arm, and 65.6% in the adalimumab-adaz arm. The mean change from baseline in mSASSS was 0.54, 0.55, and 0.72, respectively.

Notably, more than half of the patients (56.9%, 53.8%, and 53.3%, respectively) with at least one syndesmophyte at baseline did not develop new syndesmophytes over the 2-year period. The observed reductions in sacroiliac joint and spinal edema were comparable across all treatment groups. The safety profile of secukinumab and adalimumab-adaz was consistent with their well-established profiles.

No significant differences were observed between the treatment groups in terms of the primary and secondary endpoints. Study presenter and lead author Xenofon Baraliakos, MD, PhD, medical director of the Rheumatism Centre and professor of internal medicine and rheumatology at Ruhr University Bochum (Germany), stated: “Anti-TNF therapy has been considered the gold-standard treatment for axial spondyloarthritis in terms of slowing or halting radiographic progression. Our aim was to investigate whether other modes of action, such as IL-17 inhibition, achieve the same results. The primary hypothesis was that IL-17 inhibition could be even more effective than TNF blockade. However, our data indicate that secukinumab is at least as good as TNF blockers.

“Several risk factors, including high C-reactive protein [CRP] levels, male gender, high disease activity, and baseline radiographic damage (e.g., presence of syndesmophytes), are associated with structural progression,” Dr. Baraliakos explained. “We performed subgroup analyses and found no differences. This is a positive outcome as it suggests that there is no need to select patients based on either secukinumab or anti-TNF agents.”

When making treatment decisions, other factors must be taken into consideration. “Our study specifically examined radiographic progression. The clinical outcomes, indications, and contraindications for anti-TNF agents and secukinumab differ,” Dr. Baraliakos explained. “For instance, secukinumab may be preferred for patients with psoriasis, while adalimumab is more suitable for those with inflammatory bowel disease. Although these bDMARDs are not interchangeable, they have the same positive effect on radiographic progression.”

Not a Definitive Answer About Structural Progression

An open question remains. Alexandre Sepriano, MD, PhD, a rheumatologist at Hospital Egas Moniz and researcher at NOVA Medical School, both in Lisbon, commented: “The study was designed to maximize the chances of detecting a difference, if any, in spinal radiographic progression between secukinumab 150 mg and 300 mg and adalimumab. The included patients had a high risk of progression at baseline; in addition to back pain, they either had elevated CRP or at least one syndesmophyte on spine radiographs. Consequently, baseline structural damage was high [mean mSASSS, 17].”


“After 2 years, no difference was observed in the percentage of patients with no progression across the study arms. This finding does not definitively answer whether bDMARDs can modify structural progression or if secukinumab and adalimumab are equally effective in this regard,” explained Dr. Sepriano, who was not involved in the study. “However, there is good news for patients. Both secukinumab and adalimumab are potent anti-inflammatory drugs that effectively alleviate axial inflammation caused by the disease. This was demonstrated by the reduction in inflammatory scores on MRI in the SURPASS study. It aligns with robust evidence that both IL-17 inhibitors and TNF inhibitors are effective in improving symptoms in individuals with axSpA.

“Researchers continue to make significant efforts to understand how axial inflammation contributes to pathological new bone formation in axSpA,” Dr. Sepriano continued. “Understanding these mechanisms can guide future research aimed at interfering with disease progression. Furthermore, the use of new methods to quantify structural progression in axSpA, such as low-dose CT, which has shown greater sensitivity to change than traditional methods, can pave the way for new studies with fewer patients and shorter follow-up periods, thereby increasing the likelihood of detecting treatment effects.”

Dr. Baraliakos has received speaking and consulting fees and grant/research support from AbbVie, Bristol-Myers Squibb, Celgene, Chugai, Merck Sharp & Dohme, Novartis, Pfizer, and UCB. Dr. Sepriano has received speaking and/or consulting fees from AbbVie, Novartis, UCB, and Lilly. The trial was sponsored by Novartis.

A version of this article first appeared on Medscape.com.

AT EULAR 2023


Why do GI symptoms persist in some children with celiac disease?

Article Type
Changed
Tue, 01/24/2023 - 14:46

FROM WORLD JOURNAL OF GASTROENTEROLOGY

One year after a diagnosis of celiac disease, nearly one in five children present with functional gastrointestinal disorders (FGIDs) despite following a gluten-free diet (GFD). The development of FGIDs may be linked to caloric intake and the percentage of dietary fat, but it does not differ between a GFD based on processed foods and one based on natural products.

These are the main findings of a study run jointly by the “Federico II” University of Naples and the “Luigi Vanvitelli” University of Campania, the results of which were published in the World Journal of Gastroenterology.

Unlike previous studies, this study used criteria (the Rome IV criteria) that allowed investigators to diagnose FGIDs even when other organic diseases, such as celiac disease or chronic inflammatory bowel disease, were present. Existing evidence shows that adults with celiac disease are at increased risk for functional abdominal pain, even when they adhere well to a GFD. The researchers at the University of Campania wanted to determine the prevalence of FGIDs in the pediatric age group, an area that has been poorly explored.

The study authors enrolled 104 pediatric patients (aged 1-18 years) who had been diagnosed with celiac disease. The patients were randomly divided into two groups. Group A (n = 55) received a controlled GFD with processed foods (diet 1), and group B (n = 49) received a controlled GFD with > 60% natural products (diet 2). The presence of FGIDs was assessed at diagnosis (T0) and after 12 months (T1), and any potential link to the type of diet was analyzed.

The number of symptomatic children at enrollment was 30 of 55 (54.5%) in group A and 25 of 49 (51%) in group B. After 12 months, despite negative serology for celiac disease, the prevalence of FGIDs was 10 of 55 (18%) in group A and 8 of 49 (16.3%) in group B, with no statistically significant difference between the two groups at T1. The most common disorder was functional constipation, followed by postprandial distress syndrome. At T1, macro- and micronutrient intake was similar between the two groups, with no significant differences on nutrient analysis. In both groups, however, the prevalence of FGIDs was lower in patients consuming fewer calories (odds ratio [OR], 0.99; 95% confidence interval [CI], 0.99-1.00) and less fat (OR, 0.33; 95% CI, 0.65-0.95); the latter association fell just short of statistical significance (P = .055).
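As an illustration, the reported group-level prevalences can be checked with a standard two-proportion z-test using only the published counts. This is a back-of-the-envelope sketch, not the authors' own analysis, which modeled FGID risk against dietary intake:

```python
import math

# Published counts: FGIDs at 12 months in 10/55 children (group A,
# processed-food GFD) and 8/49 children (group B, natural-product GFD).
x1, n1 = 10, 55
x2, n2 = 8, 49

p1, p2 = x1 / n1, x2 / n2          # observed prevalences, ~0.182 and ~0.163
p_pool = (x1 + x2) / (n1 + n2)     # pooled prevalence under the null hypothesis

# Standard error of the difference under the null, then the z statistic
# for the difference in proportions.
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(round(abs(z), 2))  # → 0.25, well below 1.96, i.e., not significant at P < .05
```

The tiny z value is consistent with the authors' conclusion that the two diets did not differ in FGID prevalence at T1.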

“This is the first study to show that the presence of functional GI symptoms in children with celiac disease on a GFD is possibly related to higher caloric and fat intake,” wrote the study authors. “It remains to be determined whether the risk is due to the persistence of a chronic inflammatory process or to nutritional factors. Long-term monitoring studies will assist in determining the natural history of these functional symptoms.”

The study authors reported having no relevant financial conflicts.

This article was translated from Univadis Italy and a version appeared on Medscape.com.

