Failure to rescue occurs more often among women of color

Article Type
Changed
Fri, 04/23/2021 - 12:39

 

In the United States, the rate of mortality caused by severe maternal morbidity has improved over time, but failure to rescue is significantly more common among racial and ethnic minorities.

These failures are a “major contributing factor” to the disproportionately higher rate of maternal mortality among women of color, reported lead author Jean Guglielminotti, MD, PhD, of Columbia University, New York, and colleagues.

“Racial and ethnic disparities in severe maternal morbidity are a growing public health concern in the United States,” the investigators wrote in Obstetrics & Gynecology.

“The reported incidence of severe maternal morbidity is twofold to threefold higher among Black American women, compared with non-Hispanic White women; and although the difference is less pronounced, the incidence of severe maternal morbidity also is higher among Hispanic, Asian and Pacific Islander, and Native American women.”

The ensuing disproportionate risk of maternal mortality may be further exacerbated by disparities among hospitals, according to the investigators. They noted that non-Hispanic White women tend to give birth in different hospitals than racial and ethnic minorities, and that the hospitals serving people of color “are characterized by lower performance on maternal safety indicators.”

Even within hospitals that most often serve minorities, severe maternal morbidity is more common among women of color than women who are White, they added.

“However, the simple severe maternal morbidity rate is insufficient to assess hospital performance and should be complemented with the rate of failure to rescue,” wrote Dr. Guglielminotti and colleagues.
 

Measuring failure to rescue across racial and ethnic groups

According to the investigators, the failure-to-rescue rate shifts the focus from complications themselves – which can occur even when care is appropriate and may stem from patient characteristics – to the hospital’s response to those complications.

Using this metric, a 2016 study by Friedman and colleagues, which included data from 1998 to 2010, showed failure to rescue was more common among Hispanic and non-Hispanic Black women than White women.

The present study built upon these findings with data from almost 74 million delivery hospitalizations in the National Inpatient Sample (1999-2017). The population included 993,864 women with severe maternal morbidity, among whom 4,328 died.

Overall, the failure-to-rescue rate decreased over the course of the study from 13.2% in 1999-2000 to 4.5% in 2017 (P < .001).

Yet racial and ethnic inequities were apparent.

Compared with White women, non-Hispanic Black women had a significantly higher failure-to-rescue rate ratio (RR, 1.79; 95% CI, 1.77-1.81), as did Hispanic women (RR, 1.08; 95% CI, 1.06-1.09), women of other non-White racial/ethnic backgrounds (RR, 1.39; 95% CI, 1.37-1.41), and women documented without racial/ethnic designations (RR, 1.43; 95% CI, 1.42-1.45).

“Failure to rescue from severe maternal morbidity remains a major contributing factor to the excess maternal mortality in racial and ethnic minority women in the United States,” the investigators concluded. “This finding underscores the need to identify factors accounting for these disparities and develop hospital-based interventions to reduce excess maternal mortality in racial and ethnic minority women.”
 

Striving for progress through systemic change

According to Eve Espey, MD, MPH, of the University of New Mexico, Albuquerque, “this study adds to the literature demonstrating that structural racism and implicit bias have profound negative impacts,” which “has implications for action.”

“We must increase efforts to improve maternal safety, including the rollout of Alliance for Innovation on Maternal Health [AIM] bundles through statewide perinatal quality collaboratives,” Dr. Espey said. “AIM bundle implementation must focus on the context of health inequities related to racism and bias. Similarly, we must consider large scale public policy changes building on the Affordable Care Act, such as universal health coverage throughout the life span, [which] equitably increases access to quality health care for all.”

Constance Bohon, MD, of Sibley Memorial Hospital, Washington, offered a similar viewpoint, and suggested that further analyses could reveal the impacts of systemic changes, thereby guiding future interventions.

“It would be interesting to determine if declines in failure to rescue rates were greatest in states that implemented AIM safety bundles [in 2012] as compared with the states that did not,” Dr. Bohon said. “The same assessment could be made with a comparison between the states that did and those that did not approve the Medicaid expansion [in 2014]. Other beneficial data would be a comparison of the failure-to-rescue rates in hospitals that provide the same obstetrical level of care. Further studies need to be done in order to identify factors that have the greatest impact on the failure-to-rescue rate. Subsequently, proposals can be suggested for actions that can be taken to decrease the excess maternal mortality in racial and ethnic minorities.”
 

Comparing the U.S. with the rest of the world

In an accompanying editorial, Marian F. MacDorman, PhD, of the University of Maryland, College Park, and Eugene Declercq, PhD, of Boston University, put the findings in a global context.

They noted that, in the United States over the past 2 decades, the rate of maternal mortality has either remained flat or increased, depending on study methodology; however, the relative state of affairs between the United States and the rest of the world is more straightforward.

“What is clear is that U.S. maternal mortality did not decline from 2000 to 2018,” wrote Dr. MacDorman and Dr. Declercq. “This contrasts with World Health Organization data showing that maternal mortality declined by 38% worldwide and by 53% in Europe from 2000 to 2017. In fact, North America was the only world region to not show substantial declines in maternal mortality during the period, and U.S. maternal mortality rates are nearly twice those in Europe.”

Within the U.S., these shortcomings are felt most acutely among racial and ethnic minorities, they noted, as the present study suggests.

“The U.S. is still plagued by wide racial disparities, with similar or larger Black-White maternal mortality disparities in 2018 than existed in the 1940s,” they wrote. “Thus, any euphoria generated by the lack of increase in maternal mortality (if accurate) must be set in the context of worldwide improvements, in which the U.S. is an outlier with no improvement. The U.S. can and should do better!”

To this end, Dr. MacDorman and Dr. Declercq wrote, “additional training and vigilance among clinicians can help to avert these largely preventable deaths. In addition, applying this same rigor to preventing deaths that occur in the community before and after birth, combined with a focus on social determinants among women during the reproductive years, will be essential to lowering U.S. maternal mortality overall and eliminating longstanding racial inequities.”

The study received no external funding. The investigators reported no conflicts of interest.

FROM OBSTETRICS & GYNECOLOGY


Stethoscope and Doppler may outperform newer intrapartum fetal monitoring techniques

Article Type
Changed
Thu, 04/15/2021 - 14:08

For intrapartum fetal surveillance, the old way may be the best way, according to a meta-analysis involving more than 118,000 patients.

Intermittent auscultation with a Pinard stethoscope and handheld Doppler was associated with a significantly lower risk of emergency cesarean deliveries than newer monitoring techniques without jeopardizing maternal or neonatal outcomes, reported lead author Bassel H. Al Wattar, MD, PhD, of University of Warwick, Coventry, England, and University College London Hospitals, and colleagues.

“Over the last 50 years, several newer surveillance methods have been evaluated, with varied uptake in practice,” the investigators wrote in the Canadian Medical Association Journal, noting that cardiotocography (CTG) is the most common method for high-risk pregnancies, typically coupled with at least one other modality, such as fetal scalp pH analysis (FBS), fetal pulse oximetry (FPO), or fetal heart electrocardiogram (STAN).

“Despite extensive investment in clinical research, the overall effectiveness of such methods in improving maternal and neonatal outcomes remains debatable as stillbirth rates have plateaued worldwide, while cesarean delivery rates continue to rise,” the investigators wrote. Previous meta-analyses have relied upon head-to-head comparisons of monitoring techniques and did not take into account effects on maternal and neonatal outcomes.

To address this knowledge gap, Dr. Al Wattar and colleagues conducted the present systematic review and meta-analysis, ultimately including 33 trials with 118,863 women who underwent intrapartum fetal surveillance, dating back to 1976. Ten surveillance types were evaluated, including intermittent auscultation with Pinard stethoscope and handheld Doppler, CTG with or without computer-aided decision models (cCTG), and CTG or cCTG combined with one or two other techniques, such as FBS, FPO, and STAN.

This revealed that intermittent auscultation outperformed all other techniques in terms of emergency cesarean deliveries and emergency cesarean deliveries because of fetal distress.

Specifically, intermittent auscultation significantly reduced the risk of emergency cesarean deliveries, compared with CTG (relative risk, 0.83; 95% confidence interval, 0.72-0.97), CTG-FBS (RR, 0.71; 95% CI, 0.63-0.80), CTG-lactate (RR, 0.77; 95% CI, 0.64-0.92), and FPO-CTG-FBS (RR, 0.81; 95% CI, 0.67-0.99). Conversely, compared with intermittent auscultation, STAN-CTG-FBS and cCTG-FBS raised the risk of emergency cesarean deliveries by 17% and 21%, respectively.

Compared with other modalities, the superiority of intermittent auscultation was even more pronounced in terms of emergency cesarean deliveries because of fetal distress. Intermittent auscultation reduced risk by 43% compared with CTG, 66% compared with CTG-FBS, 58% compared with FPO-CTG, and 17% compared with FPO-CTG-FBS. Conversely, compared with intermittent auscultation, STAN-CTG and cCTG-FBS increased the risk of emergency cesarean deliveries because of fetal distress by 39% and 80%, respectively.

Further analysis showed that all types of surveillance had similar effects on neonatal outcomes, such as admission to a neonatal unit and neonatal acidemia. Although a combination of STAN or FPO with CTG-FBS “seemed to improve the likelihood of reducing adverse neonatal outcomes,” the investigators noted that these differences were not significant in network meta-analysis.

“New fetal surveillance methods did not improve neonatal outcomes or reduce unnecessary maternal interventions,” Dr. Al Wattar and colleagues concluded. “Further evidence is needed to evaluate the effects of fetal pulse oximetry and fetal heart electrocardiography in labor.”

Courtney Rhoades, DO, MBA, FACOG, medical director of labor and delivery and assistant professor of obstetrics and gynecology at the University of Florida, Jacksonville, suggested that the meta-analysis supports the safety of intermittent auscultation, but the results may not be entirely applicable to real-world practice.

“It is hard, in practice, to draw the same conclusion that they do in the study that the newer methods may cause too many emergency C-sections because our fetal monitoring equipment, methodology for interpretation, ability to do emergency C-sections and maternal risk factors have changed in the last 50 years,” Dr. Rhoades said. “Continuous fetal monitoring gives more data points during labor, and with more data points, there are more opportunities to interpret and act – either correctly or incorrectly. As they state in the study, the decision to do a C-section is multifactorial.”

Dr. Rhoades, who recently authored a textbook chapter on intrapartum monitoring and fetal assessment, recommended that intermittent auscultation be reserved for low-risk patients.

“The American College of Obstetricians and Gynecologists has endorsed intermittent auscultation for low-risk pregnancies and this study affirms their support,” Dr. Rhoades said. “Women with a low-risk pregnancy can benefit from intermittent auscultation because it allows them more autonomy and movement during labor so it should be offered to our low-risk patients.”

Dr. Al Wattar reported a personal Academic Clinical Lectureship from the U.K. National Institute for Health Research. Dr. Khan disclosed funding from the Beatriz Galindo Program Grant given to the University of Granada by the Ministry of Science, Innovation, and Universities of the Spanish Government.

Publications
Topics
Sections

For intrapartum fetal surveillance, the old way may be the best way, according to a meta-analysis involving more than 118,000 patients.

Intermittent auscultation with a Pinard stethoscope and handheld Doppler was associated with a significantly lower risk of emergency cesarean deliveries than newer monitoring techniques without jeopardizing maternal or neonatal outcomes, reported lead author Bassel H. Al Wattar, MD, PhD, of University of Warwick, Coventry, England, and University College London Hospitals, and colleagues.

“Over the last 50 years, several newer surveillance methods have been evaluated, with varied uptake in practice,” the investigators wrote in the Canadian Medical Association Journal, noting that cardiotocography (CTG) is the most common method for high-risk pregnancies, typically coupled with at least one other modality, such as fetal scalp pH analysis (FBS), fetal pulse oximetry (FPO), or fetal heart electrocardiogram (STAN).

“Despite extensive investment in clinical research, the overall effectiveness of such methods in improving maternal and neonatal outcomes remains debatable as stillbirth rates have plateaued worldwide, while cesarean delivery rates continue to rise,” the investigators wrote. Previous meta-analyses have relied upon head-to-head comparisons of monitoring techniques and did not take into account effects on maternal and neonatal outcomes.


For intrapartum fetal surveillance, the old way may be the best way, according to a meta-analysis involving more than 118,000 patients.

Intermittent auscultation with a Pinard stethoscope and handheld Doppler was associated with a significantly lower risk of emergency cesarean delivery than newer monitoring techniques, without jeopardizing maternal or neonatal outcomes, reported lead author Bassel H. Al Wattar, MD, PhD, of the University of Warwick, Coventry, England, and University College London Hospitals, and colleagues.

“Over the last 50 years, several newer surveillance methods have been evaluated, with varied uptake in practice,” the investigators wrote in the Canadian Medical Association Journal, noting that cardiotocography (CTG) is the most common method for high-risk pregnancies, typically coupled with at least one other modality, such as fetal scalp pH analysis (FBS), fetal pulse oximetry (FPO), or fetal heart electrocardiogram (STAN).

“Despite extensive investment in clinical research, the overall effectiveness of such methods in improving maternal and neonatal outcomes remains debatable as stillbirth rates have plateaued worldwide, while cesarean delivery rates continue to rise,” the investigators wrote. Previous meta-analyses have relied upon head-to-head comparisons of monitoring techniques and did not take into account effects on maternal and neonatal outcomes.

To address this knowledge gap, Dr. Al Wattar and colleagues conducted the present systematic review and meta-analysis, ultimately including 33 trials with 118,863 women who underwent intrapartum fetal surveillance, with trials dating back to 1976. Ten surveillance types were evaluated, including intermittent auscultation with a Pinard stethoscope and handheld Doppler, CTG with or without computer-aided decision models (cCTG), and CTG or cCTG combined with one or two other techniques, such as FBS, FPO, and STAN.

This revealed that intermittent auscultation outperformed all other techniques in terms of emergency cesarean deliveries and emergency cesarean deliveries because of fetal distress.

Specifically, intermittent auscultation significantly reduced risk of emergency cesarean deliveries, compared with CTG (relative risk, 0.83; 95% confidence interval, 0.72-0.97), CTG-FBS (RR, 0.71; 95% CI, 0.63-0.80), CTG-lactate (RR, 0.77; 95% CI, 0.64-0.92), and FPO-CTG-FBS (RR, 0.81; 95% CI, 0.67-0.99). Conversely, compared with intermittent auscultation, STAN-CTG-FBS and cCTG-FBS raised risk of emergency cesarean deliveries by 17% and 21%, respectively.

Compared with other modalities, the superiority of intermittent auscultation was even more pronounced for emergency cesarean deliveries because of fetal distress. Intermittent auscultation reduced risk by 43% compared with CTG, 66% compared with CTG-FBS, 58% compared with FPO-CTG, and 17% compared with FPO-CTG-FBS. Conversely, compared with intermittent auscultation, STAN-CTG and cCTG-FBS increased risk of emergency cesarean deliveries because of fetal distress by 39% and 80%, respectively.
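For readers translating between the two ways these results are reported, a relative risk maps to a percent change in risk by simple arithmetic; the sketch below is illustrative only and is not part of the authors' analysis:

```python
def percent_risk_change(rr: float) -> float:
    """Convert a relative risk into a percent change in risk.

    RR < 1 means the comparator reduces risk; RR > 1 means it raises it.
    """
    return (rr - 1.0) * 100.0

# Intermittent auscultation vs. CTG: RR 0.83 is a 17% relative reduction.
print(round(percent_risk_change(0.83)))   # -17
# cCTG-FBS vs. intermittent auscultation: RR 1.21 is a 21% relative increase.
print(round(percent_risk_change(1.21)))   # 21
```

The same rule explains why an RR of 0.71 for CTG-FBS appears above as a 29% reduction in one framing and a higher risk for the comparator in the other.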

Further analysis showed that all types of surveillance had similar effects on neonatal outcomes, such as admission to the neonatal unit and neonatal acidemia. Although a combination of STAN or FPO with CTG-FBS “seemed to improve the likelihood of reducing adverse neonatal outcomes,” the investigators noted that these differences were not significant in network meta-analysis.

“New fetal surveillance methods did not improve neonatal outcomes or reduce unnecessary maternal interventions,” Dr. Al Wattar and colleagues concluded. “Further evidence is needed to evaluate the effects of fetal pulse oximetry and fetal heart electrocardiography in labor.”


Courtney Rhoades, DO, MBA, FACOG, medical director of labor and delivery and assistant professor of obstetrics and gynecology at the University of Florida, Jacksonville, suggested that the meta-analysis supports the safety of intermittent auscultation, but the results may not be entirely applicable to real-world practice.

“It is hard, in practice, to draw the same conclusion that they do in the study that the newer methods may cause too many emergency C-sections because our fetal monitoring equipment, methodology for interpretation, ability to do emergency C-sections and maternal risk factors have changed in the last 50 years,” Dr. Rhoades said. “Continuous fetal monitoring gives more data points during labor, and with more data points, there are more opportunities to interpret and act – either correctly or incorrectly. As they state in the study, the decision to do a C-section is multifactorial.”

Dr. Rhoades, who recently authored a textbook chapter on intrapartum monitoring and fetal assessment, recommended that intermittent auscultation be reserved for low-risk patients.

“The American College of Obstetricians and Gynecologists has endorsed intermittent auscultation for low-risk pregnancies and this study affirms their support,” Dr. Rhoades said. “Women with a low-risk pregnancy can benefit from intermittent auscultation because it allows them more autonomy and movement during labor so it should be offered to our low-risk patients.”

Dr. Al Wattar reported a personal Academic Clinical Lectureship from the U.K. National Institute for Health Research. Dr. Khan disclosed funding from the Beatriz Galindo Program Grant given to the University of Granada by the Ministry of Science, Innovation, and Universities of the Spanish Government.


FROM THE CANADIAN MEDICAL ASSOCIATION JOURNAL


Low-risk adenomas may not elevate risk of CRC-related death

Article Type
Changed
Wed, 05/26/2021 - 13:41

Unlike high-risk adenomas (HRAs), low-risk adenomas (LRAs) have a minimal association with risk of metachronous colorectal cancer (CRC), and no relationship with odds of metachronous CRC-related mortality, according to a meta-analysis of more than 500,000 individuals.


These findings should impact surveillance guidelines and make follow-up the same for individuals with LRAs or no adenomas, reported lead author Abhiram Duvvuri, MD, of the division of gastroenterology and hepatology at the University of Kansas, Kansas City, and colleagues. Currently, the United States Multi-Society Task Force on Colorectal Cancer advises colonoscopy intervals of 3 years for individuals with HRAs, 7-10 years for those with LRAs, and 10 years for those without adenomas.

“The evidence supporting these surveillance recommendations for clinically relevant endpoints such as cancer and cancer-related deaths among patients who undergo adenoma removal, particularly LRA, is minimal, because most of the evidence was based on the surrogate risk of metachronous advanced neoplasia,” the investigators wrote in Gastroenterology.

To provide more solid evidence, the investigators performed a systematic review and meta-analysis, ultimately analyzing 12 studies with data from 510,019 individuals at a mean age of 59.2 years. All studies reported rates of LRA, HRA, or no adenoma at baseline colonoscopy, plus incidence of metachronous CRC and/or CRC-related mortality. With these data, the investigators determined incidence of metachronous CRC and CRC-related mortality for each of the adenoma groups and also compared these incidences per 10,000 person-years of follow-up across groups.

After a mean follow-up of 8.5 years, patients with HRAs had a significantly higher rate of CRC compared with patients who had LRAs (13.81 vs. 4.5 per 10,000 person-years; odds ratio, 2.35; 95% confidence interval, 1.72-3.20) or no adenomas (13.81 vs. 3.4; OR, 2.92; 95% CI, 2.31-3.69). Similarly, but to a lesser degree, LRAs were associated with significantly greater risk of CRC than no adenomas (4.5 vs. 3.4; OR, 1.26; 95% CI, 1.06-1.51).

Data on CRC-related mortality further supported these minimal risk profiles because LRAs did not significantly increase the risk of CRC-related mortality compared with no adenomas (OR, 1.15; 95% CI, 0.76-1.74). In contrast, HRAs were associated with significantly greater risk of CRC-related death than both LRAs (OR, 2.48; 95% CI, 1.30-4.75) and no adenomas (OR, 2.69; 95% CI, 1.87-3.87).
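The two statistics used above can be reproduced from raw counts. The sketch below uses made-up numbers (not the study's data) purely to show the arithmetic behind an incidence rate per 10,000 person-years and an odds ratio from a 2x2 table:

```python
def incidence_per_10k(events: int, person_years: float) -> float:
    """Incidence rate expressed per 10,000 person-years of follow-up."""
    return events / person_years * 10_000

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Odds ratio for a 2x2 table:
       exposed:   a events, b non-events
       unexposed: c events, d non-events
    """
    return (a / b) / (c / d)

# Hypothetical cohort: 55 CRCs over 125,000 person-years.
print(round(incidence_per_10k(55, 125_000), 1))   # 4.4
# Hypothetical 2x2 table: odds 30/970 vs. 12/988.
print(round(odds_ratio(30, 970, 12, 988), 2))     # 2.55
```

Note that with rare outcomes such as these, odds ratios closely approximate relative risks, which is why the OR of 2.92 for HRAs tracks the roughly fourfold gap in raw rates only loosely.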

The investigators acknowledged certain limitations of their study. For one, there were no randomized controlled trials in the meta-analysis, which can introduce bias. Loss of patients to follow-up is also possible; however, the investigators noted that a robust sample of patients was available for study outcomes all the same. There is also risk of comparability bias in that the HRA and LRA groups underwent more colonoscopies; however, the duration of follow-up and timing of last colonoscopy were similar among groups. Lastly, it’s possible the patient sample wasn’t representative because of healthy screenee bias, but the investigators compared groups against the general population to minimize that bias.

The investigators also highlighted several strengths of their study that make their findings more reliable than those of past meta-analyses. For one, their study is the largest of its kind to date, and involved a significantly higher number of patients with LRA and no adenomas. Also, in contrast with previous studies, CRC and CRC-related mortality were evaluated rather than advanced adenomas, they noted.

“Furthermore, we also analyzed CRC incidence and mortality in the LRA group compared with the general population, with the [standardized incidence ratio] being lower and [standardized mortality ratio] being comparable, confirming that it is indeed a low-risk group,” they wrote.

Considering these strengths and the nature of their findings, Dr. Duvvuri and colleagues called for a more conservative approach to CRC surveillance among individuals with LRAs, and more research to investigate extending colonoscopy intervals even further.

“We recommend that the interval for follow-up colonoscopy should be the same in patients with LRAs or no adenomas but that the HRA group should have a more frequent surveillance interval for CRC surveillance compared with these groups,” they concluded. “Future studies should evaluate whether surveillance intervals could be lengthened beyond 10 years in the no-adenoma and LRA groups after an initial high-quality index colonoscopy.”

One author disclosed affiliations with Erbe, Cdx Labs, Aries, and others. Dr. Duvvuri and the remaining authors disclosed no conflicts.



FROM GASTROENTEROLOGY


Surveillance endoscopy in Barrett’s may perform better than expected

Article Type
Changed
Wed, 05/26/2021 - 13:41

For patients with Barrett’s esophagus, surveillance endoscopy detects high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) more often than previously reported, according to a retrospective analysis of more than 1,000 patients.


Neoplasia detection rate, defined as findings on initial surveillance endoscopy, was also lower than that observed in past studies, according to lead author Lovekirat Dhaliwal, MBBS, of Mayo Clinic, Rochester, Minn., and colleagues.

This study’s findings may help define quality control benchmarks for endoscopic surveillance of Barrett’s esophagus, the investigators wrote in Clinical Gastroenterology and Hepatology. Accurate metrics are needed, they noted, because almost 9 out of 10 patients with Barrett’s esophagus present with EAC outside of a surveillance program, which “may represent missed opportunities at screening.” At the same time, a previous study by the investigators and one from another group have suggested that 25%-33% of HGD/EAC cases may go undetected by initial surveillance endoscopy.

“Dysplasia detection in [Barrett’s esophagus] is challenging because of its patchy distribution and often subtle appearance,” the investigators noted. “Lack of compliance with recommended biopsy guidelines is also well-documented.”

On the other hand, Dr. Dhaliwal and colleagues suggested that previous studies may not accurately portray community practice and, therefore, have limited value in determining quality control metrics. A 2019 review, for instance, reported a neoplasia detection rate of 7% among patients with Barrett’s esophagus, but this finding “is composed of data from largely referral center cohorts with endoscopy performed by experienced academic gastroenterologists,” they wrote, which may lead to overestimation of such detection.

To better characterize this landscape, the investigators conducted a retrospective analysis involving 1,066 patients with Barrett’s esophagus who underwent initial surveillance endoscopy between 1991 and 2019. Approximately three out of four surveillance endoscopies (77%) were performed by gastroenterologists, while the remaining were performed by nongastroenterologists, such as family practitioners or surgeons. About 60% of patients were adequately biopsied according to the Seattle protocol.

Analysis revealed that the neoplasia detection rate was 4.9% (95% confidence interval, 3.8%-6.4%), which is less than the previously reported rate of 7%. HGD was more common than EAC (33 cases vs. 20 cases). Out of 1,066 patients, 391 without neoplasia on initial endoscopy underwent repeat endoscopy within a year. Among these individuals, HGD or EAC was detected in eight patients, which suggests that 13% of diagnoses were missed on initial endoscopy, a rate well below the previously reported range of 25%-33%.
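These figures can be reconstructed from the counts in the paragraph above. The sketch below assumes 53 index cases (33 HGD plus 20 EAC) among 1,066 patients and uses a Wilson score interval; the authors' exact counts and CI method are not stated, so the point estimate and bounds may differ slightly from the reported values:

```python
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

detected, repeat_found, n = 53, 8, 1066   # assumed: 33 HGD + 20 EAC on index endoscopy
lo, hi = wilson_ci(detected, n)
print(f"detection rate {detected / n:.1%}, 95% CI {lo:.1%}-{hi:.1%}")
# Missed rate = cases found only on repeat endoscopy / all cases ultimately found.
print(f"missed {repeat_found / (detected + repeat_found):.0%}")   # 8 / 61 -> 13%
```

Under these assumptions the interval works out to roughly 3.8%-6.4%, consistent with the reported CI, and the missed-diagnosis figure of 13% falls out of 8 repeat-endoscopy cases against 61 total.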
 

Technology challenged by technique

The neoplasia detection rate “appeared to increase significantly from 1991 to 2019 on univariate analysis (particularly after 2000), but this was not observed on multivariate analysis,” the investigators wrote. “This was despite the introduction of high-definition monitors and high-resolution endoscopes in subsequent years.

“This may suggest that in a low dysplasia prevalence setting, basic techniques such as careful white light inspection of the [Barrett’s esophagus] mucosa along with targeted and Seattle protocol biopsies may be more important,” they noted.

The importance of technique may be further supported by another finding: Gastroenterologists detected neoplasia almost four times as often as did nongastroenterologists (odds ratio, 3.6; P = .0154).

“This finding is novel and may be due to additional training in endoscopy, lesion recognition, and familiarity with surveillance guidelines in gastroenterologists,” the investigators wrote. “If this finding is replicated in other cohorts, it may support recommendations for the performance of surveillance by endoscopists trained in gastrointestinal endoscopy and well-versed in surveillance guidelines.

“[U]sing neoplasia detection as a quality metric coupled with outcome measures such as missed dysplasia rates could improve adherence to established biopsy protocols and improve the quality of care to patients,” they wrote. “Ultimately, this can be an opportunity to develop a high-value, evidence-based quality metric in [Barrett’s esophagus] surveillance.”

The authors acknowledged some limitations to their study. Its retrospective design meant no one biopsy protocol could be adopted across the entire study period; however, the results were “unchanged” when restricted to the period after introduction of the Seattle protocol in 2000. The study’s long period could have left results susceptible to changing guidelines, but the neoplasia detection rates remained relatively stable over time.

“Because prior reports consisted largely of tertiary care center cohorts, our findings may reflect the absence of referral bias and be more generalizable,” the investigators wrote.

The study was funded by the National Institute on Aging and the National Cancer Institute. The investigators disclosed relationships with Celgene, Nine Point Medical, Takeda, and others.

Publications
Topics
Sections

For patients with Barrett’s esophagus, surveillance endoscopy detects high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) more often than previously reported, according to a retrospective analysis of more than 1,000 patients.

Dr. Lovekirat Dhaliwal, division of gastroenterology and hepatology, Mayo Clinic, Rochester, Minn.
Dr. Lovekirat Dhaliwal

Neoplasia detection rate, defined as findings on initial surveillance endoscopy, was also lower than that observed in past studies, according to lead author Lovekirat Dhaliwal, MBBS, of Mayo Clinic, Rochester, Minn., and colleagues.

This study’s findings may help define quality control benchmarks for endoscopic surveillance of Barrett’s esophagus, the investigators wrote in Clinical Gastroenterology and Hepatology. Accurate metrics are needed, they noted, because almost 9 out of 10 patients with Barrett’s esophagus present with EAC outside of a surveillance program, which “may represent missed opportunities at screening.” At the same time, a previous study by the investigators and one from another group, have suggested that 25%-33% of HGD/EAC cases may go undetected by initial surveillance endoscopy.

“Dysplasia detection in [Barrett’s esophagus] is challenging because of its patchy distribution and often subtle appearance,” the investigators noted. “Lack of compliance with recommended biopsy guidelines is also well-documented.”

For patients with Barrett’s esophagus, initial surveillance endoscopy misses high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) less often than previously reported, according to a retrospective analysis of more than 1,000 patients.

Dr. Lovekirat Dhaliwal, division of gastroenterology and hepatology, Mayo Clinic, Rochester, Minn.

The neoplasia detection rate, defined as the rate of HGD or EAC detected on initial surveillance endoscopy, was also lower than that observed in past studies, according to lead author Lovekirat Dhaliwal, MBBS, of Mayo Clinic, Rochester, Minn., and colleagues.

This study’s findings may help define quality control benchmarks for endoscopic surveillance of Barrett’s esophagus, the investigators wrote in Clinical Gastroenterology and Hepatology. Accurate metrics are needed, they noted, because almost 9 out of 10 patients with Barrett’s esophagus present with EAC outside of a surveillance program, which “may represent missed opportunities at screening.” At the same time, a previous study by the investigators and one from another group have suggested that 25%-33% of HGD/EAC cases may go undetected by initial surveillance endoscopy.

“Dysplasia detection in [Barrett’s esophagus] is challenging because of its patchy distribution and often subtle appearance,” the investigators noted. “Lack of compliance with recommended biopsy guidelines is also well-documented.”

On the other hand, Dr. Dhaliwal and colleagues suggested that previous studies may not accurately portray community practice and, therefore, have limited value in determining quality control metrics. A 2019 review, for instance, reported a neoplasia detection rate of 7% among patients with Barrett’s esophagus, but this finding “is composed of data from largely referral center cohorts with endoscopy performed by experienced academic gastroenterologists,” they wrote, which may lead to overestimation of such detection.

To better characterize this landscape, the investigators conducted a retrospective analysis involving 1,066 patients with Barrett’s esophagus who underwent initial surveillance endoscopy between 1991 and 2019. Approximately three out of four surveillance endoscopies (77%) were performed by gastroenterologists, while the remaining were performed by nongastroenterologists, such as family practitioners or surgeons. About 60% of patients were adequately biopsied according to the Seattle protocol.

Analysis revealed that the neoplasia detection rate was 4.9% (95% confidence interval, 3.8%-6.4%), which is less than the previously reported rate of 7%. HGD was more common than EAC (33 cases vs. 20 cases). Out of 1,066 patients, 391 without neoplasia on initial endoscopy underwent repeat endoscopy within a year. Among these individuals, HGD or EAC was detected in eight patients, which suggests that 13% of diagnoses were missed on initial endoscopy, a rate well below the previously reported range of 25%-33%.
 

Technology challenged by technique

The neoplasia detection rate “appeared to increase significantly from 1991 to 2019 on univariate analysis (particularly after 2000), but this was not observed on multivariate analysis,” the investigators wrote. “This was despite the introduction of high definition monitors and high resolution endoscopes in subsequent years.

“This may suggest that in a low dysplasia prevalence setting, basic techniques such as careful white light inspection of the [Barrett’s esophagus] mucosa along with targeted and Seattle protocol biopsies may be more important,” they noted.

The importance of technique may be further supported by another finding: Gastroenterologists detected neoplasia almost four times as often as did nongastroenterologists (odds ratio, 3.6; P = .0154).

“This finding is novel and may be due to additional training in endoscopy, lesion recognition, and familiarity with surveillance guidelines in gastroenterologists,” the investigators wrote. “If this finding is replicated in other cohorts, it may support recommendations for the performance of surveillance by endoscopists trained in gastrointestinal endoscopy and well-versed in surveillance guidelines.

“[U]sing neoplasia detection as a quality metric coupled with outcome measures such as missed dysplasia rates could improve adherence to established biopsy protocols and improve the quality of care to patients,” they wrote. “Ultimately, this can be an opportunity to develop a high-value, evidence-based quality metric in [Barrett’s esophagus] surveillance.”

The authors acknowledged some limitations to their study. Its retrospective design meant no one biopsy protocol could be adopted across the entire study period; however, the results were “unchanged” when restricted to the period after introduction of the Seattle protocol in 2000. The study’s long period could have left results susceptible to changing guidelines, but the neoplasia detection rates remained relatively stable over time.

“Because prior reports consisted largely of tertiary care center cohorts, our findings may reflect the absence of referral bias and be more generalizable,” the investigators wrote.

The study was funded by the National Institute of Aging and the National Cancer Institute. The investigators disclosed relationships with Celgene, Nine Point Medical, Takeda, and others.

FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Lasting norovirus immunity may depend on T cells

Norovirus-specific cell immunity is durable

Protection against norovirus gastroenteritis is supported in part by norovirus-specific CD8+ T cells that reside in peripheral, intestinal, and lymphoid tissues, according to investigators.

These findings, and the molecular tools used to discover them, could guide development of a norovirus vaccine and novel cellular therapies, according to lead author Ajinkya Pattekar, MD, of the University of Pennsylvania, Philadelphia, and colleagues.

“Currently, there are no approved pharmacologic therapies against norovirus, and despite several promising clinical trials, an effective vaccine is not available,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology. This shortfall may stem from an incomplete understanding of norovirus immunity, they suggested.

They noted that most previous research has focused on humoral immunity, which appears variable between individuals, with some people exhibiting a strong humoral response and others mounting only partial protection. The investigators also noted that, depending on which studies were examined, this type of defense could last years or fade within weeks to months, and that “immune mechanisms other than antibodies may be important for protection against noroviruses.”

Specifically, cellular immunity may be at work. A 2020 study involving volunteers showed that T cells were cross-reactive to a type of norovirus the participants had never been exposed to.

“These findings suggest that T cells may target conserved epitopes and could offer cross-protection against a broad range of noroviruses,” Dr. Pattekar and colleagues wrote.

To test this hypothesis, they first collected peripheral blood mononuclear cells (PBMCs) from three healthy volunteers with unknown norovirus exposure history. Serum samples were then screened for functional norovirus antibodies, based on their capacity to block binding between virus-like particles (VLPs) and histo–blood group antigens (HBGAs). This revealed disparate profiles of blocking antibodies against various norovirus strains. While donor 1 and donor 2 had antibodies against multiple strains, donor 3 lacked norovirus antibodies. Further testing showed that this latter individual was a nonsecretor with limited exposure history.

Next, the investigators tested donor PBMCs for norovirus-specific T-cell responses with use of overlapping libraries of peptides for each of the three norovirus open reading frames (ORF1, ORF2, and ORF3). T-cell responses, predominantly involving CD8+ T cells, were observed in all donors. While donor 1 had the greatest response to ORF1, donors 2 and 3 had responses that focused on ORF2.

“Thus, norovirus-specific T cells targeting ORF1 and ORF2 epitopes are present in peripheral blood from healthy donors regardless of secretor status,” the investigators wrote.

To better characterize T-cell epitopes, the investigators subdivided the overlapping peptide libraries into pools of shorter peptides, then exposed donor PBMCs to these smaller component pools. This revealed eight HLA class I restricted epitopes derived from a genogroup II.4 pandemic norovirus strain; this group of variants has been responsible for all six norovirus pandemics since 1996.

Closer examination of the epitopes showed that they were “broadly conserved beyond GII.4.” Only one epitope exhibited variation in the C-terminal aromatic anchor, and it was nondominant. The investigators therefore identified seven immunodominant CD8+ epitopes, which they considered “valuable targets for vaccine and cell-based therapies.

“These data further confirm that epitope-specific CD8+ T cells are a universal feature of the overall norovirus immune response and could be an attractive target for future vaccines,” the investigators wrote.

Additional testing involving samples of spleen, mesenteric lymph nodes, and duodenum from deceased individuals showed presence of norovirus-specific CD8+ T cells, with particular abundance in intestinal tissue, and distinct phenotypes and functional properties in different tissue types.

“Future studies using tetramers and intestinal samples should build on these observations and fully define the location and microenvironment of norovirus-specific T cells,” the investigators wrote. “If carried out in the context of a vaccine trial, such studies could be highly valuable in elucidating tissue-resident memory correlates of norovirus immunity.”

The study was funded by the National Institutes of Health, the Wellcome Trust, and Deutsche Forschungsgemeinschaft. The investigators reported no conflicts of interest.


Understanding the immune correlates of protection for norovirus is important for the development and evaluation of candidate vaccines and to better clarify the variation in host susceptibility to infection.

Craig B. Wilen, MD, PhD, Yale University, New Haven, Conn.

Prior research on the human immune response to norovirus infection has largely focused on the antibody response. Less is known about the antinorovirus T cell response, which can target and clear virus-infected cells. Notably, antiviral CD8+ T cells are critical for control of norovirus infection in mouse models, which suggests a similarly important role in humans. In this study by Dr. Pattekar and colleagues, the authors generated human norovirus-specific peptides covering the entire viral proteome, and then they used these peptides to identify and characterize norovirus-specific CD8+ T cells from the blood, spleen, lymph nodes, and intestinal lamina propria of human donors who were not actively infected by norovirus. The authors identified virus-specific memory T cells in the blood and intestines. Further, they found several HLA class I restricted virus epitopes that are highly conserved among the most commonly circulating GII.4 noroviruses. These norovirus-specific T cells represented about 0.5% of all cells, revealing that norovirus induces a durable population of memory T cells.

Further research is needed to determine whether norovirus-specific CD8+ T cells are necessary or sufficient for preventing norovirus infection and disease in people. This important study provides novel tools and increases our understanding of cell-mediated immunity to human norovirus infection that will influence future vaccine design and evaluation for this important human pathogen.

Craig B. Wilen, MD, PhD, is assistant professor of laboratory medicine and immunobiology at Yale University, New Haven, Conn. He does not have any conflicts to disclose.


FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY


Low-risk adenomas may not elevate risk of CRC-related death

What’s the best timing for CRC surveillance?

Unlike high-risk adenomas (HRAs), low-risk adenomas (LRAs) have a minimal association with risk of metachronous colorectal cancer (CRC), and no relationship with odds of metachronous CRC-related mortality, according to a meta-analysis of more than 500,000 individuals.

Dr. Abhiram Duvvuri, University of Kansas Medical Center, Kansas City

These findings should inform surveillance guidelines, supporting the same follow-up for individuals with LRAs as for those with no adenomas, reported lead author Abhiram Duvvuri, MD, of the division of gastroenterology and hepatology at the University of Kansas, Kansas City, and colleagues. Currently, the United States Multi-Society Task Force on Colorectal Cancer advises colonoscopy intervals of 3 years for individuals with HRAs, 7-10 years for those with LRAs, and 10 years for those without adenomas.

“The evidence supporting these surveillance recommendations for clinically relevant endpoints such as cancer and cancer-related deaths among patients who undergo adenoma removal, particularly LRA, is minimal, because most of the evidence was based on the surrogate risk of metachronous advanced neoplasia,” the investigators wrote in Gastroenterology.

To provide more solid evidence, the investigators performed a systematic review and meta-analysis, ultimately analyzing 12 studies with data from 510,019 individuals at a mean age of 59.2 years. All studies reported rates of LRA, HRA, or no adenoma at baseline colonoscopy, plus incidence of metachronous CRC and/or CRC-related mortality. With these data, the investigators determined incidence of metachronous CRC and CRC-related mortality for each of the adenoma groups and also compared these incidences per 10,000 person-years of follow-up across groups.

After a mean follow-up of 8.5 years, patients with HRAs had a significantly higher rate of CRC compared with patients who had LRAs (13.81 vs. 4.5; odds ratio, 2.35; 95% confidence interval, 1.72-3.20) or no adenomas (13.81 vs. 3.4; OR, 2.92; 95% CI, 2.31-3.69). Similarly, but to a lesser degree, LRAs were associated with significantly greater risk of CRC than that of no adenomas (4.5 vs. 3.4; OR, 1.26; 95% CI, 1.06-1.51).

Data on CRC-related mortality further supported these minimal risk profiles because LRAs did not significantly increase the risk of CRC-related mortality compared with no adenomas (OR, 1.15; 95% CI, 0.76-1.74). In contrast, HRAs were associated with significantly greater risk of CRC-related death than that of both LRAs (OR, 2.48; 95% CI, 1.30-4.75) and no adenomas (OR, 2.69; 95% CI, 1.87-3.87).

The investigators acknowledged certain limitations of their study. For one, the meta-analysis included no randomized controlled trials, which can introduce bias. Loss of patients to follow-up is also possible; however, the investigators noted that a robust sample of patients nonetheless remained available for the study outcomes. There is also a risk of comparability bias in that the HRA and LRA groups underwent more colonoscopies; however, the duration of follow-up and timing of last colonoscopy were similar among groups. Lastly, the patient sample may not have been fully representative because of healthy screenee bias, but the investigators compared the groups against the general population to minimize that bias.

The investigators also highlighted several strengths of their study that make their findings more reliable than those of past meta-analyses. For one, their study is the largest of its kind to date, and involved a significantly higher number of patients with LRA and no adenomas. Also, in contrast with previous studies, CRC and CRC-related mortality were evaluated rather than advanced adenomas, they noted.

“Furthermore, we also analyzed CRC incidence and mortality in the LRA group compared with the general population, with the [standardized incidence ratio] being lower and [standardized mortality ratio] being comparable, confirming that it is indeed a low-risk group,” they wrote.

Considering these strengths and the nature of their findings, Dr. Duvvuri and colleagues called for a more conservative approach to CRC surveillance among individuals with LRAs, and more research to investigate extending colonoscopy intervals even further.

“We recommend that the interval for follow-up colonoscopy should be the same in patients with LRAs or no adenomas but that the HRA group should have a more frequent surveillance interval for CRC surveillance compared with these groups,” they concluded. “Future studies should evaluate whether surveillance intervals could be lengthened beyond 10 years in the no-adenoma and LRA groups after an initial high-quality index colonoscopy.”

One author disclosed affiliations with Erbe, Cdx Labs, Aries, and others. Dr. Duvvuri and the remaining authors disclosed no conflicts.


Despite evidence suggesting that colorectal cancer (CRC) incidence and mortality can be decreased through the endoscopic removal of adenomatous polyps, the question remains as to whether further endoscopic surveillance is necessary after polypectomy and, if so, how often. The most recent iteration of the United States Multi-Society Task Force guidelines endorsed a lengthening of the surveillance interval following the removal of low-risk adenomas (LRAs), defined as 1-2 tubular adenomas <10 mm with low-grade dysplasia, while maintaining a shorter interval for high-risk adenomas (HRAs), defined as advanced adenomas (villous histology, high-grade dysplasia, or ≥10 mm) or ≥3 adenomas.


Dr. Duvvuri and colleagues present the results of a systematic review and meta-analysis of studies examining metachronous CRC incidence and mortality following index colonoscopy. They found a small but statistically significant increase in the incidence of CRC but no significant difference in CRC mortality when comparing patients with LRAs to those with no adenomas. In contrast, they found both a statistically and clinically significant difference in CRC incidence/mortality when comparing patients with HRAs to both those with no adenomas and those with LRAs. They concluded that these results support a recommendation for no difference in follow-up surveillance between patients with LRAs and no adenomas but do support more frequent surveillance for patients with HRAs at index colonoscopy.

Future studies should better examine the timing of neoplasm incidence/recurrence following adenoma removal and also examine metachronous CRC incidence/mortality in patients with sessile serrated lesions at index colonoscopy.

Reid M. Ness, MD, MPH, AGAF, is an associate professor in the division of gastroenterology, hepatology, and nutrition at Vanderbilt University Medical Center and at the VA Tennessee Valley Healthcare System's Nashville campus. He is an investigator in the Vanderbilt-Ingram Cancer Center. Dr. Ness has no financial relationships to disclose.


What’s the best timing for CRC surveillance?



Article Source

FROM GASTROENTEROLOGY


Surveillance endoscopy in Barrett’s may perform better than expected

There’s still room for improvement
Article Type
Changed
Thu, 04/08/2021 - 14:41

For patients with Barrett’s esophagus, surveillance endoscopy detects high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) more often than previously reported, according to a retrospective analysis of more than 1,000 patients.


The neoplasia detection rate, defined as the rate of HGD or EAC found on initial surveillance endoscopy, was also lower than that observed in past studies, according to lead author Lovekirat Dhaliwal, MBBS, of Mayo Clinic, Rochester, Minn., and colleagues.

This study’s findings may help define quality control benchmarks for endoscopic surveillance of Barrett’s esophagus, the investigators wrote in Clinical Gastroenterology and Hepatology. Accurate metrics are needed, they noted, because almost 9 out of 10 patients with Barrett’s esophagus present with EAC outside of a surveillance program, which “may represent missed opportunities at screening.” At the same time, a previous study by the investigators and one from another group have suggested that 25%-33% of HGD/EAC cases may go undetected by initial surveillance endoscopy.

“Dysplasia detection in [Barrett’s esophagus] is challenging because of its patchy distribution and often subtle appearance,” the investigators noted. “Lack of compliance with recommended biopsy guidelines is also well-documented.”

On the other hand, Dr. Dhaliwal and colleagues suggested that previous studies may not accurately portray community practice and, therefore, have limited value in determining quality control metrics. A 2019 review, for instance, reported a neoplasia detection rate of 7% among patients with Barrett’s esophagus, but this finding “is composed of data from largely referral center cohorts with endoscopy performed by experienced academic gastroenterologists,” they wrote, which may lead to overestimation of such detection.

To better characterize this landscape, the investigators conducted a retrospective analysis involving 1,066 patients with Barrett’s esophagus who underwent initial surveillance endoscopy between 1991 and 2019. Approximately three out of four surveillance endoscopies (77%) were performed by gastroenterologists, while the remaining were performed by nongastroenterologists, such as family practitioners or surgeons. About 60% of patients were adequately biopsied according to the Seattle protocol.

Analysis revealed that the neoplasia detection rate was 4.9% (95% confidence interval, 3.8%-6.4%), which is less than the previously reported rate of 7%. HGD was more common than EAC (33 cases vs. 20 cases). Of the 1,066 patients, 391 without neoplasia on initial endoscopy underwent repeat endoscopy within a year. Among these individuals, HGD or EAC was detected in eight patients, meaning 8 of the 61 total diagnoses (13%) were missed on initial endoscopy, a rate well below the previously reported range of 25%-33%.
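Both headline figures follow from simple proportions on the counts reported above. A minimal sketch (assuming a Wilson score interval, which happens to reproduce the reported CI; the study's exact interval method isn't stated in this summary):

```python
import math

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Neoplasia detection rate: 33 HGD + 20 EAC among 1,066 patients
cases = 33 + 20
ndr = cases / 1066
lo, hi = wilson_ci(cases, 1066)
print(f"NDR = {ndr:.2%} (95% CI, {lo:.2%}-{hi:.2%})")
# → NDR = 4.97% (95% CI, 3.82%-6.45%), i.e. the reported 4.9% (3.8%-6.4%)

# Missed-diagnosis rate: 8 cancers found on repeat endoscopy
# out of 53 + 8 = 61 total HGD/EAC diagnoses
missed = 8 / (cases + 8)
print(f"missed = {missed:.0%}")  # → 13%
```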
 

Technology challenged by technique

The neoplasia detection rate “appeared to increase significantly from 1991 to 2019 on univariate analysis (particularly after 2000), but this was not observed on multivariate analysis,” the investigators wrote. “This was despite the introduction of high-definition monitors and high-resolution endoscopes in subsequent years.

“This may suggest that in a low dysplasia prevalence setting, basic techniques such as careful white light inspection of the [Barrett’s esophagus] mucosa along with targeted and Seattle protocol biopsies may be more important,” they noted.

The importance of technique may be further supported by another finding: Gastroenterologists detected neoplasia almost four times as often as did nongastroenterologists (odds ratio, 3.6; P = .0154).

“This finding is novel and may be due to additional training in endoscopy, lesion recognition, and familiarity with surveillance guidelines in gastroenterologists,” the investigators wrote. “If this finding is replicated in other cohorts, it may support recommendations for the performance of surveillance by endoscopists trained in gastrointestinal endoscopy and well-versed in surveillance guidelines.

“[U]sing neoplasia detection as a quality metric coupled with outcome measures such as missed dysplasia rates could improve adherence to established biopsy protocols and improve the quality of care to patients,” they wrote. “Ultimately, this can be an opportunity to develop a high-value, evidence-based quality metric in [Barrett’s esophagus] surveillance.”

The authors acknowledged some limitations to their study. Its retrospective design meant no one biopsy protocol could be adopted across the entire study period; however, the results were “unchanged” when restricted to the period after introduction of the Seattle protocol in 2000. The study’s long period could have left results susceptible to changing guidelines, but the neoplasia detection rates remained relatively stable over time.

“Because prior reports consisted largely of tertiary care center cohorts, our findings may reflect the absence of referral bias and be more generalizable,” the investigators wrote.

The study was funded by the National Institute of Aging and the National Cancer Institute. The investigators disclosed relationships with Celgene, Nine Point Medical, Takeda, and others.


The current study by Dr. Dhaliwal and colleagues evaluates the neoplasia detection rate (NDR) for high-grade dysplasia (HGD) or esophageal adenocarcinoma (EAC) during surveillance endoscopy, a proposed novel quality metric for Barrett's esophagus (BE). Within a population cohort, the investigators found the NDR was 4.9%, and it did not increase significantly during the study period from 1991 to 2019. Gastroenterologists were more likely to report visible abnormalities during endoscopy, and this was a significant predictor of neoplasia detection in a multivariable model. However, the overall rate of missed HGD or EAC was 13%, and this was not associated with procedural specialty. Interestingly, even with only 57% adherence to the Seattle protocol in this study, there was no association with missed lesions.


Despite advances in endoscopic imaging and measures establishing quality for biopsy technique, there remains substantial room for improvement in the endoscopic management of patients with BE. While unable to evaluate all factors associated with neoplasia detection, the authors have provided an important real-world benchmark for NDR. Further study is needed to establish the connection between NDR and missed dysplasia, as well as its impact on outcomes such as EAC staging and mortality. Critically, understanding the role of specialized training and other factors such as inspection time to improve NDR is needed.

David A. Leiman, MD, MSHP, is the chair of the AGA Quality Committee. He is an assistant professor of medicine at Duke University, Durham, N.C., where he serves as director of esophageal research and quality. He has no conflicts.



For patients with Barrett’s esophagus, surveillance endoscopy detects high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) more often than previously reported, according to a retrospective analysis of more than 1,000 patients.

Dr. Lovekirat Dhaliwal, division of gastroenterology and hepatology, Mayo Clinic, Rochester, Minn.
Dr. Lovekirat Dhaliwal

Neoplasia detection rate, defined as findings on initial surveillance endoscopy, was also lower than that observed in past studies, according to lead author Lovekirat Dhaliwal, MBBS, of Mayo Clinic, Rochester, Minn., and colleagues.

This study’s findings may help define quality control benchmarks for endoscopic surveillance of Barrett’s esophagus, the investigators wrote in Clinical Gastroenterology and Hepatology. Accurate metrics are needed, they noted, because almost 9 out of 10 patients with Barrett’s esophagus present with EAC outside of a surveillance program, which “may represent missed opportunities at screening.” At the same time, a previous study by the investigators and one from another group, have suggested that 25%-33% of HGD/EAC cases may go undetected by initial surveillance endoscopy.

“Dysplasia detection in [Barrett’s esophagus] is challenging because of its patchy distribution and often subtle appearance,” the investigators noted. “Lack of compliance with recommended biopsy guidelines is also well-documented.”

On the other hand, Dr. Dhaliwal and colleagues suggested that previous studies may not accurately portray community practice and, therefore, have limited value in determining quality control metrics. A 2019 review, for instance, reported a neoplasia detection rate of 7% among patients with Barrett’s esophagus, but this finding “is composed of data from largely referral center cohorts with endoscopy performed by experienced academic gastroenterologists,” they wrote, which may lead to overestimation of such detection.

For patients with Barrett’s esophagus, surveillance endoscopy detects high-grade dysplasia (HGD) and esophageal adenocarcinoma (EAC) more often than previously reported, according to a retrospective analysis of more than 1,000 patients.

Dr. Lovekirat Dhaliwal, division of gastroenterology and hepatology, Mayo Clinic, Rochester, Minn.

Neoplasia detection rate, defined as findings on initial surveillance endoscopy, was also lower than that observed in past studies, according to lead author Lovekirat Dhaliwal, MBBS, of Mayo Clinic, Rochester, Minn., and colleagues.

This study’s findings may help define quality control benchmarks for endoscopic surveillance of Barrett’s esophagus, the investigators wrote in Clinical Gastroenterology and Hepatology. Accurate metrics are needed, they noted, because almost 9 out of 10 patients with Barrett’s esophagus present with EAC outside of a surveillance program, which “may represent missed opportunities at screening.” At the same time, a previous study by the investigators, as well as one from another group, suggested that 25%-33% of HGD/EAC cases may go undetected by initial surveillance endoscopy.

“Dysplasia detection in [Barrett’s esophagus] is challenging because of its patchy distribution and often subtle appearance,” the investigators noted. “Lack of compliance with recommended biopsy guidelines is also well-documented.”

On the other hand, Dr. Dhaliwal and colleagues suggested that previous studies may not accurately portray community practice and, therefore, have limited value in determining quality control metrics. A 2019 review, for instance, reported a neoplasia detection rate of 7% among patients with Barrett’s esophagus, but this finding “is composed of data from largely referral center cohorts with endoscopy performed by experienced academic gastroenterologists,” they wrote, which may lead to overestimation of such detection.

To better characterize this landscape, the investigators conducted a retrospective analysis involving 1,066 patients with Barrett’s esophagus who underwent initial surveillance endoscopy between 1991 and 2019. Approximately three out of four surveillance endoscopies (77%) were performed by gastroenterologists, while the remaining were performed by nongastroenterologists, such as family practitioners or surgeons. About 60% of patients were adequately biopsied according to the Seattle protocol.

Analysis revealed that the neoplasia detection rate was 4.9% (95% confidence interval, 3.8%-6.4%), which is less than the previously reported rate of 7%. HGD was more common than EAC (33 cases vs. 20 cases). Out of 1,066 patients, 391 without neoplasia on initial endoscopy underwent repeat endoscopy within a year. Among these individuals, HGD or EAC was detected in eight patients, which suggests that 13% of diagnoses were missed on initial endoscopy, a rate well below the previously reported range of 25%-33%.
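The headline figures can be sanity-checked with a few lines of arithmetic. The sketch below assumes the 95% confidence interval is a Wilson score interval and that the 13% missed-neoplasia estimate uses 53 + 8 = 61 total cases as its denominator; neither assumption is stated in the article, though both reproduce the reported numbers.

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Neoplasia detection rate: 33 HGD + 20 EAC = 53 of 1,066 patients
ndr = 53 / 1066                 # ≈ 0.0497, i.e. the reported 4.9%
lo, hi = wilson_ci(53, 1066)    # ≈ (0.038, 0.064), matching the reported 3.8%-6.4% CI

# Missed-neoplasia estimate: 8 cases found on repeat endoscopy within a year,
# out of an assumed total of 53 + 8 = 61 neoplasia cases
missed = 8 / (53 + 8)           # ≈ 0.131, i.e. the reported ~13%
```

That the Wilson interval lands exactly on the published 3.8%-6.4% bounds supports, but does not prove, the assumption about the authors' method.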
 

Technology challenged by technique

The neoplasia detection rate “appeared to increase significantly from 1991 to 2019 on univariate analysis (particularly after 2000), but this was not observed on multivariate analysis,” the investigators wrote. “This was despite the introduction of high definition monitors and high resolution endoscopes in subsequent years.

“This may suggest that in a low dysplasia prevalence setting, basic techniques such as careful white light inspection of the [Barrett’s esophagus] mucosa along with targeted and Seattle protocol biopsies may be more important,” they noted.

The importance of technique may be further supported by another finding: Gastroenterologists detected neoplasia almost four times as often as did nongastroenterologists (odds ratio, 3.6; P = .0154).

“This finding is novel and may be due to additional training in endoscopy, lesion recognition, and familiarity with surveillance guidelines in gastroenterologists,” the investigators wrote. “If this finding is replicated in other cohorts, it may support recommendations for the performance of surveillance by endoscopists trained in gastrointestinal endoscopy and well-versed in surveillance guidelines.

“[U]sing neoplasia detection as a quality metric coupled with outcome measures such as missed dysplasia rates could improve adherence to established biopsy protocols and improve the quality of care to patients,” they wrote. “Ultimately, this can be an opportunity to develop a high-value, evidence-based quality metric in [Barrett’s esophagus] surveillance.”

The authors acknowledged some limitations to their study. Its retrospective design meant no one biopsy protocol could be adopted across the entire study period; however, the results were “unchanged” when restricted to the period after introduction of the Seattle protocol in 2000. The study’s long period could have left results susceptible to changing guidelines, but the neoplasia detection rates remained relatively stable over time.

“Because prior reports consisted largely of tertiary care center cohorts, our findings may reflect the absence of referral bias and be more generalizable,” the investigators wrote.

The study was funded by the National Institute on Aging and the National Cancer Institute. The investigators disclosed relationships with Celgene, Nine Point Medical, Takeda, and others.


FROM CLINICAL GASTROENTEROLOGY AND HEPATOLOGY


Lasting norovirus immunity may depend on T cells

Article Type
Changed
Thu, 04/08/2021 - 14:06

 

Protection against norovirus gastroenteritis is supported in part by norovirus-specific CD8+ T cells that reside in peripheral, intestinal, and lymphoid tissues, according to investigators.

These findings, and the molecular tools used to discover them, could guide development of a norovirus vaccine and novel cellular therapies, according to lead author Ajinkya Pattekar, MD, of the University of Pennsylvania, Philadelphia, and colleagues.

“Currently, there are no approved pharmacologic therapies against norovirus, and despite several promising clinical trials, an effective vaccine is not available,” the investigators wrote in Cellular and Molecular Gastroenterology and Hepatology. This gap may stem from an incomplete understanding of norovirus immunity, according to Dr. Pattekar and colleagues.

They noted that most previous research has focused on humoral immunity, which appears variable between individuals, with some people exhibiting a strong humoral response, while others mount only partial humoral protection. The investigators also noted that, depending on which studies were examined, this type of defense could last years or fade within weeks to months and that “immune mechanisms other than antibodies may be important for protection against noroviruses.”

Specifically, cellular immunity may be at work. A 2020 study involving volunteers showed that T cells were cross-reactive to a type of norovirus the participants had never been exposed to.

“These findings suggest that T cells may target conserved epitopes and could offer cross-protection against a broad range of noroviruses,” Dr. Pattekar and colleagues wrote.

To test this hypothesis, they first collected peripheral blood mononuclear cells (PBMCs) from three healthy volunteers with unknown norovirus exposure history. Then serum samples were screened for norovirus functional antibodies via the binding between virus-like particles (VLPs) and histo–blood group antigens (HBGAs). This revealed disparate profiles of blocking antibodies against various norovirus strains. While donor 1 and donor 2 had antibodies against multiple strains, donor 3 lacked norovirus antibodies. Further testing showed that this latter individual was a nonsecretor with limited exposure history.

Next, the investigators tested donor PBMCs for norovirus-specific T-cell responses with use of overlapping libraries of peptides for each of the three norovirus open reading frames (ORF1, ORF2, and ORF3). T-cell responses, predominantly involving CD8+ T cells, were observed in all donors. While donor 1 had the greatest response to ORF1, donors 2 and 3 had responses that focused on ORF2.

“Thus, norovirus-specific T cells targeting ORF1 and ORF2 epitopes are present in peripheral blood from healthy donors regardless of secretor status,” the investigators wrote.

To better characterize T-cell epitopes, the investigators subdivided the overlapping peptide libraries into pools of shorter peptides, then stimulated donor PBMCs with these smaller component pools. This revealed eight HLA class I–restricted epitopes derived from a genogroup II.4 (GII.4) pandemic norovirus strain; this group of variants has been responsible for all six norovirus pandemics since 1996.

Closer examination of the epitopes showed that they were “broadly conserved beyond GII.4.” Only one epitope exhibited variation in the C-terminal aromatic anchor, and it was nondominant. The investigators therefore identified seven immunodominant CD8+ epitopes, which they considered “valuable targets for vaccine and cell-based therapies.

“These data further confirm that epitope-specific CD8+ T cells are a universal feature of the overall norovirus immune response and could be an attractive target for future vaccines,” the investigators wrote.

Additional testing involving samples of spleen, mesenteric lymph nodes, and duodenum from deceased individuals showed presence of norovirus-specific CD8+ T cells, with particular abundance in intestinal tissue, and distinct phenotypes and functional properties in different tissue types.

“Future studies using tetramers and intestinal samples should build on these observations and fully define the location and microenvironment of norovirus-specific T cells,” the investigators wrote. “If carried out in the context of a vaccine trial, such studies could be highly valuable in elucidating tissue-resident memory correlates of norovirus immunity.”

The study was funded by the National Institutes of Health, the Wellcome Trust, and Deutsche Forschungsgemeinschaft. The investigators reported no conflicts of interest.


FROM CELLULAR AND MOLECULAR GASTROENTEROLOGY AND HEPATOLOGY


Pediatric NAFLD almost always stems from excess body weight, not other etiologies

Article Type
Changed
Thu, 04/15/2021 - 12:59

 

Nonalcoholic fatty liver disease (NAFLD) in children is almost always caused by excess body weight, not other etiologies, based on a retrospective analysis of 900 patients.

Just 2% of children with overweight or obesity and suspected NAFLD had other causes of liver disease, and none tested positive for autoimmune hepatitis (AIH), reported lead author Toshifumi Yodoshi, MD, PhD, of Cincinnati Children’s Hospital Medical Center, and colleagues.

“Currently, recommended testing of patients with suspected NAFLD includes ruling out the following conditions: AIH, Wilson disease, hemochromatosis, alpha-1 antitrypsin [A1AT] deficiency, viral hepatitis, celiac disease, and thyroid dysfunction,” the investigators wrote in Pediatrics.

Yet evidence supporting this particular battery of tests is scant; just one previous pediatric study has estimated the prevalence of other liver diseases among children with suspected NAFLD. The study showed that the second-most common etiology, after NAFLD, was AIH, at a rate of 4%.

But “the generalizability of these findings is uncertain,” noted Dr. Yodoshi and colleagues, as the study was conducted at one tertiary center in the western United States, among a population that was predominantly Hispanic.

This uncertainty spurred the present study, which was conducted at two pediatric centers: Cincinnati Children’s Hospital Medical Center (2009-2017) and Yale New Haven (Conn.) Children’s Hospital (2012-2017).

The final analysis involved 900 patients aged 18 years or younger with suspected NAFLD based on hepatic steatosis detected via imaging and/or elevated serum aminotransferases. Demographically, a slight majority of the patients were boys (63%), and approximately one-quarter (26%) were Hispanic. Median BMI z score was 2.45, with three out of four patients (76%) exhibiting severe obesity. Out of 900 patients, 358 (40%) underwent liver biopsy, among whom 46% had confirmed nonalcoholic steatohepatitis.

All patients underwent testing to exclude the aforementioned conditions using various diagnostics, revealing that just 2% of the population had etiologies other than NAFLD. Specifically, 11 children had thyroid dysfunction (1.2%), 3 had celiac disease (0.4%), 3 had A1AT deficiency (0.4%), 1 had hemophagocytic lymphohistiocytosis, and 1 had Hodgkin’s lymphoma. None of the children had Wilson disease, hepatitis B or C, or AIH.
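The "just 2%" figure follows directly from the case counts listed above; the short tally below simply checks that arithmetic (the dictionary of etiologies restates the article's counts, nothing more).

```python
# Tally of the non-NAFLD etiologies reported among the 900 children
other_etiologies = {
    "thyroid dysfunction": 11,
    "celiac disease": 3,
    "A1AT deficiency": 3,
    "hemophagocytic lymphohistiocytosis": 1,
    "Hodgkin's lymphoma": 1,
}
n = 900
total_other = sum(other_etiologies.values())   # 19 children
share = total_other / n                        # ≈ 0.021, i.e. the "just 2%" figure
```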

Dr. Yodoshi and colleagues highlighted the latter finding, noting that 13% of the patients had autoantibodies for AIH, but “none met composite criteria.” This contrasts with the previous study from 2013, which found an AIH rate of 4%.

“Nonetheless,” the investigators went on, “NAFLD remains a diagnosis of exclusion, and key conditions that require specific treatments must be ruled out in the workup of patients with suspected NAFLD. In the future, the cost-effectiveness of this approach will need to be investigated.”

Dr. Francis Rushton

Interpreting the findings, Francis E. Rushton, MD, of Beaufort (S.C.) Memorial Hospital emphasized the implications for preventive and interventional health care.

“This study showing an absence of etiologies other than obesity in overweight children with NAFLD provides further impetus for pediatricians to work on both preventive and treatment regimens for weight issues,” Dr. Rushton said. “Linking community-based initiatives focused on adequate nutritional support with pediatric clinical support services is critical in solving issues related to overweight in children. Tracking BMI over time and developing healthy habit goals for patients are key parts of clinical interventions.” 

The study was funded by the National Institutes of Health. The investigators reported no conflicts of interest.

Publications
Topics
Sections

 

Nonalcoholic fatty liver disease (NAFLD) in children is almost always caused by excess body weight, not other etiologies, based on a retrospective analysis of 900 patients.

Just 2% of children with overweight or obesity and suspected NAFLD had other causes of liver disease, and none tested positive for autoimmune hepatitis (AIH), reported lead author Toshifumi Yodoshi, MD, PhD, of Cincinnati Children’s Hospital Medical Center, and colleagues.

“Currently, recommended testing of patients with suspected NAFLD includes ruling out the following conditions: AIH, Wilson disease, hemochromatosis, alpha-1 antitrypsin [A1AT] deficiency, viral hepatitis, celiac disease, and thyroid dysfunction,” the investigators wrote in Pediatrics.

Yet evidence supporting this particular battery of tests is scant; just one previous pediatric study has estimated the prevalence of other liver diseases among children with suspected NAFLD. The study showed that the second-most common etiology, after NAFLD, was AIH, at a rate of 4%.

But “the generalizability of these findings is uncertain,” noted Dr. Yodoshi and colleagues, as the study was conducted at one tertiary center in the western United States, among a population that was predominantly Hispanic.

This uncertainty spurred the present study, which was conducted at two pediatric centers: Cincinnati Children’s Hospital Medical Center (2009-2017) and Yale New Haven (Conn.) Children’s Hospital (2012-2017).

The final analysis involved 900 patients aged 18 years or younger with suspected NAFLD based on hepatic steatosis detected via imaging and/or elevated serum aminotransferases. Demographically, a slight majority of the patients were boys (63%), and approximately one-quarter (26%) were Hispanic. Median BMI z score was 2.45, with three out of four patients (76%) exhibiting severe obesity. Out of 900 patients, 358 (40%) underwent liver biopsy, among whom 46% had confirmed nonalcoholic steatohepatitis.

All patients underwent testing to exclude the aforementioned conditions using various diagnostics, revealing that just 2% of the population had etiologies other than NAFLD. Specifically, 11 children had thyroid dysfunction (1.2%), 3 had celiac disease (0.4%), 3 had A1AT deficiency (0.4%), 1 had hemophagocytic lymphohistiocytosis, and 1 had Hodgkin’s lymphoma. None of the children had Wilson disease, hepatitis B or C, or AIH.

Dr. Yodoshi and colleagues highlighted the latter finding, noting that 13% of the patients had autoantibodies for AIH, but “none met composite criteria.” This contrasts with the previous study from 2013, which found an AIH rate of 4%.

“Nonetheless,” the investigators went on, “NAFLD remains a diagnosis of exclusion, and key conditions that require specific treatments must be ruled out in the workup of patients with suspected NAFLD. In the future, the cost-effectiveness of this approach will need to be investigated.”

Dr. Francis Rushton
Dr. Francis Rushton

Interpreting the findings, Francis E. Rushton, MD, of Beaufort (S.C.) Memorial Hospital emphasized the implications for preventive and interventional health care.

“This study showing an absence of etiologies other than obesity in overweight children with NAFLD provides further impetus for pediatricians to work on both preventive and treatment regimens for weight issues,” Dr. Rushton said. “Linking community-based initiatives focused on adequate nutritional support with pediatric clinical support services is critical in solving issues related to overweight in children. Tracking BMI over time and developing healthy habit goals for patients are key parts of clinical interventions.” 

The study was funded by the National Institutes of Health. The investigators reported no conflicts of interest.

 

Nonalcoholic fatty liver disease (NAFLD) in children is almost always caused by excess body weight, not other etiologies, based on a retrospective analysis of 900 patients.

Just 2% of children with overweight or obesity and suspected NAFLD had other causes of liver disease, and none tested positive for autoimmune hepatitis (AIH), reported lead author Toshifumi Yodoshi, MD, PhD, of Cincinnati Children’s Hospital Medical Center, and colleagues.

“Currently, recommended testing of patients with suspected NAFLD includes ruling out the following conditions: AIH, Wilson disease, hemochromatosis, alpha-1 antitrypsin [A1AT] deficiency, viral hepatitis, celiac disease, and thyroid dysfunction,” the investigators wrote in Pediatrics.

Yet evidence supporting this particular battery of tests is scant; just one previous pediatric study has estimated the prevalence of other liver diseases among children with suspected NAFLD. The study showed that the second-most common etiology, after NAFLD, was AIH, at a rate of 4%.

But “the generalizability of these findings is uncertain,” noted Dr. Yodoshi and colleagues, as the study was conducted at one tertiary center in the western United States, among a population that was predominantly Hispanic.

This uncertainty spurred the present study, which was conducted at two pediatric centers: Cincinnati Children’s Hospital Medical Center (2009-2017) and Yale New Haven (Conn.) Children’s Hospital (2012-2017).

The final analysis involved 900 patients aged 18 years or younger with suspected NAFLD based on hepatic steatosis detected via imaging and/or elevated serum aminotransferases. Demographically, a slight majority of the patients were boys (63%), and approximately one-quarter (26%) were Hispanic. Median BMI z score was 2.45, with three out of four patients (76%) exhibiting severe obesity. Out of 900 patients, 358 (40%) underwent liver biopsy, among whom 46% had confirmed nonalcoholic steatohepatitis.

All patients underwent testing to exclude the aforementioned conditions using various diagnostics, revealing that just 2% of the population had etiologies other than NAFLD. Specifically, 11 children had thyroid dysfunction (1.2%), 3 had celiac disease (0.4%), 3 had A1AT deficiency (0.4%), 1 had hemophagocytic lymphohistiocytosis, and 1 had Hodgkin’s lymphoma. None of the children had Wilson disease, hepatitis B or C, or AIH.

Dr. Yodoshi and colleagues highlighted the latter finding, noting that 13% of the patients had autoantibodies for AIH, but “none met composite criteria.” This contrasts with the previous study from 2013, which found an AIH rate of 4%.

“Nonetheless,” the investigators went on, “NAFLD remains a diagnosis of exclusion, and key conditions that require specific treatments must be ruled out in the workup of patients with suspected NAFLD. In the future, the cost-effectiveness of this approach will need to be investigated.”

Dr. Francis Rushton

Interpreting the findings, Francis E. Rushton, MD, of Beaufort (S.C.) Memorial Hospital, emphasized the implications for preventive and interventional health care.

“This study showing an absence of etiologies other than obesity in overweight children with NAFLD provides further impetus for pediatricians to work on both preventive and treatment regimens for weight issues,” Dr. Rushton said. “Linking community-based initiatives focused on adequate nutritional support with pediatric clinical support services is critical in solving issues related to overweight in children. Tracking BMI over time and developing healthy habit goals for patients are key parts of clinical interventions.” 

The study was funded by the National Institutes of Health. The investigators reported no conflicts of interest.


FROM PEDIATRICS


Maternal caffeine consumption, even small amounts, may reduce neonatal size

Article Type
Changed
Fri, 03/26/2021 - 15:12

For pregnant women, just half a cup of coffee a day may reduce neonatal birth size and body weight, according to a prospective study involving more than 2,500 women.

kjekol/thinkstock

That’s only 50 mg of caffeine a day, which falls below the upper threshold of 200 mg set by the American College of Obstetricians and Gynecologists, lead author Jessica Gleason, PhD, MPH, of the Eunice Kennedy Shriver National Institute of Child Health and Human Development, Bethesda, Md., and colleagues reported.

“Systematic reviews and meta-analyses have reported that maternal caffeine consumption, even in doses lower than 200 mg, is associated with a higher risk for low birth weight, small for gestational age (SGA), and fetal growth restriction, suggesting there may be no safe amount of caffeine during pregnancy,” the investigators wrote in JAMA Network Open.

Findings to date have been inconsistent, with a 2014 meta-analysis reporting contrary or null results in four out of nine studies.

Dr. Gleason and colleagues suggested that such discrepancies may be caused by uncontrolled confounding factors in some of the studies, such as smoking, as well as the inadequacy of self-reporting, which fails to incorporate variations in caffeine content between beverages, or differences in rates of metabolism between individuals.

“To our knowledge, no studies have examined the association between caffeine intake and neonatal anthropometric measures beyond weight, length, and head circumference, and few have analyzed plasma concentrations of caffeine and its metabolites or genetic variations in the rate of metabolism associated with neonatal size,” the investigators wrote.

Dr. Gleason and colleagues set out to address this knowledge gap with a prospective cohort study, including 2,055 nonsmoking women with low risk of birth defects who presented at 12 centers between 2009 and 2013. Mean participant age was 28.3 years and mean body mass index was 23.6. Races and ethnicities were represented almost evenly across four groups: Hispanic (28.2%), White (27.4%), Black (25.2%), and Asian/Pacific Islander (19.2%). Rate of caffeine metabolism was defined by the single-nucleotide variant rs762551 (CYP1A2*1F), according to which slightly more women had slow metabolism (52.7%) than fast metabolism (47.3%).

Women were enrolled at 8-13 weeks’ gestational age, at which time they underwent interviews and blood draws, allowing for measurement of caffeine and paraxanthine plasma levels, as well as self-reported caffeine consumption during the preceding week.

Over the course of six visits, fetal growth was observed via ultrasound. Medical records were used to determine birth weights and neonatal anthropometric measures, including fat and skin fold mass, body length, and circumferences of the thigh, arm, abdomen, and head.

Neonatal measurements were compared with plasma levels of caffeine and paraxanthine, both continuously and as quartiles (Q1, ≤ 28.3 ng/mL; Q2, 28.4-157.1 ng/mL; Q3, 157.2-658.8 ng/mL; Q4, > 658.8 ng/mL). Comparisons were also made with self-reported caffeine intake.
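As an illustrative sketch only (not the study's analysis code), the reported cut points can be used to assign a plasma caffeine concentration to its quartile; the function name and sample values below are hypothetical:

```python
import bisect

# Upper bounds (ng/mL) of Q1, Q2, and Q3 as reported in the article;
# anything above 658.8 ng/mL falls in Q4.
QUARTILE_UPPER_BOUNDS = [28.3, 157.1, 658.8]

def caffeine_quartile(ng_ml: float) -> str:
    """Return the quartile label (Q1-Q4) for a plasma caffeine level."""
    # bisect_left keeps a value equal to a bound in the lower quartile,
    # matching the inclusive upper bounds reported (e.g., Q1 <= 28.3).
    return f"Q{bisect.bisect_left(QUARTILE_UPPER_BOUNDS, ng_ml) + 1}"

print(caffeine_quartile(15.0))   # Q1
print(caffeine_quartile(28.4))   # Q2
print(caffeine_quartile(900.0))  # Q4
```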

Women who reported drinking 1-50 mg of caffeine per day had neonates with smaller subscapular skin folds (beta = –0.14 mm; 95% confidence interval, –0.27 to –0.01 mm), while those who reported more than 50 mg per day had newborns with lower birth weight (beta = –66 g; 95% CI, –121 to –10 g), and smaller circumferences of mid-upper thigh (beta = –0.32 cm; 95% CI, –0.55 to –0.09 cm), anterior thigh skin fold (beta = –0.24 mm; 95% CI, –0.47 to –0.01 mm), and mid-upper arm (beta = –0.17 cm; 95% CI, –0.31 to –0.02 cm).

Caffeine plasma concentrations supported these findings.

Compared with women who had caffeine plasma concentrations in the lowest quartile, those in the highest quartile gave birth to neonates with shorter length (beta = –0.44 cm; P = .04 for trend) and lower body weight (beta = –84.3 g; P = .04 for trend), as well as smaller mid-upper arm circumference (beta = –0.25 cm; P = .02 for trend), mid-upper thigh circumference (beta = –0.29 cm; P = .07 for trend), and head circumference (beta = –0.28 cm; P < .001 for trend). A comparison of lower and upper paraxanthine quartiles revealed similar trends, as did analyses of continuous measures.

“Our results suggest that caffeine consumption during pregnancy, even at levels much lower than the recommended 200 mg per day of caffeine may be associated with decreased fetal growth,” the investigators concluded.

Dr. Sarah Prager

Sarah W. Prager, MD, of the University of Washington, Seattle, suggested that the findings “do not demonstrate that caffeine has a clinically meaningful negative clinical impact on newborn size and weight.”

She noted that there was no difference in the rate of SGA between plasma caffeine quartiles, and that most patients were thin, which may not accurately represent the U.S. population.

“Based on these new data, my take home message to patients would be that increasing amounts of caffeine can have a small but real impact on the size of their baby at birth, though it is unlikely to result in a diagnosis of SGA,” she said. “Pregnant patients may want to limit caffeine intake even more than the ACOG recommendation of 200 mg per day.”

Dr. Robert Silver

According to Robert M. Silver, MD, of the University of Utah Health Sciences Center, Salt Lake City, “data from this study are of high quality, owing to the prospective cohort design, large numbers, assessment of biomarkers, and sophisticated analyses.”

Still, he urged a cautious interpretation from a clinical perspective.

“It is important to not overreact to these data,” he said. “The decrease in fetal growth associated with caffeine is small and may prove to be clinically meaningless. Accordingly, clinical recommendations regarding caffeine intake during pregnancy should not be modified solely based on this study.”

Dr. Silver suggested that the findings deserve additional investigation.

“These observations warrant further research about the effects of caffeine exposure during pregnancy,” he said. “Ideally, studies should assess the effect of caffeine exposure on fetal growth in various pregnancy epochs as well as on neonatal and childhood growth.”

The study was funded by the Intramural Research Program of the NICHD. Dr. Gerlanc is an employee of The Prospective Group, which was contracted to provide statistical support.



FROM JAMA NETWORK OPEN
