Birth method affects microbiome and vaccination response

Babies born vaginally have a different microbiome from those born by Caesarean section and show heightened responses to childhood vaccinations, according to a new study heralded as “interesting and important” by experts.

The microbiome is known to play a role in immune responses to vaccination. However, the relationship between early-life effects on intestinal microbiota composition and subsequent childhood vaccine responses had remained poorly understood. In the new study, “the findings suggest that vaginal birthing resulted in a microbiota composition associated with an increase in a specific type of antibody response to two routine childhood vaccines in healthy babies, compared with Caesarean section,” the authors said.

Researchers from the University of Edinburgh, with colleagues at Spaarne Hospital and University Medical Centre in Utrecht, and the National Institute for Public Health and the Environment in The Netherlands, tracked the development of the gut microbiome in a cohort of 120 healthy, full-term infants and assessed their antibody levels following two common childhood vaccinations.

The study, published in Nature Communications, found “a clear relationship between microbes in the gut of those babies and levels of antibodies.” Vaginal birth was associated not only with increased levels of Bifidobacterium and Escherichia coli in the gut microbiome in the first months of life but also with higher IgG antibody responses against both pneumococcal and meningococcal vaccines.

Antibody responses doubled after vaginal birth

The babies were given pneumococcal and meningococcal vaccinations at 8 and 12 weeks, and saliva was collected at follow-up visits at ages 12 and 18 months for antibody measurement. In the 101 babies tested for pneumococcal antibodies, the researchers found that antibody levels were twice as high among babies delivered naturally, compared with those delivered by C-section. High levels of two gut bacteria in particular – Bifidobacterium and E. coli – were associated with high antibody responses to the pneumococcal vaccine, showing that the microbiome mediated the link between mode of delivery and pneumococcal vaccine responses.

In 66 babies tested for anti-meningococcal antibodies, antibody levels were 1.7 times higher for vaginally born babies than for those delivered via C-section, and high antibody levels were particularly associated with high levels of E. coli in the babies’ microbiome.

The results were also influenced by breast-feeding: even among children born vaginally, breast-feeding was linked with pneumococcal antibody levels 3.5 times higher than those of formula-fed children. In contrast, levels of antibodies against meningococcus were unaffected by breast-feeding status.

Microbiome ‘sets level of infection protection’

The team said: “The baby acquires Bifidobacterium and E. coli bacteria through natural birth, and human milk is needed to provide the sugars for these bacteria to thrive on.” They explained: “The gut microbiome is seeded at birth, developing rapidly over the first few months of life, and is influenced mostly by delivery mode, breast-feeding, and antibiotic use.” The babies’ microbiome in early life contributes to the immune system’s response to vaccines, they said, “and sets the level of protection against certain infections in childhood.”

Study lead Professor Debby Bogaert, chair of pediatric medicine at the University of Edinburgh, said: “I think it is especially interesting that we identified several beneficial microbes to be the link between mode of delivery and vaccine responses. In the future, we may be able to supplement those bacteria to children born by C-section shortly after birth through, for example, mother-to-baby ‘fecal transplants’ or the use of specifically designed probiotics.”

First author Dr. Emma de Koff, a microbiology trainee at the Amsterdam University Medical Center, said: “We expected to find a link between the gut microbiome and the babies’ vaccine responses; however, we never thought to find the strongest effects in the first weeks of life.”

The findings “could help to inform conversations about C-sections between expectant mothers and their doctors,” commented the researchers, who said that they could also “shape the design of more tailored vaccination programs.” For example, in the future, vaccination schedules could be adjusted based on the method of delivery or analysis of the baby’s microbiome.

Potential to rectify immune system after Caesarean

Responding to the study, Professor Neil Mabbott, personal chair in immunopathology at the Roslin Institute of the University of Edinburgh, told the Science Media Centre: “This is a very interesting and important study. The authors show that infants delivered by a vaginal birth had higher responses to the two different types of vaccines against bacterial diseases, and this was associated with higher abundances of the potentially beneficial bacteria known as Bifidobacterium and E. coli in their intestines.”

He added: “This study raises the possibility that it may be possible to treat infants, especially Caesarean-delivered infants, with a bacterial supplement, or even a product produced by these beneficial bacteria, to help improve their immune systems, enhance their responses to certain vaccines and reduce their susceptibility to infections.”

The study raises important questions, he said, including whether the increased antibody levels from pneumococcal and meningococcal vaccinations following vaginal birth also lead to increased protection of the infants against infection or serious disease.

Sheena Cruickshank, immunologist and professor in biomedical sciences at the University of Manchester, England, commented: “It is now well established that the microbiome is important in immune development. In turn the mode of delivery and initial method of feeding is important in how the microbiome is first seeded in the baby.”

“However, other factors such as exposure to antibiotics and subsequent diet also play a role in how it then develops, making understanding the way the microbiome develops and changes quite complex. Microbes work as communities, and it can be difficult to determine whether changes in single species are important functionally. Breast milk also plays an important role in protecting the baby via transfer of maternal immunoglobulins, which will wane over a period of 6-12 months in the baby – thus ascertaining whether it’s the baby’s own Ig is challenging.

“Given the complexity of the multitude of interactions, it is important that this is accounted for, and group sizes are large enough to ensure data is robust. Whilst this is an interesting study that adds to our knowledge of how the microbiome develops and the possible implications for immune development, it is still very preliminary, and the small group sizes warrant a need for further studies to verify this in larger groups.”

She added: “We will need to understand whether possible impacts of maternal delivery and feeding on immune development or vaccine responses can be restored by, for example, manipulating the microbiome.”

Professor Kim Barrett, vice dean for research at the University of California, Davis School of Medicine, said that, while further research was needed to uncover if and how manipulation of the human microbiome following C-section births might improve vaccine efficacy, “the work should at least prompt additional consideration about an unintended consequence of the ever-increasing use of C-sections that may not be medically necessary.”

Dr. Marie Lewis, researcher in gut microbiota at the University of Reading, England, said: “We have known for quite some time that the mode of delivery is incredibly important when it comes to the type of bacteria which colonize our guts. We also know that our gut bacteria in early life drive the development of our immune system, and natural births are linked with reduced risks of developing inflammatory conditions, such as asthma. It is therefore perhaps not really surprising that mode of delivery is also linked to responses to vaccinations.”

“The really interesting part here is the extent to which our gut microbiotas are accessible and changeable, and this important work could pave the way for administration of probiotics and prebiotics to improve vaccine responses in Caesarean-born children.”

‘Tantalizing data’

Dr. Chrissie Jones, associate professor of pediatric infectious diseases at the University of Southampton, England, and education lead for the British Paediatric Allergy, Immunity, and Infection Group, said: “The link between method of delivery of the infant and the bacteria that live in the gut of the young infant has previously been shown. What is really interesting about this study is that, for the first time, the link between method of delivery (vaginal delivery vs. C-section), differences in bacterial communities of the gut, and differences in responses to vaccines is shown.”

“This study may give us fresh insights into the differences that we see in the amount of protective antibodies made after infant vaccination. It also gives us clues as to ways that we might be able to level the playing field for infants in the future – for instance, giving babies a safe cocktail of ‘friendly bacteria’ as a probiotic, or an additional dose of vaccine.”

“This study is the first step – it shows us a link or association but does not prove cause and effect that differences in the way babies are born alters how the immune system responds to vaccines. To prove this link we will need larger studies, but it is tantalizing data.”

The research was funded by Scotland’s Chief Scientist Office and the Netherlands Organisation for Scientific Research. DB received funding from OM Pharma and Sanofi. All of the authors declared no other conflicts of interest.

A version of this article first appeared on Medscape.com.

‘Key cause’ of type 2 diabetes identified

Understanding of the key mechanisms underlying the progression of type 2 diabetes has been advanced by new research from the University of Oxford, England, suggesting potential ways to “slow the seemingly inexorable decline in beta-cell function in T2D.”

The study in mice elucidated a “key cause” of T2D by showing that high blood glucose reprograms the metabolism of pancreatic beta-cells, helping to explain the progressive decline in their function in diabetes.

Scientists already knew that chronic hyperglycemia leads to a progressive decline in beta-cell function and, conversely, that the failure of pancreatic beta-cells to produce insulin results in chronically elevated blood glucose. However, the exact cause of beta-cell failure in T2D has remained unclear. T2D typically presents in later adult life, and by the time of diagnosis as much as 50% of beta-cell function has been lost.

In the United Kingdom, there are nearly 5 million people diagnosed with T2D, which costs the National Health Service some £10 billion annually.

Glucose metabolites, rather than glucose itself, drive failure of cells to release insulin

The new study, published in Nature Communications, used both an animal model of diabetes and in vitro culture of beta-cells in a high-glucose medium. In both cases the researchers showed, for the first time, that it is glucose metabolites, rather than glucose itself, that drive the failure of beta-cells to release insulin and are key to the progression of type 2 diabetes.

Senior researcher Frances Ashcroft, PhD, of the department of physiology, anatomy and genetics at the University of Oxford, said: “This suggests a potential way in which the decline in beta-cell function in T2D might be slowed or prevented.”

Blood glucose concentration is controlled within narrow limits, the team explained. When it is too low for more than a few minutes, consciousness is rapidly lost because the brain is starved of fuel. However, chronic elevation of blood glucose leads to the serious complications found in poorly controlled diabetes, such as retinopathy, nephropathy, peripheral neuropathy, and cardiac disease. Insulin, released from pancreatic beta-cells when blood glucose levels rise, is the only hormone that can lower the blood glucose concentration, and insufficient secretion results in diabetes. In T2D, the beta-cells are still present (unlike in T1D), but they have a reduced insulin content and the coupling between glucose and insulin release is impaired.

Vicious spiral of hyperglycemia and beta-cell damage

Previous work by the same team had shown that chronic hyperglycemia damages the ability of the beta-cell to produce insulin and to release it when blood glucose levels rise. This suggested that “prolonged hyperglycemia sets off a vicious spiral in which an increase in blood glucose leads to beta-cell damage and less insulin secretion – which causes an even greater increase in blood glucose and a further decline in beta-cell function,” the team explained.
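
The feedback loop the team describes can be made concrete with a toy simulation. The sketch below is not the study’s model; the equations and constants are invented purely to show how a small deficit in beta-cell function can feed on itself once glucose rises.

```python
# Toy illustration (not the study's model): a minimal positive-feedback
# loop for the "vicious spiral" described above. Hyperglycemia erodes
# beta-cell function; lower function means less insulin and hence higher
# glucose. All constants are arbitrary assumptions chosen to make the
# feedback visible, not physiological estimates.
def simulate(steps=10, beta_function=0.6):
    """Yield (glucose, beta_function) pairs over discrete time steps."""
    for _ in range(steps):
        # less functional beta-cell mass -> less insulin -> higher glucose
        glucose = 5.5 + 3.0 * (1.0 - beta_function)   # mmol/L, illustrative
        # glucose above ~5.5 mmol/L damages beta-cells, closing the loop
        damage = 0.05 * max(glucose - 5.5, 0.0)
        beta_function = max(beta_function - damage, 0.0)
        yield round(glucose, 2), round(beta_function, 3)

for glucose, beta in simulate():
    print(f"glucose={glucose} mmol/L, beta-cell function={beta}")
```

Running the loop shows glucose creeping upward as beta-cell function ratchets down at an accelerating rate, which is the qualitative behavior the researchers set out to interrupt.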

Lead researcher Elizabeth Haythorne, PhD, said: “We realized that we next needed to understand how glucose damages beta-cell function, so we can think about how we might stop it and so slow the seemingly inexorable decline in beta-cell function in T2D.”

In the new study, they showed that altered glycolysis in T2D occurs, in part, through marked up-regulation of mammalian target of rapamycin complex 1 (mTORC1), a protein complex involved in control of cell growth, dysregulation of which underlies a variety of human diseases, including diabetes. Up-regulation of mTORC1 led to changes in metabolic gene expression, oxidative phosphorylation, and insulin secretion. Furthermore, they demonstrated that reducing the rate at which glucose is metabolized and at which its metabolites build up could prevent the effects of chronic hyperglycemia and the ensuing beta-cell failure.

“High blood glucose levels cause an increased rate of glucose metabolism in the beta-cell, which leads to a metabolic bottleneck and the pooling of upstream metabolites,” the team said. “These metabolites switch off the insulin gene, so less insulin is made, as well as switching off numerous genes involved in metabolism and stimulus-secretion coupling. Consequently, the beta-cells become glucose blind and no longer respond to changes in blood glucose with insulin secretion.”

Blocking metabolic enzyme could maintain insulin secretion

The team attempted to block the first step in glucose metabolism, and thereby prevent the gene changes from taking place, by inhibiting glucokinase, the enzyme that catalyzes this step. They found that this could maintain glucose-stimulated insulin secretion even in the presence of chronic hyperglycemia.

“Our results support the idea that progressive impairment of beta-cell metabolism, induced by increasing hyperglycemia, speeds T2D development, and suggest that reducing glycolysis at the level of glucokinase may slow this progression,” they said.

Dr. Ashcroft said: “This is potentially a useful way to try to prevent beta-cell decline in diabetes. Because glucose metabolism normally stimulates insulin secretion, it was previously hypothesized that increasing glucose metabolism would enhance insulin secretion in T2D and glucokinase activators were trialled, with varying results. 

“Our data suggests that glucokinase activators could have an adverse effect and, somewhat counter-intuitively, that a glucokinase inhibitor might be a better strategy to treat T2D. Of course, it would be important to reduce glucose flux in T2D to that found in people without diabetes – and no further. But there is a very long way to go before we can tell if this approach would be useful for treating beta-cell decline in T2D. 

“In the meantime, the key message from our study if you have type 2 diabetes is that it is important to keep your blood glucose well controlled.”

This study was funded by the UK Medical Research Council, the Biotechnology and Biological Sciences Research Council, the John Fell Fund, and the Nuffield Benefaction for Medicine/Wellcome Institutional Strategic Support Fund. The authors declared no competing interests.

A version of this article first appeared on Medscape UK.

Retinal imaging can predict cardiovascular mortality

Cardiovascular disease (CVD) and mortality risk could be detected by routine retinal scanning, according to a new study using data from the UK Biobank Eye and Vision Consortium and the European Prospective Investigation into Cancer (EPIC)–Norfolk study.

The researchers, from St. George’s University of London, Cambridge University, Kingston University, Moorfields Eye Hospital, and University College London, developed a method of artificial intelligence (AI)–enabled imaging of the retina’s vascular network that could accurately predict CVD and death, without the need for blood tests or blood pressure measurement.

The system “paves the way for a highly effective, noninvasive screening test for people at medium to high risk of circulatory disease that doesn’t have to be done in a clinic,” they said. “In the general population it could be used as a noncontact form of systemic vascular health check, to triage those at medium-high risk of circulatory mortality for further clinical risk assessment and appropriate intervention.” Optometry specialists welcomed the prospect and hailed it as “an exciting development.”

Retinal vessels give an accurate early indicator of CVD

The study, published online in the British Journal of Ophthalmology, was based on previous research showing that the width of retinal arterioles and venules seen on retinal imaging may provide an accurate early indicator of CVD, whereas current risk prediction frameworks aren’t always reliable in identifying people who will go on to develop or die of circulatory diseases. 

The researchers developed a fully automated AI-enabled algorithm, called Quantitative Analysis of Retinal vessels Topology and Size (QUARTZ), to assess the potential of retinal vasculature imaging plus known risk factors to predict vascular health and death. They applied QUARTZ to retinal images from 88,052 UK Biobank participants aged 40-69 years, looking specifically at the width, vessel area, and degree of tortuosity of the retinal microvasculature, to develop prediction models for stroke, heart attack, and death from circulatory disease.
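
To make the modeling step concrete, here is a purely illustrative Python sketch of the general approach the paragraph describes: fitting a prediction model on vessel measurements plus basic risk factors. It does not reproduce QUARTZ; the feature names, synthetic data, and logistic-regression choice are all assumptions made for illustration (the study built dedicated models for stroke, MI, and circulatory death over years of follow-up, for which a survival model would be the more natural choice).

```python
# Purely illustrative sketch, NOT the QUARTZ pipeline: it assumes the
# vessel measurements (width, area, tortuosity) have already been
# extracted from the retinal images, and fits a simple classifier for
# circulatory death during follow-up on synthetic data. Feature names and
# all numbers are invented for demonstration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "arteriolar_width": rng.normal(80, 10, n),     # hypothetical units
    "venular_width": rng.normal(110, 12, n),
    "venular_tortuosity": rng.lognormal(0.0, 0.3, n),
    "age": rng.uniform(40, 69, n),
    "smoker": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the features, for demonstration only.
logit = (0.06 * (X["age"] - 55) + 0.8 * X["smoker"]
         - 0.02 * (X["arteriolar_width"] - 80) - 3.5)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```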

They then applied these models to the retinal images of 7,411 participants, aged 48-92 years, in the EPIC-Norfolk study, and compared the performance of QUARTZ with the widely used Framingham Risk Scores framework.

The participants in the two studies were tracked for an average of 7.7 and 9.1 years, respectively, during which time there were 327 circulatory disease deaths among 64,144 UK Biobank participants (average age, 56.8 years) and 201 circulatory deaths among 5,862 EPIC-Norfolk participants (average age, 67.6 years).

Vessel characteristics important predictors of CVD mortality

Results from the QUARTZ models showed that in all participants, arteriolar and venular width, venular tortuosity, and width variation were important predictors of circulatory disease death. In addition, in women, but not in men, arteriolar and venular area were separate factors that contributed to risk prediction.

Overall, the predictive models, based on age, smoking, and medical history (antihypertensive or cholesterol-lowering medication, diabetes, and history of stroke or MI) as well as retinal vasculature, captured between half and two-thirds of circulatory disease deaths in those most at risk, the authors said.

Compared with Framingham Risk Scores (FRS), the retinal vasculature (RV) models captured about 5% more cases of stroke in UK Biobank men, 8% more cases in UK Biobank women, and 3% more cases among EPIC-Norfolk men most at risk, but nearly 2% fewer cases among EPIC-Norfolk women. However, the team said that, while adding RV to FRS resulted in only marginal changes in prediction of stroke or MI, a simpler noninvasive risk score based on age, sex, smoking status, medical history, and RV “yielded comparable performance to FRS, without the need for blood sampling or BP measurement.”
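
As a side note on how a “captured” figure like those above can be computed, the helper below takes predicted risks and observed events and returns the share of all events falling in the top slice of predicted risk. The top-quartile cutoff is an assumption for illustration; the study defines its own high-risk groups.

```python
# Illustrative helper, continuing the sketch above: one way to compute a
# "share of deaths captured among those most at risk". The top-quartile
# cutoff is an assumption for illustration, not the study's threshold.
import numpy as np

def events_captured(risk_scores, events, top_fraction=0.25):
    """Fraction of all events falling in the top `top_fraction` of predicted risk."""
    risk_scores = np.asarray(risk_scores, dtype=float)
    events = np.asarray(events, dtype=bool)
    cutoff = np.quantile(risk_scores, 1.0 - top_fraction)
    return events[risk_scores >= cutoff].sum() / events.sum()

# With the model from the previous sketch:
# events_captured(model.predict_proba(X_test)[:, 1], y_test)
```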

Vasculometry low cost, noninvasive and with high street availability

They concluded: “Retinal imaging is established within clinic and hospital eye care and in optometric practices in the U.S. and U.K. AI-enabled vasculometry risk prediction is fully automated, low cost, noninvasive and has the potential for reaching a higher proportion of the population in the community because of ‘high street’ availability and because blood sampling or sphygmomanometry are not needed.

“[Retinal vasculature] is a microvascular marker, hence offers better prediction for circulatory mortality and stroke, compared with MI, which is more macrovascular, except perhaps in women. 

“In the general population it could be used as a noncontact form of systemic vascular health check, to triage those at medium-high risk of circulatory mortality for further clinical risk assessment and appropriate intervention.”

In the United Kingdom, for example, it could be included in the primary care NHS Health Check for those aged 41-74 years, they suggested. In addition, ‘high street’ retinal scanning could directly feed into primary medical services and help achieve greater screening coverage for older age groups, who are likely to attend an optometric practice for visual correction, especially with the onset of presbyopia. “This would offer a novel approach to identify those at high risk of circulatory mortality, which is not currently screened for,” the team said.

Test could help to identify high-risk individuals

In a linked editorial, Ify Mordi, MD, and Emanuele Trucco, MD, of the University of Dundee (Scotland), said that CVD remains a significant cause of mortality and morbidity and the most common cause of death worldwide, accounting for a quarter of all U.K. deaths – and its burden is increasing. “Identification of individuals at high risk is particularly important,” they said, but current clinical risk scores to identify those at risk “are unfortunately not perfect,” and so miss some of those who might benefit from preventative therapy.

“The retina is the only location that allows non-invasive direct visualisation of the vasculature, potentially providing a rich source of information.” In the new study, the measurements derived with the software tool, QUARTZ, were significantly associated with CVD, they said, with similar predictive performance to the Framingham clinical risk score.

“The results strengthen the evidence from several similar studies that the retina can be a useful and potentially disruptive source of information for CVD risk in personalised medicine.” However, a number of questions remain about how this knowledge could be integrated into clinical care, including who would conduct such a retinal screening program and who would act on the findings.

The editorial concluded: “What is now needed is for ophthalmologists, cardiologists, primary care physicians, and computer scientists to work together to design studies to determine whether using this information improves clinical outcome, and, if so, to work with regulatory bodies, scientific societies and healthcare systems to optimize clinical work flows and enable practical implementation in routine practice.”

‘Exciting development that could improve outcomes’

Asked to comment, Farah Topia, clinical and regulatory adviser at the Association of Optometrists, said: “This is an exciting development that could improve outcomes for many patients by enabling earlier detection of serious health risks. Patients attend optometric practice for a variety of reasons and this interaction could be used to a greater extent to help detect disease earlier. With optometrists available on every High Street, in the heart of communities, it’s an element of primary care that can be accessed quickly and easily, and optometrists are also already trained to have health and lifestyle discussions with patients.”

She added: “Retinal photographs are regularly taken when patients visit an optometrist, so being able to further enhance this process using AI is exciting.

“We look forward to seeing how this area develops and how optometrists can work together with other healthcare sectors to improve patient outcomes and ease the burden the NHS currently faces.” 

The study was funded by the Medical Research Council Population and Systems Medicine Board and the British Heart Foundation.

A version of this article first appeared on Medscape UK.

A ‘big breakfast’ diet affects hunger, not weight loss

Article Type
Changed
Mon, 09/12/2022 - 15:25

The old saying ‘breakfast like a king, lunch like a prince, and dine like a pauper’ is wrong, at least in terms of weight control, according to a new study, published in Cell Metabolism, from the University of Aberdeen. The idea that ‘front-loading’ calories early in the day might help dieting attempts was based on the belief that consuming the bulk of daily calories in the morning optimizes weight loss by burning calories more efficiently and quickly.

“There are a lot of myths surrounding the timing of eating and how it might influence either body weight or health,” said senior author Alexandra Johnstone, PhD, a researcher at the Rowett Institute, University of Aberdeen, who specializes in appetite control. “This has been driven largely by the circadian rhythm field. But we in the nutrition field have wondered how this could be possible. Where would the energy go? We decided to take a closer look at how time of day interacts with metabolism.”

Her team undertook a randomized crossover trial of 30 overweight and obese subjects recruited via social media ads. Participants – 16 men and 14 women – had a mean age of 51 years and a body mass index of 27-42 kg/m2, but were otherwise healthy. The researchers compared two calorie-restricted but isoenergetic weight loss diets: morning-loaded calories, with 45% of intake at breakfast, 35% at lunch, and 20% at dinner; and evening-loaded calories, with the inverse proportions of 20%, 35%, and 45% at breakfast, lunch, and dinner, respectively.

Each diet was followed for 4 weeks, with a controlled baseline diet in which calories were balanced throughout the day provided for 1 week at the outset and during a 1-week washout period between the two intervention diets. Each person’s calorie intake was fixed, referenced to their individual measured resting metabolic rate, to assess the effect on weight loss and energy expenditure of meal timing under isoenergetic intake. Both diets were designed to provide the same nutrient composition of 30% protein, 35% carbohydrate, and 35% fat.
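To illustrate the isoenergetic design, here is a minimal Python sketch that allocates a fixed daily calorie budget across the three meals for each arm. The 1.5 × RMR daily budget is an assumption for illustration only; the study fixed intake relative to each participant's measured resting metabolic rate, but no multiplier is quoted here.

    MORNING_LOADED = {"breakfast": 0.45, "lunch": 0.35, "dinner": 0.20}
    EVENING_LOADED = {"breakfast": 0.20, "lunch": 0.35, "dinner": 0.45}
    MACROS = {"protein": 0.30, "carbohydrate": 0.35, "fat": 0.35}  # identical in both arms

    def meal_plan(rmr_kcal, split, multiplier=1.5):
        """Split a fixed daily budget (multiplier x RMR; multiplier is assumed) across meals."""
        daily_kcal = rmr_kcal * multiplier
        return {meal: round(daily_kcal * share) for meal, share in split.items()}

    # Same daily total, opposite loading - only meal timing differs:
    print(meal_plan(1500, MORNING_LOADED))
    print(meal_plan(1500, EVENING_LOADED))

Because the two arms swap only the breakfast and dinner shares, any difference in outcomes is attributable to timing rather than to total intake.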

All food and beverages were provided, “making this the most rigorously controlled study to assess timing of eating in humans to date,” the team said, “with the aim of accounting for all aspects of energy balance.”
 

No optimum time to eat for weight loss

Results showed that both diets resulted in significant weight reduction at the end of each dietary intervention period, with subjects losing an average of just over 3 kg during each of the 4-week periods. However, there was no difference in weight loss between the morning-loaded and evening-loaded diets.

The relative size of breakfast and dinner – whether a person eats the largest meal early or late in the day – does not have an impact on metabolism, the team said. This challenges previous studies that have suggested that “evening eaters” – now a majority of the U.K. population – have a greater likelihood of gaining weight and greater difficulty in losing it.

“Participants were provided with all their meals for 8 weeks and their energy expenditure and body composition monitored for changes, using gold standard techniques at the Rowett Institute,” Dr. Johnstone said. “The same number of calories was consumed by volunteers at different times of the day, with energy expenditure measured using analysis of urine.

“This study is important because it challenges the previously held belief that eating at different times of the day leads to differential energy expenditure. The research shows that under weight loss conditions there is no optimum time to eat in order to manage weight, and that change in body weight is determined by energy balance.”
 

Meal timing reduces hunger but does not affect weight loss

However, the research also revealed that when subjects consumed the morning-loaded (big breakfast) diet, they reported feeling significantly less hungry later in the day. “Morning-loaded intake may assist with compliance to weight loss regime, through a greater suppression of appetite,” the authors said, adding that this “could foster easier weight loss in the real world.”

“The participants reported that their appetites were better controlled on the days they ate a bigger breakfast and that they felt satiated throughout the rest of the day,” Dr. Johnstone said.

“We know that appetite control is important to achieve weight loss, and our study suggests that those consuming the most calories in the morning felt less hungry, in contrast to when they consumed more calories in the evening period.

“This could be quite useful in the real-world environment, versus in the research setting that we were working in.”
 

‘Major finding’ for chrono-nutrition

Coauthor Jonathan Johnston, PhD, professor of chronobiology and integrative physiology at the University of Surrey, said: “This is a major finding for the field of meal timing (‘chrono-nutrition’) research. Many aspects of human biology change across the day and we are starting to understand how this interacts with food intake.

“Our new research shows that, in weight loss conditions, the size of breakfast and dinner regulates our appetite but not the total amount of energy that our bodies use,” Dr. Johnston said. “We plan to build upon this research to improve the health of the general population and specific groups, e.g., shift workers.”

It’s possible that shift workers could have different metabolic responses, due to the disruption of their circadian rhythms, the team said. Dr. Johnstone noted that this type of experiment could also be applied to the study of intermittent fasting (time-restricted eating), to help determine the best time of day for people to consume their calories.

“One thing that’s important to note is that when it comes to timing and dieting, there is not likely going to be one diet that fits all,” she concluded. “Figuring this out is going to be the future of diet studies, but it’s something that’s very difficult to measure.”
 

Great variability in individual responses to diets

Commenting on the study, Helena Gibson-Moore, RNutr (PH), nutrition scientist and spokesperson for the British Nutrition Foundation, said: “With about two in three adults in the UK either overweight or obese, it’s important that research continues to look into effective strategies for people to lose weight.” She described the study as “interesting,” and a challenge to previous research supporting “front-loading” calories earlier in the day as more effective for weight loss.

“However, whilst in this study there were no differences in weight loss, participants did report significantly lower hunger when eating a higher proportion of calories in the morning,” she said. “Therefore, for people who prefer having a big breakfast this may still be a useful way to help compliance to a weight loss regime through feeling less hungry in the evening, which in turn may lead to a reduced calorie intake later in the day.

“However, research has shown that as individuals we respond to diets in different ways. For example, a study comparing weight loss after a healthy low-fat diet vs. a healthy low-carbohydrate diet showed similar mean weight loss at 12 months, but there was large variability in the personal responses to each diet with some participants actually gaining weight.

“Differences in individual responses to dietary exposures have led to research into a personalized nutrition approach, which requires collection of personal data and then provides individualized advice based on this.” Research has suggested that personalized dietary and physical activity advice was more effective than conventional generalized advice, she said.

“The bottom line for effective weight loss is that it is clear there is ‘no one size fits all’ approach and different weight loss strategies can work for different people but finding effective strategies for long-term sustainability of weight loss continues to be the major challenge. There are many factors that impact successful weight management and for some people it may not just be what we eat that is important, but also how and when we eat.”

This study was funded by the Medical Research Council and the Scottish Government, Rural and Environment Science and Analytical Services Division.

A version of this article first appeared on Medscape.co.uk.

Fish in pregnancy not dangerous after all, says new study

Article Type
Changed
Fri, 09/09/2022 - 12:00


Children born very prematurely at higher risk of struggling in secondary school

Article Type
Changed
Fri, 08/19/2022 - 12:43

A new study of educational attainment among U.K. primary and secondary schoolchildren born prematurely now provides some reassurance about the longer-term outcomes for many of these children.

For the study, published in the open-access journal PLOS ONE, researchers from the University of Oxford with colleagues from the University of Leicester and City University, London, used data from 11,695 children in the population-based UK Millennium Cohort Study, which included children born in England from Sept. 1, 2000 to Aug. 31, 2001. They analyzed data on educational attainment in primary school, at age 11, for 6,950 pupils and in secondary school, at age 16, for 7,131 pupils.

Preterm birth is a known risk factor for developmental impairment, lower educational performance and reduced academic attainment, with the impact proportional to the degree of prematurity. Not every child born prematurely will experience learning or developmental challenges, but studies of children born before 34 weeks gestation have shown that they are more likely to have cognitive difficulties, particularly poorer reading and maths skills, at primary school, and to have special educational needs by the end of primary education.
 

Elevated risk for all preterm children in primary school

Until now, few studies have followed these children through secondary school or examined the full spectrum of gestational ages at birth. Yet as neonatal care advances and more premature babies survive, an average primary school class in the United Kingdom now includes two children born preterm.

Among the primary school children overall, 17.7% had not achieved their expected level in English and mathematics at age 11. Children born very preterm (before 32 weeks’ gestation) or moderately preterm (at 32-33 weeks) were more than twice as likely as full-term children to fail to meet these benchmarks, with adjusted relative risks (RRs) of 2.06 and 2.13, respectively. Those born late preterm, at 34-36 weeks, or early term, at 37-38 weeks, were at lesser risk, with RRs of 1.18 and 1.21, respectively.
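
(For context, a relative risk is the ratio of the outcome rate in one group to that in a comparison group. The minimal sketch below computes an unadjusted RR from a hypothetical 2×2 table; the counts are invented, and the study’s published RRs were adjusted for covariates, so they would not come straight from such a table.)

```python
# Minimal sketch: computing an unadjusted relative risk (RR) from a
# hypothetical 2x2 table. All counts are invented for illustration; the
# study's RRs of 2.06 and 2.13 were adjusted for covariates, so they
# would not fall straight out of a table like this.
def relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """RR = risk of the outcome in the exposed group divided by the
    risk in the unexposed (comparison) group."""
    risk_exposed = events_exposed / n_exposed
    risk_unexposed = events_unexposed / n_unexposed
    return risk_exposed / risk_unexposed

# Hypothetical: 36 of 100 very preterm children vs. 1,070 of 6,000
# full-term children not reaching the expected level at age 11.
rr = relative_risk(36, 100, 1_070, 6_000)
print(f"Unadjusted RR: {rr:.2f}")  # -> 2.02
```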

By the end of secondary school, 45.2% of pupils had not passed the benchmark of at least five General Certificate of Secondary Education (GCSE) examinations, including English and mathematics. The RR for children born very preterm, compared with full-term children, was 1.26, with 60% of students in this group failing to achieve five GCSEs. However, children born at gestations between 32 and 38 weeks were not at elevated risk, compared with children born at full term.
 

Risk persists to secondary level only for very preterm children

A similar pattern was seen when English and mathematics were analyzed separately: children born at 32 weeks or above were at no additional risk of not passing, whereas pupils born very preterm had adjusted RRs of 1.33 for not passing English and 1.42 for not passing maths, compared with full-term children.

“All children born before full term are more likely to have poorer attainment during primary school, compared with children born full term (39-41 weeks), but only children born very preterm (before 32 weeks) remain at risk of poor attainment at the end of secondary schooling,” the researchers concluded.

“Further studies are needed in order to confirm this result,” they acknowledged. They suggested their results could be explained by catch-up in academic attainment among children born moderately or late preterm or at early term. However, “very preterm children appear to be a high-risk group with persistent difficulties in terms of educational outcomes,” they said, noting that even this risk was of lower magnitude than the reduced attainment they found among pupils eligible for free school meals, that is, those from disadvantaged socioeconomic backgrounds.

Extra educational support needed

The researchers concluded: “Children born very preterm may benefit from screening for cognitive and language difficulties prior to school entry to guide the provision of additional support during schooling.” In addition, those born very preterm “may require additional educational support throughout compulsory schooling.”

Commenting on the study, Caroline Lee-Davey, chief executive of premature baby charity Bliss, told this news organization: “Every child who is born premature is unique, and their development and achievements will be individual to them. However, these new findings are significant and add to our understanding of how prematurity is related to longer-term educational attainment, particularly for children who were born very preterm.”

“Most importantly, they highlight the need for all children who were born premature – and particularly those who were born before 32 weeks – to have access to early support. This means ensuring all eligible babies receive a follow-up check at 2 and 4 years as recommended by NICE and for early years and educational professionals to be aware of the relationship between premature birth and development.”

“We know how concerning these findings might be for families with babies and very young children right now. That’s why Bliss has developed a suite of information to support families as they make choices about their child’s education.”

A version of this article first appeared on Medscape UK.


Tocolytic benefits for preterm birth outweigh risks

Article Type
Changed
Fri, 08/12/2022 - 11:18

New research from the University of Birmingham, England, in collaboration with the World Health Organization, shows that tocolytic drugs used to delay preterm birth, and thus avert the ensuing associated mortality and morbidity, are all “probably effective in delaying preterm birth compared with placebo or no treatment.”

Expanded use of the drugs would be a safe means to reduce the global burden of neonatal death, the researchers suggest. Coauthor Victoria Hodgetts Morton, BMedSci, NIHR clinical lecturer in obstetrics at the University of Birmingham, said: “Preterm birth is the most common reason why a newborn baby may die, and the leading cause of death in children under 5 years of age.

“Tocolytics aim to delay preterm birth and allow time for the women to receive medicines that can help with baby’s breathing and feeding if born preterm, and medicines that lower the chance of cerebral palsy of the infant. Crucially, a short delay in preterm birth can enable women to reach specialist care.”
 

Network meta-analysis drew on 122 trials

The new paper, published in Cochrane Reviews, aimed to find out which tocolytic was most effective in delaying preterm birth and safest, with the fewest side effects. Researchers brought together data from 122 randomized clinical trials in a network meta-analysis.

Unlike conventional Cochrane Reviews, this type of review simultaneously pools all direct and indirect evidence into a single coherent analysis. Indirect evidence is obtained by inferring the relative effectiveness of two competing drugs through a common comparator, even when the two have not been compared directly. The method also enables researchers to calculate the probability that each competing drug is the most effective with the fewest side effects, which allowed the researchers to rank the available tocolytic drugs.
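
(The core of that inference can be shown in a few lines. The sketch below implements the classic Bucher adjusted indirect comparison – the simplest building block of a network meta-analysis – with invented numbers; the review’s actual model pooled direct and indirect evidence across all trials simultaneously.)

```python
# Sketch of an adjusted indirect comparison (the Bucher method), the
# basic building block that network meta-analysis generalizes. All
# numbers below are invented; the review's own model pooled direct and
# indirect evidence across all 122 trials simultaneously.
import math

def indirect_rr(rr_ac, se_log_ac, rr_bc, se_log_bc):
    """Infer A vs. B from trials of A vs. C and B vs. C.
    On the log scale, log RR_AB = log RR_AC - log RR_BC, and the
    variances of the two independent direct estimates add."""
    log_rr_ab = math.log(rr_ac) - math.log(rr_bc)
    se_ab = math.sqrt(se_log_ac**2 + se_log_bc**2)
    lo = math.exp(log_rr_ab - 1.96 * se_ab)
    hi = math.exp(log_rr_ab + 1.96 * se_ab)
    return math.exp(log_rr_ab), lo, hi

# Hypothetical direct estimates against a common comparator (placebo):
rr, lo, hi = indirect_rr(rr_ac=1.16, se_log_ac=0.05, rr_bc=1.12, se_log_bc=0.06)
print(f"Indirect RR, A vs. B: {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```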

The trials, published between 1966 and 2021, involved 13,697 women across 39 high-, middle-, and low-income countries. The researchers looked for trials involving women with live fetus(es) who presented with signs and symptoms of preterm labor, defined as uterine activity with or without ruptured membranes; ruptured membranes, with or without cervical dilatation or shortening; or biomarkers consistent with a high risk of preterm birth.

Trials were eligible if they involved tocolytic drugs of any dosage, route, or regimen for delaying preterm birth, and compared them with other tocolytic drugs, placebo, or no treatment.

The team reported that overall, the evidence varied widely in quality, and their confidence in the effect estimates ranged from very low to high. Only 25 of the 122 studies (20%) were judged to be at “low risk of bias.” The effectiveness of different drugs was less clear in some of the studies considered.

Compared with the use of placebo or no tocolytic treatment, “all tocolytic drug classes assessed and their combinations were probably or possibly effective in delaying preterm birth for 48 hours, and 7 days,” the researchers found. “The most effective tocolytics for delaying preterm birth by 48 hours and 7 days were the nitric oxide donors, calcium channel blockers, oxytocin receptor antagonists, and combination tocolytics.”

Their figures showed the following (a short worked example after the list translates one of these risk ratios into absolute terms):

  • Betamimetics are possibly effective in delaying preterm birth by 48 hours (risk ratio [RR] 1.12), and 7 days (RR 1.14).
  • Calcium channel blockers (for example, nifedipine) may be effective in delaying preterm birth by 48 hours (RR 1.16), are probably effective in delaying preterm birth by 7 days (RR 1.15), and may prolong pregnancy by a mean of 5 days (95% confidence interval, 0.1 to 9.2).
  • Magnesium sulphate is probably effective in delaying preterm birth by 48 hours (RR 1.12).
  • Oxytocin receptor antagonists (e.g., atosiban) are effective in delaying preterm birth by 7 days (RR 1.18), are probably effective in delaying preterm birth by 48 hours (RR 1.13), and possibly prolong pregnancy by an average of 10 days (95% confidence interval, 2.3 to 16.7).
  • Nitric oxide donors (e.g., glyceryl trinitrate) are probably effective in delaying preterm birth by 48 hours (RR 1.17), and 7 days (RR 1.18).
  • Cyclooxygenase-2 inhibitors (e.g., indomethacin) may be effective in delaying preterm birth by 48 hours (RR 1.11).
  • Combination tocolytics – the most common was magnesium sulphate with betamimetics – are probably effective in delaying preterm birth by 48 hours (RR 1.17), and 7 days (RR 1.19).
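
(To make these risk ratios concrete, the minimal sketch below converts the 48-hour RR for calcium channel blockers into an absolute difference under an assumed baseline; the 55% baseline is illustrative only and is not a figure from the review.)

```python
# Sketch: translating one of the risk ratios above into absolute terms.
# The 55% baseline is an assumption for illustration only; it is not a
# figure from the review. Here the outcome is "birth delayed by at
# least 48 hours," so an RR above 1 indicates benefit.
baseline = 0.55   # assumed: 55% of untreated women still undelivered at 48 h
rr_ccb = 1.16     # calcium channel blockers, 48-hour RR from the list above

treated = baseline * rr_ccb
print(f"No tocolytic:   {baseline:.0%} undelivered at 48 h (assumed baseline)")
print(f"With treatment: {treated:.0%} undelivered at 48 h")
print(f"Absolute difference: {treated - baseline:.1%} more pregnancies prolonged")
```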

Uncertain mortality outcomes and a wide range of adverse effects

However, the effects of tocolytic use on neonatal and perinatal mortality, and on safety outcomes such as maternal and neonatal infection, were “uncertain,” the researchers said: for these outcomes, the estimates were compatible with a wide range of effects in either direction when compared with placebo or no tocolytic treatment.

“All tocolytics were compatible with a wide range of serious adverse effects (trials including 6,983 women) when compared with placebo or no treatment,” the researchers said. Betamimetics and combination tocolytics had the most side effects and were most likely to lead to cessation of treatment (results from 8,122 women).

Overall, “the findings show that the benefits of these drugs outweigh any risks associated with unwanted side effects,” said first author Amie Wilson, PhD, research fellow in global maternal health at the University of Birmingham. “These treatments are leading to a significant reduction in the number of deadly preterm births, and we now need to further understand the effectiveness of tocolytics for specific groups depending on pregnancy length,” she said.

“Our previous research has led to the improvement of guidelines for tocolytic drug use to delay preterm birth in the U.K. Knowing that this paper helped to inform the forthcoming recommendations of the World Health Organisation on the use of tocolytics, we hope that many more women around the globe will have access to these drugs, and have healthier births.”

A version of this article first appeared on Medscape UK.


People really can get ‘hangry’ when hungry

Article Type
Changed
Wed, 07/20/2022 - 14:40

The notion that people get ‘hangry’ – irritable and short-tempered when they’re hungry – is such an established part of modern folklore that the word has even been added to the Oxford English Dictionary. Although experimental studies in the past have shown that low blood glucose levels increase impulsivity, anger, and aggression, there has been little solid evidence that this translates to real-life settings.

Now new research has confirmed that the phenomenon does really exist in everyday life. The study, published in the journal PLOS ONE, is the first to investigate how hunger affects people’s emotions on a day-to-day level. Lead author Viren Swami, professor of social psychology at Anglia Ruskin University, Cambridge, England, said: “Many of us are aware that being hungry can influence our emotions, but surprisingly little scientific research has focused on being ‘hangry’.”

He and coauthors from Karl Landsteiner University of Health Sciences in Krems an der Donau, Austria, recruited 64 participants from Central Europe who completed a 21-day experience sampling phase, in which they were prompted to report their feelings on a smartphone app five times a day. At each prompt, they reported their levels of hunger, anger, irritability, pleasure, and arousal on a visual analog scale.

Participants were on average 29.9 years old (range, 18-60), predominantly (81.3%) women, and had a mean body mass index of 23.8 kg/m² (range, 15.8-36.5 kg/m²).

Anger was rated on a 5-point scale, but the team explained that the effects of hunger are unlikely to be unique to anger per se, so they also asked about experiences of irritability and, to obtain a more holistic view of emotionality, about pleasure and arousal, as indexed using Russell’s affect grid.

They also asked about eating behaviors over the previous 3 weeks, including frequency of main meals, snacking behavior, healthy eating, feeling hungry, and sense of satiety, and about dietary behaviors including restrictive eating, emotionally induced eating, and externally determined eating behavior.

Analysis of the resulting total of 9,142 responses showed that higher levels of self-reported hunger were associated with greater feelings of anger and irritability, and with lower levels of pleasure. These findings remained significant after accounting for participants’ sex, age, body mass index, dietary behaviors, and trait anger. However, associations with arousal were not significant.
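
(Experience-sampling data of this kind – repeated prompts nested within participants – are typically analyzed with a multilevel model. The sketch below fits one such model to simulated data; the column names, effect sizes, and simplified covariate set are hypothetical and do not reproduce the authors’ actual analysis.)

```python
# Minimal sketch of the kind of multilevel model used for
# experience-sampling data: repeated prompts (level 1) nested within
# participants (level 2). The data below are simulated; names and
# effect sizes are hypothetical, not the authors' model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_people, n_prompts = 64, 100
pid = np.repeat(np.arange(n_people), n_prompts)
person_effect = rng.normal(0, 1, n_people)[pid]      # between-person variation
hunger = rng.uniform(0, 100, n_people * n_prompts)   # visual-analog hunger score
anger = 10 + 0.2 * hunger + 5 * person_effect + rng.normal(0, 10, len(pid))

df = pd.DataFrame({"pid": pid, "hunger": hunger, "anger": anger})

# Random intercept per participant; hunger as a fixed effect. The real
# analysis also adjusted for age, sex, BMI, dietary behavior, trait anger.
model = smf.mixedlm("anger ~ hunger", data=df, groups=df["pid"])
result = model.fit()
print(result.summary())
```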

The authors commented that the use of the app allowed data collection to take place in subjects’ everyday environments, such as their workplace and at home. “These results provide evidence that everyday levels of hunger are associated with negative emotionality and support the notion of being ‘hangry.’ ”
 

‘Substantial’ effects

“The effects were substantial,” the team said, “even after taking into account demographic factors” such as age and sex, body mass index, dietary behavior, and individual personality traits. Hunger was associated with 37% of the variance in irritability, 34% of the variance in anger, and 38% of the variance in pleasure recorded by the participants.

The research also showed that the negative emotions – irritability, anger, and unpleasantness – were linked both with day-to-day fluctuations in hunger and with residual levels of hunger, measured as averages over the 3-week period.

The authors said their findings “suggest that the experience of being hangry is real, insofar as hunger was associated with greater anger and irritability, and lower pleasure, in our sample over a period of 3 weeks.

“These results may have important implications for understanding everyday experiences of emotions, and may also assist practitioners to more effectively ensure productive individual behaviors and interpersonal relationships (for example, by ensuring that no one goes hungry).”

Although the majority of participants (55%) said they paid attention to hunger pangs, only 23% said that they knew when they were full and then stopped eating, whereas 63% said they could tell when they were full but sometimes continued to eat. Few (4.7%) people said they could not tell when they were full and therefore oriented their eating based on the size of the meal, but 9% described frequent overeating because of not feeling satiated, and 13% stated they ate when they were stressed, upset, angry, or bored.

Professor Swami said: “Ours is the first study to examine being ‘hangry’ outside of a lab. By following people in their day-to-day lives, we found that hunger was related to levels of anger, irritability, and pleasure.

“Although our study doesn’t present ways to mitigate negative hunger-induced emotions, research suggests that being able to label an emotion can help people to regulate it, such as by recognizing that we feel angry simply because we are hungry. Therefore, greater awareness of being ‘hangry’ could reduce the likelihood that hunger results in negative emotions and behaviors in individuals.”

A version of this article first appeared on Medscape UK.

Publications
Topics
Sections

The notion that people get ‘hangry’ – irritable and short-tempered when they’re hungry – is such an established part of modern folklore that the word has even been added to the Oxford English Dictionary. Although experimental studies in the past have shown that low blood glucose levels increase impulsivity, anger, and aggression, there has been little solid evidence that this translates to real-life settings.

Now new research has confirmed that the phenomenon does really exist in everyday life. The study, published in the journal PLOS ONE, is the first to investigate how hunger affects people’s emotions on a day-to-day level. Lead author Viren Swami, professor of social psychology at Anglia Ruskin University, Cambridge, England, said: “Many of us are aware that being hungry can influence our emotions, but surprisingly little scientific research has focused on being ‘hangry’.”

He and coauthors from Karl Landsteiner University of Health Sciences in Krems an der Donau, Austria, recruited 64 participants from Central Europe who completed a 21-day experience sampling phase, in which they were prompted to report their feelings on a smartphone app five times a day. At each prompt, they reported their levels of hunger, anger, irritability, pleasure, and arousal on a visual analog scale.

Participants were on average 29.9 years old (range = 18-60), predominantly (81.3%) women, and had a mean body mass index of 23.8 kg/m2 (range 15.8-36.5 kg/m2).

Anger was rated on a 5-point scale but the team explained that the effects of hunger are unlikely to be unique to anger per se, so they also asked about experiences of irritability and, in order to obtain a more holistic view of emotionality, also about pleasure and arousal, as indexed using Russell’s affect grid.

They also asked about eating behaviors over the previous 3 weeks, including frequency of main meals, snacking behavior, healthy eating, feeling hungry, and sense of satiety, and about dietary behaviors including restrictive eating, emotionally induced eating, and externally determined eating behavior.

Analysis of the resulting total of 9,142 responses showed that higher levels of self-reported hunger were associated with greater feelings of anger and irritability, and with lower levels of pleasure. These findings remained significant after accounting for participants’ sex, age, body mass index, dietary behaviors, and trait anger. However, associations with arousal were not significant.

The authors commented that the use of the app allowed data collection to take place in subjects’ everyday environments, such as their workplace and at home. “These results provide evidence that everyday levels of hunger are associated with negative emotionality and supports the notion of being ‘hangry.’ ”
 

‘Substantial’ effects

“The effects were substantial,” the team said, “even after taking into account demographic factors” such as age and sex, body mass index, dietary behavior, and individual personality traits. Hunger was associated with 37% of the variance in irritability, 34% of the variance in anger, and 38% of the variance in pleasure recorded by the participants.

The research also showed that the negative emotions – irritability, anger, and unpleasantness – were caused by both day-to-day fluctuations in hunger and residual levels of hunger measured by averages over the 3-week period.

The authors said their findings “suggest that the experience of being hangry is real, insofar as hunger was associated with greater anger and irritability, and lower pleasure, in our sample over a period of 3 weeks.

“These results may have important implications for understanding everyday experiences of emotions, and may also assist practitioners to more effectively ensure productive individual behaviors and interpersonal relationships (for example, by ensuring that no one goes hungry).”

Although the majority of participants (55%) said they paid attention to hunger pangs, only 23% said that they knew when they were full and then stopped eating, whereas 63% said they could tell when they were full but sometimes continued to eat. Few (4.7%) people said they could not tell when they were full and therefore oriented their eating based on the size of the meal, but 9% described frequent overeating because of not feeling satiated, and 13% stated they ate when they were stressed, upset, angry, or bored.

Professor Swami said: “Ours is the first study to examine being ‘hangry’ outside of a lab. By following people in their day-to-day lives, we found that hunger was related to levels of anger, irritability, and pleasure.

“Although our study doesn’t present ways to mitigate negative hunger-induced emotions, research suggests that being able to label an emotion can help people to regulate it, such as by recognizing that we feel angry simply because we are hungry. Therefore, greater awareness of being ‘hangry’ could reduce the likelihood that hunger results in negative emotions and behaviors in individuals.”

A version of this article first appeared on Medscape UK.

The notion that people get ‘hangry’ – irritable and short-tempered when they’re hungry – is such an established part of modern folklore that the word has even been added to the Oxford English Dictionary. Although experimental studies in the past have shown that low blood glucose levels increase impulsivity, anger, and aggression, there has been little solid evidence that this translates to real-life settings.

Now new research has confirmed that the phenomenon does really exist in everyday life. The study, published in the journal PLOS ONE, is the first to investigate how hunger affects people’s emotions on a day-to-day level. Lead author Viren Swami, professor of social psychology at Anglia Ruskin University, Cambridge, England, said: “Many of us are aware that being hungry can influence our emotions, but surprisingly little scientific research has focused on being ‘hangry’.”

He and coauthors from Karl Landsteiner University of Health Sciences in Krems an der Donau, Austria, recruited 64 participants from Central Europe who completed a 21-day experience sampling phase, in which they were prompted to report their feelings on a smartphone app five times a day. At each prompt, they reported their levels of hunger, anger, irritability, pleasure, and arousal on a visual analog scale.

Participants were on average 29.9 years old (range = 18-60), predominantly (81.3%) women, and had a mean body mass index of 23.8 kg/m2 (range 15.8-36.5 kg/m2).

Anger was rated on a 5-point scale but the team explained that the effects of hunger are unlikely to be unique to anger per se, so they also asked about experiences of irritability and, in order to obtain a more holistic view of emotionality, also about pleasure and arousal, as indexed using Russell’s affect grid.

They also asked about eating behaviors over the previous 3 weeks, including frequency of main meals, snacking behavior, healthy eating, feeling hungry, and sense of satiety, and about dietary behaviors including restrictive eating, emotionally induced eating, and externally determined eating behavior.

Analysis of the resulting total of 9,142 responses showed that higher levels of self-reported hunger were associated with greater feelings of anger and irritability, and with lower levels of pleasure. These findings remained significant after accounting for participants’ sex, age, body mass index, dietary behaviors, and trait anger. However, associations with arousal were not significant.

The authors commented that the use of the app allowed data collection to take place in subjects’ everyday environments, such as their workplace and at home. “These results provide evidence that everyday levels of hunger are associated with negative emotionality and supports the notion of being ‘hangry.’ ”
 

‘Substantial’ effects

“The effects were substantial,” the team said, “even after taking into account demographic factors” such as age and sex, body mass index, dietary behavior, and individual personality traits. Hunger was associated with 37% of the variance in irritability, 34% of the variance in anger, and 38% of the variance in pleasure recorded by the participants.

The research also showed that the negative emotions – irritability, anger, and unpleasantness – were caused by both day-to-day fluctuations in hunger and residual levels of hunger measured by averages over the 3-week period.

The authors said their findings “suggest that the experience of being hangry is real, insofar as hunger was associated with greater anger and irritability, and lower pleasure, in our sample over a period of 3 weeks.

“These results may have important implications for understanding everyday experiences of emotions, and may also assist practitioners to more effectively ensure productive individual behaviors and interpersonal relationships (for example, by ensuring that no one goes hungry).”

Although the majority of participants (55%) said they paid attention to hunger pangs, only 23% said that they knew when they were full and then stopped eating, whereas 63% said they could tell when they were full but sometimes continued to eat. Few (4.7%) people said they could not tell when they were full and therefore oriented their eating based on the size of the meal, but 9% described frequent overeating because of not feeling satiated, and 13% stated they ate when they were stressed, upset, angry, or bored.

Professor Swami said: “Ours is the first study to examine being ‘hangry’ outside of a lab. By following people in their day-to-day lives, we found that hunger was related to levels of anger, irritability, and pleasure.

“Although our study doesn’t present ways to mitigate negative hunger-induced emotions, research suggests that being able to label an emotion can help people to regulate it, such as by recognizing that we feel angry simply because we are hungry. Therefore, greater awareness of being ‘hangry’ could reduce the likelihood that hunger results in negative emotions and behaviors in individuals.”

A version of this article first appeared on Medscape UK.


Women are not being warned that anesthetic may reduce birth pill efficacy

Article Type
Changed
Tue, 07/05/2022 - 15:24

The effectiveness of hormonal contraceptives, including the pill and mini-pill, may be compromised by sugammadex, a drug widely used in anesthesia for reversing neuromuscular blockade induced by rocuronium or vecuronium.

Yet women are not routinely informed that the drug may make their contraception less effective, delegates were told at Euroanaesthesia, the annual meeting of the European Society of Anaesthesiology and Intensive Care, held in Milan.

New research presented at the meeting supports the authors’ experience that “robust methods for identifying at-risk patients and informing them of the associated risk of contraceptive failures is not common practice across anesthetic departments within the United Kingdom, and likely further afield.”

This is according to a survey of almost 150 anesthetic professionals, including consultants, junior doctors, and physician assistants, working at University College London Hospitals NHS Foundation Trust.

Dr. Neha Passi, Dr. Matt Oliver, and colleagues at the trust’s department of anesthesiology sent out a seven-question survey to their 150 colleagues and received 82 responses; 94% of respondents said they were aware of the risk of contraceptive failure with sugammadex. However, 70% admitted that they do not routinely discuss this risk with patients who have received the drug.

Risk with all forms of hormonal contraceptive

Yet current guidance is to inform women of childbearing age that they have received the drug and, because of the increased risk of contraceptive failure, to advise those taking oral hormonal contraceptives to follow the missed-pill advice in the leaflet that comes with their contraceptives. It also counsels clinicians to advise women using other types of hormonal contraceptive to use an additional nonhormonal method of contraception for 7 days.

The study authors also carried out a retrospective audit of sugammadex use in the trust. During the 6 weeks covered by the audit, 234 patients received sugammadex, of whom 65 (28%) were women of childbearing age. Of these, 17 had a medical history that meant they were not at risk of pregnancy; the remaining 48 should have received advice on the risks of contraceptive failure. However, there was no record in the medical notes that such advice had been given to any of the 48 at-risk women.

Sugammadex is the only anesthetic drug known to have this effect: it interacts with progesterone and so may reduce the effectiveness of hormonal contraceptives, including the progesterone-only pill, the combined pill, vaginal rings, implants, and intrauterine devices.

Dr. Passi said: “It is concerning that we are so seldom informing patients of the risk of contraceptive failure following sugammadex use.

“Use of sugammadex is expected to rise as it becomes cheaper in the future, and ensuring that women receiving this medicine are aware it may increase their risk of unwanted pregnancy must be a priority.”

She added: “It is important to note, however, that most patients receiving an anesthetic do not need a muscle relaxant and that sugammadex is one of several drugs available to reverse muscle relaxation.”

Dr. Oliver said: “We only studied one hospital trust, but we expect the results to be similar elsewhere in the U.K.”

In response to their findings, the study’s authors have created patient information leaflets and letters and programmed the trust’s electronic patient record system to identify “at-risk” patients and deliver electronic prompts to the anesthetists caring for them in the perioperative period.

A version of this article first appeared on Medscape UK.


Five-year cervical screening interval safe for HPV-negative women

Article Type
Changed
Thu, 06/02/2022 - 14:33

A 5-year cervical screening interval is as safe and effective for women who test negative for human papillomavirus (HPV) as a 3-year interval, according to a new ‘real-life’ study led by King’s College London (KCL) with researchers from the University of Manchester and the NHS, on behalf of the HPV pilot steering group.

The study, published in The BMJ, used data from the HPV screening pilot to assess rates of detection of high-grade cervical intraepithelial neoplasia (CIN3+) and of cervical cancer following a negative HPV test. It confirmed that 5-yearly screening prevents as many cancers as screening at 3-year intervals, even in women who are not vaccinated against HPV.

Change to primary HPV testing since 2019 

Before 2019, the NHS cervical screening program performed cytology first, testing for HPV only if abnormalities were found. In 2019, following the reporting of early results of the HPV pilot by the same researchers, the program in England switched to testing for HPV first, on the grounds that, because HPV infection precedes the development of abnormal cells, HPV testing would identify more women at risk of cervical cancer.

Following the switch to primary HPV testing, the same screening intervals were retained: every 3 years for those aged 24-49 years and every 5 years for women aged 50-64 years (or 3 years for those who tested positive). However, the National Screening Committee had recommended that invitations should move from 3 to 5 years for those in the under-50 age group found not to have high-risk HPV at their routine screening test.

For the latest study, funded by Cancer Research UK, the steering group researchers analyzed details for more than 1.3 million women who had attended screening for two rounds of the HPV screening pilot, the first from 2013 to 2016, with a follow-up to the end of 2019. By this time, the data set had doubled in size from the pilot study, and results had been linked with the national cancer registry.

They confirmed that HPV testing was more accurate than a cytology test, irrespective of whether the HPV test assay was DNA- or mRNA-based. With HPV testing, the risk of subsequent cytological changes more than halved overall. Eligible women under 50 who had a negative HPV screen in the first round had a much lower risk of detection of CIN3+ in the second round, with a rate of 1.21 in 1,000, compared with 4.52 in 1,000 after a negative cytology test.
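
Putting those two detection rates on a common footing, a quick back-of-the-envelope calculation (ours, not the paper’s) shows that a negative HPV screen was followed by roughly a quarter of the second-round CIN3+ risk seen after a negative cytology result:

# Second-round CIN3+ detection rates quoted above, per 1,000 women screened.
# Illustrative arithmetic only, not taken from the paper's analysis code.
rate_after_negative_hpv = 1.21 / 1000
rate_after_negative_cytology = 4.52 / 1000

relative_risk = rate_after_negative_hpv / rate_after_negative_cytology
print(f"Relative risk: {relative_risk:.2f}")       # about 0.27
print(f"Risk reduction: {1 - relative_risk:.0%}")  # about 73% lower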

Data support extension of the testing interval

“The study confirms that women in this age group are much less likely to develop clinically relevant cervical lesions and cervical cancer, 3 years after a negative HPV screen, compared with a negative smear test,” the researchers said.

They suggested that most women do not need to be screened as frequently as the current program allows, and that the data support an extension of the screening intervals, regardless of the test assay used, to 5 years after a negative HPV test in women aged 25-49 years, and even longer for women aged 50 years and older.

However, the screening interval for HPV-positive women who have negative HPV tests at early recall should be kept at 3 years, they said.

“These results are very reassuring,” said lead author Matejka Rebolj, PhD, senior epidemiologist at KCL. “They build on previous research that shows that following the introduction of HPV testing for cervical screening, a 5-year interval is at least as safe as the previous 3-year interval. Changing to 5-yearly screening will mean we can prevent just as many cancers as before, while allowing for fewer screens.”

Michelle Mitchell, Cancer Research UK’s chief executive, said: “This large study shows that offering cervical screening using HPV testing effectively prevents cervical cancer, without having to be screened as often. This builds on findings from years of research showing HPV testing is more accurate at predicting who is at risk of developing cervical cancer compared to the previous way of testing. As changes to the screening [programs] are made, they will be monitored to help ensure that cervical screening is as effective as possible for all who take part.”

If HPV is present, testing interval should remain every 3 years

Responding to the study, Theresa Freeman-Wang, MBChB, consultant gynecologist, president of the British Society for Colposcopy and Cervical Pathology, and spokesperson for the Royal College of Obstetricians and Gynaecologists, told this news organization: “England, Scotland, and Wales and many other countries now use HPV primary screening, which is much better at assessing risk than previous methods. HPV testing is more sensitive and accurate, so changes are picked up earlier.

“Studies have confirmed that if someone is HPV negative (i.e., HPV is not present in the screen test), intervals between tests can very safely be increased from 3 to 5 years. 

“If HPV is present, then the program will automatically look for any abnormal cells. If there are no abnormalities, the woman will be advised to have a repeat screen test in a year. If the HPV remains present over 3 successive years or if abnormal cells are detected at any stage, she will be referred for a more detailed screening examination called a colposcopy.

“It’s important that with any change like this, there is clear information available to explain what these changes mean.

“We have an effective cervical screening program in the UK that has significantly reduced the number of cases and deaths from this preventable cancer. 

“HPV screening every 5 years is safe and to be fully effective it is vital that women take up the invitation for cervical screening when called.”

A version of this article first appeared on Medscape UK.
