Military burn pits: Their evidence and implications for respiratory health

Military service is a hazard-ridden profession. It’s easy to recognize the direct dangers from warfighting, such as gunfire and explosions, but the risks from environmental, chemical, and other occupational exposures can be harder to see.

Combustion-based waste management systems, otherwise known as “burn pits,” were used in deployed environments by the US military from the 1990s to the early 2010s. These burn pits were commonly used to eliminate plastics, electronics, munitions, metals, wood, chemicals, and even human waste. At the height of the recent conflicts in Afghanistan, Iraq, and other southwest Asia locations, more than 70% of military installations employed at least one, and nearly 4 million service members were exposed to some degree to their emissions.

Dr. Zachary A. Haynes (CHEST)

Reports linking burn pits to organic disease have garnered widespread media attention. Initially, this attention came through anecdotal reports of post-deployment respiratory symptoms. Over time, the conditions attributed to burn pits expanded to include newly diagnosed respiratory diseases and malignancies. The composition of burn pit emissions sparked concern after fine particulate matter, volatile organic compounds, dioxins, and polycyclic aromatic hydrocarbons were detected. Each has previously been associated with an increased risk of respiratory disease or malignancy.

Ultimately, Congress passed the 2022 Promise to Address Comprehensive Toxins (PACT) Act, presumptively linking more than 20 diagnoses to burn pits. The PACT Act provides countless veterans access to low-cost or free medical care for their respective conditions.

What do we know about burn pits and deployment-related respiratory disease?

Data from the Millennium Cohort Study showed an approximately 40% increase in respiratory symptoms among individuals returning from deployment but no increase in the frequency of diagnosed respiratory diseases.1 This study and others definitively established a temporal relationship between deployment and respiratory symptoms. Soon after, a retrospective, observational study of service members with post-deployment respiratory symptoms found a high prevalence of constrictive bronchiolitis (CB) identified by lung biopsy.2 Patients in this group reported exposure to burn pits and a sulfur mine fire in the Mosul area while deployed. Most had normal imaging and pulmonary function testing before biopsy, clouding the clinical significance of the CB finding. The publication of this report led to increased investigation of respiratory function during and after deployment.

Dr. Joel Anthony Nations (CHEST)

In a series of prospective studies that included full pulmonary function testing, impulse oscillometry, cardiopulmonary exercise testing, bronchoscopy, and, occasionally, lung biopsy to evaluate post-deployment dyspnea, only a small minority received a diagnosis of clinically significant lung disease.3,4 Additionally, when comparing spirometry and impulse oscillometry results from before and after deployment, no decline in lung function was observed in a population of service members reporting regular burn pit exposure.5 These studies suggest that at the population level, deployment does not lead to abnormalities in the structure and function of the respiratory system.

The Institute of Medicine and the National Academies of Sciences, Engineering, and Medicine published separate reviews of burn pit exposure and health outcomes in 2011 and 2020, respectively.6,7 Both found insufficient evidence to support a causal relationship between burn pit exposure and pulmonary disease. They highlighted studies on the composition of emissions from the area surrounding the largest military burn pit in Iraq: levels of particulate matter, volatile organic compounds, and polycyclic aromatic hydrocarbons were elevated compared with those of a typical American city but were similar to the pollution levels seen in the region at the time. Given these findings, the reviewers suggested ambient air pollution may have contributed more to clinically significant disease than burn pit emissions.

How do we interpret these mixed data?

At the population level, we have yet to find conclusive data directly linking burn pit exposure to the development of any respiratory disease. Does this mean that burn pits are not harmful?

Not necessarily. Research on outcomes related to burn pit exposure is challenging given the heterogeneity in exposure volume. Much of the research is retrospective and subject to recall bias, which can distort associations and reduce the precision of reported symptoms and exposure levels. Given these challenges, it’s unsurprising that causality has yet to be established. In addition, some service members have been diagnosed with respiratory diseases that could be related to burn pit exposure.

What is now indisputable is that deployment to southwest Asia leads to an increase in respiratory complaints. Whether veteran respiratory symptoms are due to burn pits, ambient pollution, environmental particulate matter, or dust storms is less clinically relevant. These symptoms require attention, investigation, and management.

What does this mean for the future medical care of service members and veterans?

Many veterans with post-deployment respiratory symptoms undergo extensive evaluations without obtaining a definitive diagnosis. A recent consensus statement on deployment-related respiratory symptoms provides a framework for evaluation in such cases.8 In keeping with that statement, we recommend veterans be referred to centers with expertise in this field, such as the Department of Veterans Affairs (VA) or military health centers, when deployment-related respiratory symptoms are reported. When the evaluation does not lead to a treatable diagnosis, these centers can provide multidisciplinary care to address the symptoms of dyspnea, cough, fatigue, and exercise intolerance to improve functional status.

Despite uncertainty in the evidence or challenges in diagnosis, both the Department of Defense (DoD) and VA remain fully committed to addressing the health concerns of service members and veterans. Notably, the VA has already screened more than 5 million veterans for toxic military exposures in accordance with the PACT Act and is providing ongoing screening and care for veterans with post-deployment respiratory symptoms. Furthermore, the DoD and VA have dedicated large portions of their research budgets to investigating the impacts of exposures during military service and optimizing the care of those with respiratory symptoms. With these commitments to patient care and research, our veterans’ respiratory health can now be optimized, and future risks can be mitigated.

Dr. Haynes is a fellow in Pulmonary and Critical Care Medicine at Walter Reed National Military Medical Center and Assistant Professor of Medicine at the Uniformed Services University. Dr. Nations is a pulmonary and critical care medicine physician, Deputy Chief of Staff for Operations at the Washington DC VA Medical Center, and Associate Professor of Medicine at the Uniformed Services University.

References

1. Smith B, Wong CA, Smith TC, Boyko EJ, Gackstetter GD; Margaret A. K. Ryan for the Millennium Cohort Study Team. Newly reported respiratory symptoms and conditions among military personnel deployed to Iraq and Afghanistan: a prospective population-based study. Am J Epidemiol. 2009;170(11):1433-1442. Preprint. Posted online October 22, 2009. PMID: 19850627. doi: 10.1093/aje/kwp287

2. King MS, Eisenberg R, Newman JH, et al. Constrictive bronchiolitis in soldiers returning from Iraq and Afghanistan. N Engl J Med. 2011;365(3):222-230. Erratum in: N Engl J Med. 2011;365(18):1749. PMID: 21774710; PMCID: PMC3296566. doi: 10.1056/NEJMoa1101388

3. Morris MJ, Dodson DW, Lucero PF, et al. Study of active duty military for pulmonary disease related to environmental deployment exposures (STAMPEDE). Am J Respir Crit Care Med. 2014;190(1):77-84. PMID: 24922562. doi: 10.1164/rccm.201402-0372OC

4. Morris MJ, Walter RJ, McCann ET, et al. Clinical evaluation of deployed military personnel with chronic respiratory symptoms: study of active duty military for pulmonary disease related to environmental deployment exposures (STAMPEDE) III. Chest. 2020;157(6):1559-1567. Preprint. Posted online February 1, 2020. PMID: 32017933. doi: 10.1016/j.chest.2020.01.024

5. Morris MJ, Skabelund AJ, Rawlins FA 3rd, Gallup RA, Aden JK, Holley AB. Study of active duty military personnel for environmental deployment exposures: pre- and post-deployment spirometry (STAMPEDE II). Respir Care. 2019;64(5):536-544. Preprint. Posted online January 8, 2019. PMID: 30622173. doi: 10.4187/respcare.06396

6. Institute of Medicine. Long-Term Health Consequences of Exposure to Burn Pits in Iraq and Afghanistan. The National Academies Press; 2011. https://doi.org/10.17226/13209

7. National Academies of Sciences, Engineering, and Medicine. Respiratory Health Effects of Airborne Hazards Exposures in the Southwest Asia Theater of Military Operations. The National Academies Press; 2020. https://doi.org/10.17226/25837

8. Falvo MJ, Sotolongo AM, Osterholzer JJ, et al. Consensus statements on deployment-related respiratory disease, inclusive of constrictive bronchiolitis: a modified Delphi study. Chest. 2023;163(3):599-609. Preprint. Posted online November 4, 2022. PMID: 36343686; PMCID: PMC10154857. doi: 10.1016/j.chest.2022.10.031


Late-Night Eaters May Have Increased Risk for Colorectal Cancer

WASHINGTON — Eating within 3 hours of bedtime at least 4 days a week could increase the risk of developing colorectal cancer, according to research presented at the annual Digestive Disease Week® (DDW).

Investigators in a new study questioned 664 people undergoing colonoscopy to screen for cancer, and 42% said they were late eaters. This group was 46% more likely than non–late eaters to have an adenoma found during colonoscopy. An estimated 5% to 10% of adenomas become cancerous over time.
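
To put the headline figure in concrete terms, here is a back-of-the-envelope illustration. The 25% baseline adenoma detection rate among non–late eaters is an assumed value chosen only for illustration (the presentation did not report one), and the calculation treats the reported 46% as a relative risk:

\[
p_{\text{late}} \approx 1.46 \times p_{\text{non-late}} = 1.46 \times 0.25 \approx 0.37
\]

Under those assumptions, adenomas would be found in roughly 37 of every 100 late eaters versus 25 of every 100 non–late eaters.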

“A lot of other studies are about what we eat but not when we eat,” said Edena Khoshaba, lead investigator and a medical student at Rush University Medical College in Chicago. “The common advice includes not eating red meat, eating more fruits and vegetables — which is great, of course — but we wanted to see if the timing affects us at all.”

Ms. Khoshaba and colleagues found it did. Late eaters were 5.5 times more likely to have three or more tubular adenomas compared to non–late eaters, even after adjusting for what people were eating. Tubular adenomas are the most common type of polyp found in the colon.

So, what’s the possible connection between late eating and the risk for colorectal cancer?

Resetting Your Internal Clock

Eating close to bedtime could be throwing off the body’s circadian rhythm. But in this case, it’s not the central circadian center located in the brain — the one that releases melatonin. Instead, late eating could disrupt the peripheral circadian rhythm, part of which is found in the GI tract. For example, if a person is eating late at night, the brain thinks it is nighttime while the gut thinks it is daytime, Ms. Khoshaba said in an interview.

This is an interesting study, said Amy Bragagnini, MS, RD, spokesperson for the Academy of Nutrition and Dietetics, when asked to comment on the research. “It is true that eating later at night can disrupt your circadian rhythm.”

“In addition, many of my patients have told me that when they do eat later at night, they don’t always make the healthiest food choices,” Ms. Bragagnini said. “Their late-night food choices are generally higher in added sugar and fat. This may cause them to consume far more calories than their body needs.” So, eating late at night can also lead to unwanted weight gain.

An unanswered question is whether late eating is connected in any way to the increasing rates of colorectal cancer seen in younger patients.

This was an observational study, and another possible limitation, Ms. Khoshaba said, is that people were asked to recall their diets over 24 hours, which may not always be accurate.

Some of the organisms in the gut have their own internal clocks that follow a daily rhythm, and what someone eats determines how many different kinds of these organisms are active, Ms. Bragagnini said.

“So, if your late-night eating consists of foods high in sugar and fat, you may be negatively impacting your microbiome,” she said.

The next step for Ms. Khoshaba and colleagues is a study examining the peripheral circadian rhythm, changes in the gut microbiome, and the risk for developing metabolic syndrome. Ms. Khoshaba and Ms. Bragagnini had no relevant disclosures.

A version of this article appeared on Medscape.com.


Obesity and Cancer: Untangling a Complex Web

According to the Centers for Disease Control and Prevention (CDC), over 684,000 Americans are diagnosed with an “obesity-associated” cancer each year.

The incidence of many of these cancers has been rising in recent years, particularly among younger people — a trend that sits in contrast with the overall decline in cancers with no established relationship to excess weight, such as lung and skin cancers. 

Is obesity the new smoking? Not exactly.

Tracing a direct line between excess fat and cancer is much less clear-cut than it is with tobacco. While about 42% of cancers — including common ones such as colorectal and postmenopausal breast cancers — are considered obesity-related, only about 8% of incident cancers are attributed to excess body weight. People often develop those diseases regardless of weight.

Although plenty of evidence points to excess body fat as a cancer risk factor, it’s unclear at what point excess weight has an effect. Is gaining weight later in life, for instance, better or worse for cancer risk than being overweight or obese from a young age?

There’s another glaring knowledge gap: Does losing weight at some point in adulthood change the picture? In other words, how many of those 684,000 diagnoses might have been prevented if people shed excess pounds?

When it comes to weight and cancer risk, “there’s a lot we don’t know,” said Jennifer W. Bea, PhD, associate professor, health promotion sciences, University of Arizona, Tucson.

A Consistent but Complicated Relationship

Given the growing incidence of obesity — which currently affects about 42% of US adults and 20% of children and teenagers — it’s no surprise that many studies have delved into the potential effects of excess weight on cancer rates.

Although virtually all the evidence comes from large cohort studies, leaving the cause-effect question open, certain associations keep showing up.

“What we know is that, consistently, a higher body mass index [BMI] — particularly in the obese category — leads to a higher risk of multiple cancers,” said Jeffrey A. Meyerhardt, MD, MPH, codirector, Colon and Rectal Cancer Center, Dana-Farber Cancer Institute, Boston.

In a widely cited report published in The New England Journal of Medicine in 2016, the International Agency for Research on Cancer (IARC) analyzed over 1000 epidemiologic studies on body fat and cancer. The agency pointed to over a dozen cancers, including some of the most common and deadly, linked to excess body weight.

That list includes esophageal adenocarcinoma and endometrial cancer — associated with the highest risk — along with kidney, liver, stomach (gastric cardia), pancreatic, colorectal, postmenopausal breast, gallbladder, ovarian, and thyroid cancers, plus multiple myeloma and meningioma. There’s also “limited” evidence linking excess weight to additional cancer types, including aggressive prostate cancer and certain head and neck cancers.

At the same time, Dr. Meyerhardt said, many of those same cancers are also associated with issues that lead to, or coexist with, overweight and obesity, including poor diet, lack of exercise, and metabolic conditions such as diabetes. 

It’s a complicated web, and it’s likely, Dr. Meyerhardt said, that high BMI both directly affects cancer risk and is part of a “causal pathway” of other factors that do.

Regarding direct effects, preclinical research has pointed to multiple ways in which excess body fat could contribute to cancer, said Karen M. Basen-Engquist, PhD, MPH, professor, Division of Cancer Prevention and Population Services, The University of Texas MD Anderson Cancer Center, Houston.

One broad mechanism to help explain the obesity-cancer link is chronic systemic inflammation because excess fat tissue can raise levels of substances in the body, such as tumor necrosis factor alpha and interleukin 6, which fuel inflammation. Excess fat also contributes to hyperinsulinemia — too much insulin in the blood — which can help promote the growth and spread of tumor cells. 

But the underlying reasons also appear to vary by cancer type, Dr. Basen-Engquist said. With hormonally driven cancer types, such as breast and endometrial, excess body fat may alter hormone levels in ways that spur tumor growth. Extra fat tissue may, for example, convert androgens into estrogens, which could help feed estrogen-dependent tumors.

That, Dr. Basen-Engquist noted, could be why excess weight is associated with postmenopausal, not premenopausal, breast cancer: Before menopause, body fat is a relatively minor contributor to estrogen levels but becomes more important after menopause.

How Big Is the Effect?

While more than a dozen cancers have been consistently linked to excess weight, the strength of those associations varies considerably. 

Endometrial and esophageal cancers are two that stand out. In the 2016 IARC analysis, people with severe obesity had a seven times greater risk for endometrial cancer and a 4.8 times greater risk for esophageal adenocarcinoma vs people with a normal BMI.

With other cancers, the risk increases for those with severe obesity compared with a normal BMI were far more modest: 10% for ovarian cancer, 30% for colorectal cancer, and 80% for kidney and stomach cancers, for example. For postmenopausal breast cancer, every five-unit increase in BMI was associated with a 10% relative risk increase.
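
As a rough sketch of what the per-five-unit figure implies, dose-response analyses of this kind typically treat the increase as multiplicative per BMI increment; assuming that form applies here, a ten-unit rise in BMI (say, from 25 to 35) compounds rather than adds:

\[
\mathrm{RR}(\Delta\mathrm{BMI}) \approx 1.10^{\,\Delta\mathrm{BMI}/5}, \qquad \mathrm{RR}(25 \to 35) \approx 1.10^{2} = 1.21
\]

That is, about a 21% relative increase, slightly more than the 20% that simply doubling the 10% figure would suggest.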

A 2018 study from the American Cancer Society, which attempted to estimate the proportion of cancers in the United States attributable to modifiable risk factors — including alcohol consumption, ultraviolet radiation exposure, and physical inactivity — found that smoking accounted for the highest proportion of cancer cases by a wide margin (19%), but excess weight came in second (7.8%).

Again, weight appeared to play a bigger role in certain cancers than others: An estimated 60% of endometrial cancers were linked to excess weight, as were roughly one third of esophageal, kidney, and liver cancers. At the other end of the spectrum, just over 11% of breast, 5% of colorectal, and 4% of ovarian cancers were attributable to excess weight.

Even at the lower end, those rates could make a big difference on the population level, especially for groups with higher rates of obesity.

CDC data show that obesity-related cancers are rising among women younger than 50 years, most rapidly among Hispanic women, and some less common obesity-related cancers, such as stomach, thyroid, and pancreatic cancers, are also rising among Black individuals and Hispanic Americans.

Obesity may be one reason for growing cancer disparities, said Leah Ferrucci, PhD, MPH, assistant professor, epidemiology, Yale School of Public Health, New Haven, Connecticut. But, she added, the evidence is limited because Black individuals and Hispanic Americans are understudied.

When Do Extra Pounds Matter?

When it comes to cancer risk, at what point in life does excess weight, or weight gain, matter? Is the standard weight gain in middle age, for instance, as hazardous as being overweight or obese from a young age?

Some evidence suggests there’s no “safe” time for putting on excess pounds.

A recent meta-analysis concluded that weight gain at any point after age 18 years is associated with incremental increases in the risk for postmenopausal breast cancer. A 2023 study in JAMA Network Open found a similar pattern with colorectal and other gastrointestinal cancers: People who had sustained overweight or obesity from age 20 years through middle age faced an increased risk of developing those cancers after age 55 years. 

The timing of weight gain didn’t seem to matter either. The same elevated risk held among people who were normal weight in their younger years but became overweight after age 55 years.

Those studies focused on later-onset disease. But, in recent years, experts have tracked a troubling rise in early-onset cancers — those diagnosed before age 50 years — particularly gastrointestinal cancers. 

An obvious question, Dr. Meyerhardt said, is whether the growing prevalence of obesity among young people is partly to blame.

There’s some data to support that, he said. An analysis from the Nurses’ Health Study II found that women with obesity had double the risk for early-onset colorectal cancer as those with a normal BMI. And every 5-kg increase in weight after age 18 years was associated with a 9% increase in colorectal cancer risk.

But while obesity trends probably partly explain the rise in early-onset cancers, there is likely more to the story, Dr. Meyerhardt said.

“I think all of us who see an increasing number of patients under 50 with colorectal cancer know there’s a fair number who do not fit that [high BMI] profile,” he said. “There’s a fair number over 50 who don’t either.”

Does Weight Loss Help?

With all the evidence pointing to high BMI as a cancer risk factor, a logical conclusion is that weight loss should reduce that excess risk. However, Dr. Bea said, there’s actually little data to support that, and what exists comes from observational studies.

Some research has focused on people who had substantial weight loss after bariatric surgery, with encouraging results. A study published in JAMA found that among 5053 people who underwent bariatric surgery, 2.9% developed an obesity-related cancer over 10 years compared with 4.9% in the nonsurgery group.
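
Those percentages translate directly into standard effect measures; the arithmetic below uses only the numbers quoted above:

\[
\mathrm{RR} = \frac{2.9\%}{4.9\%} \approx 0.59, \qquad \mathrm{ARR} = 4.9\% - 2.9\% = 2.0\ \text{percentage points}, \qquad \mathrm{NNT} = \frac{1}{0.020} = 50
\]

On these figures, bariatric surgery was associated with roughly a 41% relative reduction in obesity-related cancer over 10 years, and about 50 patients would need to undergo surgery for one fewer cancer, an association only, since the comparison is observational.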

Most people, however, aim for less dramatic weight loss, with the help of diet and exercise or sometimes medication. Some evidence shows that a modest degree of weight loss may lower the risks for postmenopausal breast and endometrial cancers. 

A 2020 pooled analysis found, for instance, that among women aged ≥ 50 years, those who lost as little as 2.0-4.5 kg, or 4.4-10.0 pounds, and kept it off for 10 years had a lower risk for breast cancer than women whose weight remained stable. And losing more weight — 9 kg, or about 20 pounds, or more — was even better for lowering cancer risk.

But other research suggests the opposite. A recent analysis found that people who lost weight within the past 2 years through diet and exercise had a higher risk for a range of cancers compared with those who did not lose weight. Overall, though, the increased risk was quite low.

Whatever the research does, or doesn’t, show about weight and cancer risk, Dr. Basen-Engquist said, it’s important that risk factors, obesity and otherwise, aren’t “used as blame tools.”

“With obesity, behavior certainly plays into it,” she said. “But there are so many influences on our behavior that are socially determined.”

Both Dr. Basen-Engquist and Dr. Meyerhardt said it’s important for clinicians to consider the individual in front of them and for everyone to set realistic expectations. 

People with obesity should not feel they have to become thin to be healthier, and no one has to leap from being sedentary to exercising several hours a week.

“We don’t want patients to feel that if they don’t get to a stated goal in a guideline, it’s all for naught,” Dr. Meyerhardt said.

A version of this article appeared on Medscape.com.


But other research suggests the opposite. A recent analysis found that people who lost weight within the past 2 years through diet and exercise had a higher risk for a range of cancers compared with those who did not lose weight. Overall, though, the increased risk was quite low.

Whatever the research does, or doesn’t, show about weight and cancer risk, Dr. Basen-Engquist said, it’s important that risk factors, obesity and otherwise, aren’t “used as blame tools.”

“With obesity, behavior certainly plays into it,” she said. “But there are so many influences on our behavior that are socially determined.”

Both Dr. Basen-Engquist and Dr. Meyerhardt said it’s important for clinicians to consider the individual in front of them and for everyone to set realistic expectations. 

People with obesity should not feel they have to become thin to be healthier, and no one has to leap from being sedentary to exercising several hours a week

“We don’t want patients to feel that if they don’t get to a stated goal in a guideline, it’s all for naught,” Dr. Meyerhardt said.

A version of this article appeared on Medscape.com.

 

According to the Centers for Disease Control and Prevention (CDC), over 684,000 Americans are diagnosed with an “obesity-associated” cancer each year.

The incidence of many of these cancers has been rising in recent years, particularly among younger people — a trend that sits in contrast with the overall decline in cancers with no established relationship to excess weight, such as lung and skin cancers. 

Is obesity the new smoking? Not exactly.

Tracing a direct line between excess fat and cancer is much less clear-cut than it is with tobacco. While about 42% of cancers — including common ones such as colorectal and postmenopausal breast cancers — are considered obesity-related, only about 8% of incident cancers are attributed to excess body weight. People often develop those diseases regardless of weight.

Although plenty of evidence points to excess body fat as a cancer risk factor, it’s unclear at what point excess weight has an effect. Is gaining weight later in life, for instance, better or worse for cancer risk than being overweight or obese from a young age?

There’s another glaring knowledge gap: Does losing weight at some point in adulthood change the picture? In other words, how many of those 684,000 diagnoses might have been prevented if people shed excess pounds?

When it comes to weight and cancer risk, “there’s a lot we don’t know,” said Jennifer W. Bea, PhD, associate professor, health promotion sciences, University of Arizona, Tucson.

A Consistent but Complicated Relationship

Given the growing incidence of obesity — which currently affects about 42% of US adults and 20% of children and teenagers — it’s no surprise that many studies have delved into the potential effects of excess weight on cancer rates.

Although virtually all the evidence comes from large cohort studies, leaving the cause-effect question open, certain associations keep showing up.

“What we know is that, consistently, a higher body mass index [BMI] — particularly in the obese category — leads to a higher risk of multiple cancers,” said Jeffrey A. Meyerhardt, MD, MPH, codirector, Colon and Rectal Cancer Center, Dana-Farber Cancer Institute, Boston.

In a widely cited report published in The New England Journal of Medicine in 2016, the International Agency for Research on Cancer (IARC) analyzed over 1000 epidemiologic studies on body fat and cancer. The agency pointed to over a dozen cancers, including some of the most common and deadly, linked to excess body weight.

That list includes esophageal adenocarcinoma and endometrial cancer — associated with the highest risk — along with kidney, liver, stomach (gastric cardia), pancreatic, colorectal, postmenopausal breast, gallbladder, ovarian, and thyroid cancers, plus multiple myeloma and meningioma. There’s also “limited” evidence linking excess weight to additional cancer types, including aggressive prostate cancer and certain head and neck cancers.

At the same time, Dr. Meyerhardt said, many of those same cancers are also associated with issues that lead to, or coexist with, overweight and obesity, including poor diet, lack of exercise, and metabolic conditions such as diabetes. 

It’s a complicated web, and it’s likely, Dr. Meyerhardt said, that high BMI both directly affects cancer risk and is part of a “causal pathway” of other factors that do.

Regarding direct effects, preclinical research has pointed to multiple ways in which excess body fat could contribute to cancer, said Karen M. Basen-Engquist, PhD, MPH, professor, Division of Cancer Prevention and Population Services, The University of Texas MD Anderson Cancer Center, Houston.

One broad mechanism to help explain the obesity-cancer link is chronic systemic inflammation because excess fat tissue can raise levels of substances in the body, such as tumor necrosis factor alpha and interleukin 6, which fuel inflammation. Excess fat also contributes to hyperinsulinemia — too much insulin in the blood — which can help promote the growth and spread of tumor cells. 

But the underlying reasons also appear to vary by cancer type, Dr. Basen-Engquist said. With hormonally driven cancer types, such as breast and endometrial, excess body fat may alter hormone levels in ways that spur tumor growth. Extra fat tissue may, for example, convert androgens into estrogens, which could help feed estrogen-dependent tumors.

That, Dr. Basen-Engquist noted, could be why excess weight is associated with postmenopausal, not premenopausal, breast cancer: Before menopause, body fat is a relatively minor contributor to estrogen levels but becomes more important after menopause.

How Big Is the Effect?

While more than a dozen cancers have been consistently linked to excess weight, the strength of those associations varies considerably. 

Endometrial and esophageal cancers are two that stand out. In the 2016 IARC analysis, people with severe obesity had a sevenfold greater risk for endometrial cancer and a 4.8-fold greater risk for esophageal adenocarcinoma compared with people with a normal BMI.

With other cancers, the risk increases for those with severe obesity compared with a normal BMI were far more modest: 10% for ovarian cancer, 30% for colorectal cancer, and 80% for kidney and stomach cancers, for example. For postmenopausal breast cancer, every five-unit increase in BMI was associated with a 10% relative risk increase.
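For per-unit associations like the breast cancer figure, the implied risk at larger BMI differences depends on how the association compounds. Here is a minimal sketch, under the assumption (ours, not the IARC report's) that the reported per-5-unit relative risk scales log-linearly:

```python
# Assumption: the per-5-unit relative risk compounds multiplicatively
# (log-linear dose-response); the article reports only the 1.10 figure.
RR_PER_5_BMI_UNITS = 1.10  # postmenopausal breast cancer, per IARC analysis

def relative_risk(bmi_increase: float) -> float:
    """Implied relative risk for a given BMI increase vs baseline."""
    return RR_PER_5_BMI_UNITS ** (bmi_increase / 5)

for delta in (5, 10, 15):
    print(f"BMI +{delta}: RR = {relative_risk(delta):.2f}")
# BMI +5: RR = 1.10 / BMI +10: RR = 1.21 / BMI +15: RR = 1.33
```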

A 2018 study from the American Cancer Society, which estimated the proportion of cancers in the United States attributable to modifiable risk factors — including alcohol consumption, ultraviolet radiation exposure, and physical inactivity — found that smoking accounted for the highest proportion of cancer cases by a wide margin (19%), with excess weight second at 7.8%.

Again, weight appeared to play a bigger role in certain cancers than others: An estimated 60% of endometrial cancers were linked to excess weight, as were roughly one third of esophageal, kidney, and liver cancers. At the other end of the spectrum, just over 11% of breast, 5% of colorectal, and 4% of ovarian cancers were attributable to excess weight.
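Attributable fractions like these combine the prevalence of an exposure with the relative risk it carries. A minimal sketch of Levin's population attributable fraction formula, using hypothetical inputs for illustration rather than the study's actual estimates:

```python
def attributable_fraction(prevalence: float, relative_risk: float) -> float:
    """Levin's formula: the share of cases attributable to an exposure."""
    excess = prevalence * (relative_risk - 1)
    return excess / (excess + 1)

# Hypothetical inputs: 42% exposure prevalence, relative risk of 1.3
print(f"PAF = {attributable_fraction(0.42, 1.3):.1%}")  # PAF = 11.2%
```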

Even at the lower end, those rates could make a big difference on the population level, especially for groups with higher rates of obesity.

CDC data show that obesity-related cancers are rising among women younger than 50 years, most rapidly among Hispanic women. Some less common obesity-related cancers, such as stomach, thyroid, and pancreatic cancers, are also rising among Black individuals and Hispanic Americans.

Obesity may be one reason for growing cancer disparities, said Leah Ferrucci, PhD, MPH, assistant professor, epidemiology, Yale School of Public Health, New Haven, Connecticut. But, she added, the evidence is limited because Black individuals and Hispanic Americans are understudied.

When Do Extra Pounds Matter?

When it comes to cancer risk, at what point in life does excess weight, or weight gain, matter? Is the standard weight gain in middle age, for instance, as hazardous as being overweight or obese from a young age?

Some evidence suggests there’s no “safe” time for putting on excess pounds.

A recent meta-analysis concluded that weight gain at any point after age 18 years is associated with incremental increases in the risk for postmenopausal breast cancer. A 2023 study in JAMA Network Open found a similar pattern with colorectal and other gastrointestinal cancers: People who had sustained overweight or obesity from age 20 years through middle age faced an increased risk of developing those cancers after age 55 years. 

The timing of weight gain didn’t seem to matter either. The same elevated risk held among people who were normal weight in their younger years but became overweight after age 55 years.

Those studies focused on later-onset disease. But, in recent years, experts have tracked a troubling rise in early-onset cancers — those diagnosed before age 50 years — particularly gastrointestinal cancers. 

An obvious question, Dr. Meyerhardt said, is whether the growing prevalence of obesity among young people is partly to blame.

There’s some data to support that, he said. An analysis from the Nurses’ Health Study II found that women with obesity had double the risk for early-onset colorectal cancer compared with women with a normal BMI, and every 5-kg increase in weight after age 18 years was associated with a 9% increase in colorectal cancer risk.

But while obesity trends probably partly explain the rise in early-onset cancers, there is likely more to the story, Dr. Meyerhardt said.

“I think all of us who see an increasing number of patients under 50 with colorectal cancer know there’s a fair number who do not fit that [high BMI] profile,” he said. “There’s a fair number over 50 who don’t either.”

Does Weight Loss Help?

With all the evidence pointing to high BMI as a cancer risk factor, a logical conclusion is that weight loss should reduce that excess risk. However, Dr. Bea said, there’s actually little data to support that, and what exists comes from observational studies.

Some research has focused on people who had substantial weight loss after bariatric surgery, with encouraging results. A study published in JAMA found that among 5053 people who underwent bariatric surgery, 2.9% developed an obesity-related cancer over 10 years compared with 4.9% in the nonsurgery group.
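Those percentages imply a modest absolute benefit, which is easier to see with the standard effect measures. A quick sketch using the reported 10-year risks (the derived numbers are simple arithmetic, not endpoints reported by the study):

```python
risk_surgery, risk_no_surgery = 0.029, 0.049  # reported 10-year risks

arr = risk_no_surgery - risk_surgery  # absolute risk reduction
rr = risk_surgery / risk_no_surgery   # relative risk
nnt = 1 / arr                         # number needed to treat

print(f"ARR = {arr:.1%}, RR = {rr:.2f}, NNT = {nnt:.0f} over 10 years")
# ARR = 2.0%, RR = 0.59, NNT = 50 over 10 years
```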

Most people, however, aim for less dramatic weight loss, with the help of diet and exercise or sometimes medication. Some evidence shows that a modest degree of weight loss may lower the risks for postmenopausal breast and endometrial cancers. 

A 2020 pooled analysis found, for instance, that among women aged ≥ 50 years, those who lost as little as 2.0-4.5 kg, or 4.4-10.0 pounds, and kept it off for 10 years had a lower risk for breast cancer than women whose weight remained stable. And losing more weight — 9 kg, or about 20 pounds, or more — was even better for lowering cancer risk.

But other research suggests the opposite. A recent analysis found that people who lost weight within the past 2 years through diet and exercise had a higher risk for a range of cancers compared with those who did not lose weight. Overall, though, the increased risk was quite low.

Whatever the research does, or doesn’t, show about weight and cancer risk, Dr. Basen-Engquist said, it’s important that risk factors, obesity and otherwise, aren’t “used as blame tools.”

“With obesity, behavior certainly plays into it,” she said. “But there are so many influences on our behavior that are socially determined.”

Both Dr. Basen-Engquist and Dr. Meyerhardt said it’s important for clinicians to consider the individual in front of them and for everyone to set realistic expectations. 

People with obesity should not feel they have to become thin to be healthier, and no one has to leap from being sedentary to exercising several hours a week.

“We don’t want patients to feel that if they don’t get to a stated goal in a guideline, it’s all for naught,” Dr. Meyerhardt said.

A version of this article appeared on Medscape.com.


Urine Test Could Prevent Unnecessary Prostate Biopsies


To date, screening men by measuring prostate-specific antigen (PSA) levels has produced a significant reduction in prostate cancer mortality. Because of its low specificity, however, this practice often leads to frequent, unnecessary, invasive biopsies and to the diagnosis of low-grade, indolent cancers. While biopsies guided by multiparametric MRI can improve the diagnosis of grade 2 prostate cancers, widespread implementation remains challenging. Noninvasive biomarkers that stratify the risk for prostate cancer may be a more practical option.

The National Comprehensive Cancer Network proposes a test consisting of six blood and urine biomarkers for all grades of prostate cancer, and it outperforms PSA testing. Current practice, however, focuses on detecting high-grade cancers, and it has been hypothesized that adding biomarkers specifically expressed in aggressive, high-grade prostate cancers could improve test accuracy. Building on newly identified genes that are overexpressed in high-grade cancers, a polymerase chain reaction (PCR) technique targeting 54 candidate markers was used to develop an optimal 18-gene test that could be applied before MRI and biopsy to assess whether those procedures are warranted.

Development Cohort

In the development cohort (n = 815; median age, 63 years), quantitative PCR (qPCR) analysis of the 54 candidate genes was performed on urine samples prospectively collected after digital rectal examination and before biopsy. Patients with previously diagnosed prostate cancer, patients with abnormal MRI results, and those who had already undergone a prostate biopsy were excluded. Participants’ PSA levels ranged from 3 to 10 ng/mL (median, 5.6 ng/mL; interquartile range [IQR], 4.6-7.2 ng/mL). Valid qPCR results were obtained from 761 participants (93.4%), and subsequent prostate biopsy revealed grade 2 or higher cancer in 293 of them (38.5%).

From these data, a urine test called MyProstateScore 2.0 (MPS2) was developed in two formulations, MPS2 and MPS2+, depending on whether prostate volume was included. The final MPS2 development model combined clinical data with 17 of the most informative markers, including nine specific to cancer, alongside the KLK3 reference gene.

Validation and Analyses

The external validation cohort (n = 813) consisted of participants in the NCI EDRN PCA3 Evaluation trial. Valid qPCR results were obtained from 743 participants, of whom 151 (20.3%) were found to have high-grade prostate cancer.

The median MPS2 score was higher in patients with grade 2 or higher prostate cancer (0.44; IQR, 0.23-0.69) than in those with negative biopsies (0.08; IQR, 0.03-0.19; P < .001) or grade 1 cancer (0.25; IQR, 0.09-0.48; P < .01).

Comparative analyses included PSA alone, the Prostate Cancer Prevention Trial risk calculator, the Prostate Health Index (PHI), and various earlier genetic models, with decision curve analyses quantifying the net benefit of each biomarker studied. For detecting the 151 high-grade prostate cancers, areas under the receiver operating characteristic curve ranged from 0.60 for PSA alone to 0.77 for PHI and 0.76 for a two-gene multiplex model, vs 0.81 for MPS2 and 0.82 for MPS2+. For a required sensitivity of 95%, the MPS2 model could reduce the rate of unnecessary initial biopsies in the population by 35%-42%, compared with 15%-30% for the other tests. Among the subgroups analyzed, MPS2 models showed negative predictive values of 95%-99% for grade 2 or higher prostate cancers and 99% for grade 3 or higher tumors.
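Those negative predictive values follow from Bayes' rule once sensitivity, specificity, and prevalence are fixed. A minimal sketch as a sanity check, using the validation cohort's 20.3% prevalence and a specificity assumed at the low end of the range reported below:

```python
def npv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Negative predictive value from Bayes' rule."""
    true_negatives = specificity * (1 - prevalence)
    false_negatives = (1 - sensitivity) * prevalence
    return true_negatives / (true_negatives + false_negatives)

# 95% sensitivity, 20.3% high-grade prevalence; 35% specificity is an
# assumption (low end of the 35%-51% range), not a per-subgroup figure.
print(f"NPV = {npv(0.95, 0.35, 0.203):.1%}")  # NPV = 96.5%
```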

MPS2 and Competitors

Existing biomarkers have limited specificity for detecting high-grade prostate tumors. That shortfall led to the development of a new urine test that includes, for the first time, markers specifically overexpressed in high-grade prostate cancer. The new MPS2 test has a sensitivity of 95% for high-grade prostate cancer and a specificity ranging from 35% to 51%, depending on the subgroup. For clinicians, widespread use of MPS2 could greatly reduce the number of unnecessary biopsies while maintaining a high detection rate of grade 2 or higher prostate cancer.

Among patients who have had a negative first biopsy, MPS2 would have a sensitivity of 94.4% and a specificity of 51%, much higher than that of other tests, such as the prostate cancer antigen 3 (PCA3) gene test, a three-gene model, and the original MPS. In addition, in patients with grade 1 prostate cancer, urinary markers of high-grade cancer could indicate a more aggressive tumor requiring closer monitoring.

This study has limitations, however. First, the ethnic diversity of its population was limited; few Black men were included, for example. Second, systematic biopsy was used as the reference standard, which can inflate the negative predictive value and depress the positive predictive value, so classification errors may have occurred. Further studies are therefore needed to confirm these initial results and the long-term positive impact of using MPS2.

In conclusion, an 18-gene urine test appears more relevant for diagnosing high-grade prostate cancer than existing tests and could spare 35%-41% of patients additional imaging or biopsy. Using such a test in patients with high PSA levels could therefore reduce the potential risks of prostate cancer screening while preserving its long-term benefits.

This story was translated from JIM, which is part of the Medscape professional network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.


New Gel Makes Alcohol 50% Less Toxic, Curbs Organ Damage


It sounds like a gimmick: Drink a couple of glasses of wine and feel only half as intoxicated as you normally would — and sustain less damage to your liver and other organs.

But that’s the promise of a new gel, developed by researchers in Switzerland, that changes how the body processes alcohol. The gel has been tested in mice so far, but the researchers hope to make it available to people soon. The goal: To protect people from alcohol-related accidents and chronic disease — responsible for more than three million annual deaths worldwide.

“It is a global, urgent issue,” said study coauthor Raffaele Mezzenga, PhD, a professor at ETH Zürich, Switzerland.

The advance builds on a decades-long quest among scientists to reduce the toxicity of alcohol, said Che-Hong Chen, PhD, a molecular biologist at Stanford School of Medicine, Stanford, California, who was not involved in the study. Some probiotic-based products aim to help process alcohol’s toxic byproduct acetaldehyde in the gut, but their effects seem inconsistent from one person to another, Dr. Chen said. Intravenous infusions of natural enzyme complexes, such as those that mimic liver cells to speed up alcohol metabolism, can actually produce some acetaldehyde, mitigating their detoxifying effects.

“Our method has the potential to fill the gap of most of the approaches being explored,” Dr. Mezzenga said. “We hope and plan to move to clinical studies as soon as possible.” 

Normally, the liver processes alcohol, releasing first toxic acetaldehyde and then the less harmful acetic acid. Acetaldehyde can cause DNA damage, oxidative stress, and vascular inflammation, and too much of it can increase the risk for cancer.

The gel, by contrast, catalyzes the breakdown of alcohol in the digestive tract, converting slightly more than half of it into acetic acid before absorption. Only the remaining 45% or so enters the bloodstream, where it is metabolized to acetaldehyde.

“The concentration of acetaldehyde will be decreased by a factor of more than two and so will the ‘intoxicating’ effect of the alcohol,” said Dr. Mezzenga.
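The claimed factor follows from simple mass balance: if a fraction of the ingested ethanol is converted to acetic acid in the gut, downstream acetaldehyde scales with whatever remains. A minimal sketch with the article's figures (the linear scaling is our simplifying assumption):

```python
def acetaldehyde_reduction(fraction_converted_in_gut: float) -> float:
    """Fold-reduction in downstream acetaldehyde, assuming it scales
    linearly with the ethanol that reaches the bloodstream."""
    return 1 / (1 - fraction_converted_in_gut)

# ~55% converted to acetic acid in the gut leaves ~45% to be absorbed
print(f"{acetaldehyde_reduction(0.55):.1f}-fold lower")  # 2.2-fold lower
```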

Ideally, someone would ingest the gel immediately before or while consuming alcohol. It’s designed to continue working for several hours.

In the mouse experiments, some animals received a single serving of alcohol, while others were dosed regularly for 10 days. The gel cut blood alcohol levels by 40% after half an hour and by up to 56% after 5 hours compared with a control group given alcohol but not the gel. Mice that consumed the gel also had less liver and intestinal damage.

“The results, both the short-term behavior of the mice and in the long term for the preservation of organs, were way beyond our expectation,” said Dr. Mezzenga.

Casual drinkers could benefit from the gel. However, it could also lead people to drink more than they normally would in order to feel intoxicated, Dr. Chen said.

Bypassing a Problematic Pathway

A liver enzyme called alcohol dehydrogenase (ADH) converts alcohol to acetaldehyde before a second enzyme called aldehyde dehydrogenase (ALDH2) helps process acetaldehyde into acetic acid. But with the gel, alcohol transforms directly to acetic acid in the digestive tract.

“This chemical reaction seems to bypass the known biological pathway of alcohol metabolism. That’s new to me,” said Dr. Chen, a senior research scientist at Stanford and country director of the Center for Asian Health Research and Education. The processing of alcohol before it passes through the mucous membrane of the digestive tract is “another novel aspect,” Dr. Chen said.

To make the gel, the researchers boil whey proteins — also found in milk — to produce stringy fibrils, then add salt and water to crosslink the fibrils into a gel. The gel is infused with iron atoms, which catalyze the conversion of alcohol into acetic acid. That conversion relies on hydrogen peroxide, generated within the gel by a gold-catalyzed reaction with glucose; both gold and glucose are added to the gel for that purpose.

A previous version of the technology used iron nanoparticles, which needed to be “digested down to ionic form by the acidic pH in the stomach,” said Dr. Mezzenga. That process took too long, giving alcohol more time to cross into the bloodstream. By “decorating” the protein fibrils with single iron atoms, the researchers were able to “increase their catalytic efficiency,” he added.

What’s Next?

With animal studies completed, human clinical studies are next. How soon that could happen will depend on ethical clearance and financial support, the researchers said.

An “interesting next step,” said Dr. Chen, would be to give the gel to mice with a genetic mutation in ALDH2, which makes it harder to process acetaldehyde and often causes facial redness. Prevalent among East Asian populations, the mutation affects about 560 million people and has been linked to Alzheimer’s disease. Dr. Chen’s lab has found a chemical compound that can increase ALDH2 activity; it is expected to begin phase 2 clinical trials this year.

A version of this article appeared on Medscape.com.


LDCT Lung Cancer Screening Finds Undiagnosed Pulmonary Comorbidities in High-Risk Population


Lung cancer screening with low-dose CT (LDCT) can effectively evaluate a high-risk population for undiagnosed chronic obstructive pulmonary disease (COPD) and airflow obstruction, based on data from a new study of approximately 2000 individuals.

Previous research suggests that approximately 70%-90% of individuals with COPD are undiagnosed, especially low-income and minority populations who may be less likely to undergo screening, said Michaela A. Seigo, DO, of Temple University Hospital, Philadelphia, in a study presented at the American Thoracic Society (ATS) 2024 International Conference.

Although the current guidance from the United States Preventive Services Task Force (USPSTF) recommends against universal COPD screening in asymptomatic adults, the use of LDCT may be an option for evaluating a high-risk population, the researchers noted.

The researchers reviewed data from 2083 adults enrolled in the Temple Healthy Chest Initiative, an urban health system-wide lung cancer screening program, combined with the detection of symptoms and comorbidities.

Baseline LDCT for Identification of Comorbidities

Study participants underwent baseline LDCT between October 2021 and October 2022. The images were reviewed by radiologists for pulmonary comorbidities including emphysema, airway disease, bronchiectasis, and interstitial lung disease. In addition, 604 participants (29%) completed a symptom survey, and 624 (30%) underwent spirometry. The mean age of the participants was 65.8 years and 63.9 years for those with and without a history of COPD, respectively.

Approximately half of the participants in both groups were female.

Overall, 66 of 181 (36.5%) individuals previously undiagnosed with COPD had spirometry consistent with airflow obstruction (forced expiratory volume in 1 second/forced vital capacity, < 70%). Individuals with previously undiagnosed COPD were more likely to be younger, male, current smokers, and identified as Hispanic or other race (not Black, White, Hispanic, or Asian/Native American/Pacific Islander).
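For reference, the parenthetical criterion is the standard fixed-ratio definition of airflow obstruction. A minimal sketch with illustrative spirometry values (not study data):

```python
def airflow_obstruction(fev1_liters: float, fvc_liters: float) -> bool:
    """Fixed-ratio criterion used in the study: FEV1/FVC < 0.70."""
    return fev1_liters / fvc_liters < 0.70

# Illustrative values only:
print(airflow_obstruction(fev1_liters=2.1, fvc_liters=3.5))  # True  (0.60)
print(airflow_obstruction(fev1_liters=3.0, fvc_liters=4.0))  # False (0.75)
```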

Individuals without a reported history of COPD had fewer pulmonary comorbidities on LDCT and lower rates of respiratory symptoms than those with COPD. However, nearly 25% of individuals with no reported history of COPD said that breathing issues affected their “ability to do things,” Ms. Seigo said, and a majority of those with no COPD diagnosis exhibited airway disease (76.2% compared with 84% of diagnosed patients with COPD). In addition, 88.1% reported ever experiencing dyspnea and 72.6% reported experiencing cough; both symptoms are compatible with a clinical diagnosis of COPD, the researchers noted.

“We detected pulmonary comorbidities at higher rates than previously published,” Ms. Seigo said in an interview. The increase likely reflects the patient population at Temple, which includes a relatively high percentage of city-dwelling, lower-income individuals, as well as more racial-ethnic minorities and persons of color, she said.

However, “these findings will help clinicians target the most at-risk populations for previously undiagnosed COPD,” Ms. Seigo said.

Looking ahead, Ms. Seigo said she sees a dominant role for artificial intelligence (AI) in COPD screening. “At-risk populations will get LDCT scans, and AI will identify pulmonary and extra-pulmonary comorbidities that may need to be addressed,” she said.

A combination of symptom detection plus strategic and more widely available access to screening offers “a huge opportunity to intervene earlier and potentially save lives,” she told this news organization.

Lung Cancer Screening May Promote Earlier COPD Intervention

The current study examines the prevalence of undiagnosed COPD, especially among low-income and minority populations, in an asymptomatic high-risk group. “By integrating lung cancer CT screening with the detection of pulmonary comorbidities on LDCT and respiratory symptoms, the current study aimed to identify individuals with undiagnosed COPD,” said Dharani K. Narendra, MD, of Baylor College of Medicine, Houston, in an interview.

“The study highlighted the feasibility and potential benefits of coupling lung cancer screening tests with COPD detection, which is noteworthy, and hits two targets with one arrow — early detection of lung cancer and COPD — in high-risk groups,” Dr. Narendra said.

“Although the USPSTF recommends against screening for COPD in asymptomatic patients, abnormal pulmonary comorbidities observed on CT chest scans could serve as a gateway for clinicians to screen for COPD,” said Dr. Narendra. “This approach allows for early diagnosis, education on smoking cessation, and timely treatment of COPD, potentially preventing lung function deterioration and reducing the risk of exacerbations,” she noted.

The finding that one third of previously undiagnosed and asymptomatic patients with COPD showed significant rates of airflow obstruction on spirometry is consistent with previous research, Dr. Narendra told this news organization.

“Interestingly, in questions about specific symptoms, undiagnosed COPD patients reported higher rates of dyspnea, more cough, and breathing difficulties affecting their daily activities, at 16.1%, 27.4%, and 24.5%, respectively, highlighting a lower perception of symptoms,” she said.

“Barriers to lung cancer screening in urban, high-risk communities include limited healthcare facility access, insufficient awareness of screening programs, financial constraints, and cultural or language barriers,” said Dr. Narendra.

Potential strategies to overcome these barriers include improving access through additional screening centers and providing transportation, implementing community-based education and outreach programs to increase awareness about the benefits of lung cancer screening and early COPD detection, and providing financial assistance in the form of free screening options and collaboration with insurers to cover screening expenses, she said.

“Healthcare providers must recognize the dual benefits of lung cancer screening programs, including the opportunity to screen for undiagnosed COPD,” Dr. Narendra emphasized. “This integrated approach is crucial in identifying high-risk individuals who could benefit from early intervention and effective management of COPD. Clinicians should actively support implementing comprehensive screening programs incorporating assessments for pulmonary comorbidities through LDCT and screening questionnaires for COPD symptoms,” she said.

“Further research is needed to evaluate long-term mortality outcomes and identify best practices to determine the most effective methods and cost-effectiveness for implementing and sustaining combined screening programs in various urban settings,” Dr. Narendra told this news organization.

Other areas to address in future studies include investigating specific barriers to screening among different high-risk groups and tailoring interventions to improve screening uptake and adherence, Dr. Narendra said. “By addressing these research gaps, health care providers can optimize screening programs and enhance the overall health of urban, high-risk populations,” she added.

The study received no outside funding. The researchers had no financial conflicts to disclose. Dr. Narendra serves on the editorial board of CHEST Physician.

A version of this article first appeared on Medscape.com.



Most women can conceive after breast cancer treatment

Article Type
Changed
Tue, 06/04/2024 - 15:20

Most younger women diagnosed with nonmetastatic breast cancer will succeed if they attempt to become pregnant after treatment, according to new research.

The findings, presented May 23 in advance of the annual meeting of the American Society of Clinical Oncology (ASCO), represent the most comprehensive look to date at fertility outcomes following treatment for women diagnosed with breast cancer before age 40 (Abstract 1518).

Kimia Sorouri, MD, a research fellow at the Dana-Farber Cancer Institute in Boston, Massachusetts, and her colleagues looked at data from the Young Women’s Breast Cancer Study, a multicenter longitudinal cohort study, for 1213 US and Canadian women (74% non-Hispanic White) who were diagnosed with stages 0-III breast cancer between 2006 and 2016. None of the included patients had metastatic disease, prior hysterectomy, or prior oophorectomy at diagnosis.

During a median 11 years of follow-up, 197 of the women reported attempting pregnancy. Of these, 73% reported becoming pregnant, and 65% delivered a live infant a median 4 years after cancer diagnosis. The median age at diagnosis was 32 years, and 28% opted for egg or embryo freezing to preserve fertility. Importantly, 68% received chemotherapy, which can impair fertility, with only a small percentage undergoing ovarian suppression during chemotherapy treatment.

Key predictors of pregnancy or live birth in this study were “financial comfort,” a self-reported measure defined as having money left over to spend after bills are paid (odds ratio [OR], 2.04; 95% CI, 1.01-4.12; P = .047); younger age at the time of diagnosis; and undergoing fertility preservation interventions at diagnosis (OR, 2.78; 95% CI, 1.29-6.00; P = .009). Chemotherapy and other treatment factors were not associated with pregnancy or birth outcomes.
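
As a side note for readers who want to see how those figures hang together: a Wald-type confidence interval for an odds ratio is symmetric on the log-odds scale, so the reported point estimate should sit near the geometric mean of the interval's bounds. Below is a minimal consistency check in Python using only the numbers quoted above; it is an editorial illustration, not part of the study's analysis.

import math

def wald_or_check(lo: float, hi: float) -> float:
    # A Wald CI is symmetric on the log-odds scale, so the reported OR
    # should approximate the geometric mean of its CI bounds.
    return math.sqrt(lo * hi)

print(round(wald_or_check(1.01, 4.12), 2))  # 2.04, the "financial comfort" OR
print(round(wald_or_check(1.29, 6.00), 2))  # 2.78, the fertility-preservation OR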

“Current research that informs our understanding of the impact of breast cancer treatment on pregnancy and live birth rates is fairly limited,” Dr. Sorouri said during an online press conference announcing the findings. Quality data on fertility outcomes has been limited to studies in certain subgroups, such as women with estrogen receptor–positive breast cancers, she noted, while other studies “have short-term follow-up and critically lack prospective assessment of attempt at conception.”

The new findings show, Dr. Sorouri said, “that in this modern cohort with a heightened awareness of fertility, access to fertility preservation can help to mitigate a portion of the damage from chemotherapy and other agents. Importantly, this highlights the need for increased accessibility of fertility preservation services for women newly diagnosed with breast cancer who are interested in a future pregnancy.”

Commenting on Dr. Sorouri and colleagues’ findings, Julie Gralow, MD, a breast cancer researcher and ASCO’s chief medical officer, stressed that, while younger age at diagnosis and financial comfort were two factors outside the scope of clinical oncology practice, “we can impact fertility preservation prior to treatment.”

She called it “critical” that every patient be informed of the impact of a breast cancer diagnosis and treatment on future fertility, and that all young patients interested in future fertility be offered fertility preservation prior to beginning treatment.

Ann Partridge, MD, of Dana-Farber, said in an interview that the findings reflected a decades-long change in approach. “Twenty years ago when we first started this cohort, people would tell women ‘you can’t get pregnant. It’s too dangerous. You won’t be able to.’ And some indeed aren’t able to, but the majority who are attempting are succeeding, especially if they preserve their eggs or embryos. So even if chemo puts you into menopause or made you subfertile, if you’ve preserved eggs or embryos, we now can mitigate that distressing effect that many cancer patients have suffered from historically. That’s the good news here.”

Nonetheless, Dr. Partridge, an oncologist and the last author of the study, noted that the results reflected success only for women actively attempting pregnancy. “Remember, we’re not including the people who didn’t attempt. There may be some who went into menopause who never banked eggs or embryos, and may never have tried because they went to a doctor who told them they’re not fertile.” Further, she said, not all insurance plans cover in vitro fertilization for women who have had breast cancer.

The fact that financial comfort was correlated with reproductive success, Dr. Partridge said, speaks to broader issues about access. “It may not be all about insurers. It may be to have the ability, to have the time, the education and the wherewithal to do this right — and about being with doctors who talk about it.”

Dr. Sorouri and colleagues’ study was sponsored by the Breast Cancer Research Foundation and Susan G. Komen. Several co-authors disclosed receiving speaking and/or consulting fees from pharmaceutical companies, and one reported being an employee of GlaxoSmithKline. Dr. Sorouri reported no industry funding, while Dr. Partridge reported research funding from Novartis.


Is Vaginal Estrogen Safe in Breast Cancer Survivors?

Article Type
Changed
Tue, 06/04/2024 - 15:21

 

TOPLINE:

Vaginal estrogen therapy does not increase the risk for recurrence in women with hormone receptor (HR)–negative breast cancer or in those with HR–positive tumors concurrently treated with tamoxifen but should be avoided in aromatase inhibitor users, a French study suggested.

METHODOLOGY:

  • Survivors of breast cancer often experience genitourinary symptoms due to declining estrogen levels. Vaginal estrogen therapies, including estriol and promestriene (the 3-propyl, 17β-methyl diether of estradiol), can prevent these symptoms, but the effect on breast cancer outcomes remains uncertain.
  • Researchers used French insurance claims data to emulate a target trial assessing the effect of initiating vaginal estrogen therapy — any molecule, promestriene, or estriol — on disease-free survival in survivors of breast cancer.
  • Patients included in the study had a median age of 54 years; 85% were HR-positive, and 15% were HR–negative. The researchers conducted subgroup analyses based on HR status and endocrine therapy regimen.

TAKEAWAY:

  • Among 134,942 unique patients, 1739 started vaginal estrogen therapy — 56%, promestriene; 34%, estriol; and 10%, both. 
  • Initiation of vaginal estrogen therapy led to a modest decrease in disease-free survival in patients with HR–positive tumors (−2.1 percentage points at 5 years), particularly in those concurrently treated with an aromatase inhibitor (−3.0 percentage points).
  • No decrease in disease-free survival was observed in patients with HR–negative tumors or in those treated with tamoxifen.
  • In aromatase inhibitor users, starting estriol led to a “more severe and premature” decrease in disease-free survival (−4.2 percentage points after 3 years) compared with initiating promestriene (a 1.0 percentage point difference at 3 years).

IN PRACTICE:

“This study addresses a very important survivorship issue — sexual dysfunction in cancer patients — which is associated with anxiety and depression and should be considered a crucial component of survivorship care,” said study discussant Matteo Lambertini, MD, PhD, with University of Genova, Genova, Italy.

The results suggest that vaginal estrogen therapy “is safe in individuals with HR-negative tumors and in those concurrently treated with tamoxifen,” said study presenter Elise Dumas, PhD, with Institut Curie, Paris, France. For breast cancer survivors treated with aromatase inhibitors, vaginal estrogen therapy should be avoided as much as possible, but promestriene is preferred over estriol in this subgroup of patients.

SOURCE:

The research (Abstract 268MO) was presented at the European Society for Medical Oncology Breast Cancer 2024 Annual Congress on May 17, 2024.

LIMITATIONS:

No limitations were discussed in the presentation.

DISCLOSURES:

Funding was provided by Monoprix and the French National Cancer Institute. Dumas declared no conflicts of interest. Lambertini has financial relationships with various pharmaceutical companies including Roche, Novartis, AstraZeneca, Lilly, Exact Sciences, Pfizer, and others.

A version of this article first appeared on Medscape.com.


Ultraprocessed Foods May Be an Independent Risk Factor for Poor Brain Health

Article Type
Changed
Tue, 05/28/2024 - 15:00

Consuming highly processed foods may be harmful to the aging brain, independent of other risk factors for adverse neurologic outcomes and adherence to recommended dietary patterns, new research suggests.

Observations from a large cohort of adults followed for more than 10 years suggested that eating more ultraprocessed foods (UPFs) may increase the risk for cognitive decline and stroke, while eating more unprocessed or minimally processed foods may lower the risk.

“The first key takeaway is that the type of food that we eat matters for brain health, but it’s equally important to think about how it’s made and handled when thinking about brain health,” said study investigator W. Taylor Kimberly, MD, PhD, with Massachusetts General Hospital in Boston.

“The second is that it’s not just all a bad news story because while increased consumption of ultra-processed foods is associated with a higher risk of cognitive impairment and stroke, unprocessed foods appear to be protective,” Dr. Kimberly added.

The study was published online on May 22 in Neurology.
 

Food Processing Matters

UPFs are highly manipulated, low in protein and fiber, and packed with added ingredients, including sugar, fat, and salt. Examples of UPFs are soft drinks, chips, chocolate, candy, ice cream, sweetened breakfast cereals, packaged soups, chicken nuggets, hot dogs, and fries.

Unprocessed or minimally processed foods include meats such as simple cuts of beef, pork, and chicken, and vegetables and fruits.

Research has shown associations between high UPF consumption and increased risk for metabolic and neurologic disorders.

As reported previously, in the ELSA-Brasil study, higher intake of UPFs was significantly associated with a faster rate of decline in executive and global cognitive function.

Yet, it’s unclear whether the extent of food processing contributes to the risk of adverse neurologic outcomes independent of dietary patterns.

Dr. Kimberly and colleagues examined the association of food processing levels with the risk for cognitive impairment and stroke in the long-running REGARDS study, a large prospective US cohort of Black and White adults aged 45 years and older.

Food processing levels were defined by the NOVA food classification system, which ranges from unprocessed or minimally processed foods (NOVA1) to UPFs (NOVA4). Dietary patterns were characterized based on food frequency questionnaires.

In the cognitive impairment cohort, 768 of 14,175 adults who were free of impairment at baseline and underwent follow-up testing developed cognitive impairment.
 

Diet an Opportunity to Protect Brain Health

In multivariable Cox proportional hazards models adjusting for age, sex, high blood pressure, and other factors, a 10% increase in relative intake of UPFs was associated with a 16% higher risk for cognitive impairment (hazard ratio [HR], 1.16). Conversely, a higher intake of unprocessed or minimally processed foods correlated with a 12% lower risk for cognitive impairment (HR, 0.88).
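
A note on reading those figures: Cox models are log-linear, so a per-10% hazard ratio compounds multiplicatively over larger changes in intake. The short Python sketch below makes that explicit under the proportional-hazards assumption, using only the HRs reported above; it is an editorial illustration, not the study's own analysis.

def hr_for_change(hr_per_10pct: float, pct_change: float) -> float:
    # Hazard ratios compound multiplicatively per 10% step of relative intake.
    return hr_per_10pct ** (pct_change / 10)

print(round(hr_for_change(1.16, 20), 2))  # 1.35: a 20% rise in relative UPF intake
print(round(hr_for_change(0.88, 20), 2))  # 0.77: a 20% rise in minimally processed intake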

In the stroke cohort, 1108 of 20,243 adults without a history of stroke had a stroke during the follow-up.

In multivariable Cox models, greater intake of UPFs was associated with an 8% increased risk for stroke (HR, 1.08), while greater intake of unprocessed or minimally processed foods correlated with a 9% lower risk for stroke (HR, 0.91).

The effect of UPFs on stroke risk was greater among Black than among White adults (UPF-by-race interaction HR, 1.15).

The associations between UPFs and both cognitive impairment and stroke were independent of adherence to the Mediterranean diet, the Dietary Approaches to Stop Hypertension (DASH) diet, and the Mediterranean-DASH Intervention for Neurodegenerative Delay diet.

These results “highlight the possibility that we have the capacity to maintain our brain health and prevent poor brain health outcomes by focusing on unprocessed foods in the long term,” Dr. Kimberly said.

This was “an observational study and not an interventional study, so we can’t say with certainty that substituting ultra-processed foods with unprocessed foods will definitively improve brain health,” Dr. Kimberly cautioned. “That’s a clinical trial question that has not been done, but our results certainly are provocative.”

Consider UPFs in National Guidelines?

The coauthors of an accompanying editorial said the “robust” results from Kimberly and colleagues highlight the “significant role of food processing levels and their relationship with adverse neurologic outcomes, independent of conventional dietary patterns.”

Peipei Gao, MS, with Harvard T.H. Chan School of Public Health, and Zhendong Mei, PhD, with Harvard Medical School, both in Boston, noted that the mechanisms underlying the impact of UPFs on adverse neurologic outcomes “can be attributed not only to their nutritional profiles,” including poor nutrient composition and high glycemic load, “but also to the presence of additives including emulsifiers, colorants, sweeteners, and nitrates/nitrites, which have been associated with disruptions in the gut microbial ecosystem and inflammation.”

“Understanding how food processing levels are associated with human health offers a fresh take on the saying ‘you are what you eat,’ ” the editorialists wrote.

This new study, they noted, adds to the evidence by highlighting the link between UPFs and brain health, independent of traditional dietary patterns and “raises questions about whether considerations of UPFs should be included in dietary guidelines, as well as national and global public health policies for improving brain health.”

The editorialists called for large prospective population studies and randomized controlled trials to better understand the link between UPF consumption and brain health. “In addition, mechanistic studies are warranted to identify specific foods, detrimental processes, and additives that play a role in UPFs and their association with neurologic disorders,” they concluded.

Funding for the study was provided by the National Institute of Neurological Disorders and Stroke, the National Institute on Aging, National Institutes of Health, and Department of Health and Human Services. The authors and editorial writers had no relevant disclosures.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

Consuming highly processed foods may be harmful to the aging brain, independent of other risk factors for adverse neurologic outcomes and adherence to recommended dietary patterns, new research suggests.

Observations from a large cohort of adults followed for more than 10 years suggested that eating more ultraprocessed foods (UPFs) may increase the risk for cognitive decline and stroke, while eating more unprocessed or minimally processed foods may lower the risk.

“The first key takeaway is that the type of food that we eat matters for brain health, but it’s equally important to think about how it’s made and handled when thinking about brain health,” said study investigator W. Taylor Kimberly, MD, PhD, with Massachusetts General Hospital in Boston.

“The second is that it’s not just all a bad news story because while increased consumption of ultra-processed foods is associated with a higher risk of cognitive impairment and stroke, unprocessed foods appear to be protective,” Dr. Kimberly added.

The study was published online on May 22 in Neurology.
 

Food Processing Matters

UPFs are highly manipulated, low in protein and fiber, and packed with added ingredients, including sugar, fat, and salt. Examples of UPFs are soft drinks, chips, chocolate, candy, ice cream, sweetened breakfast cereals, packaged soups, chicken nuggets, hot dogs, and fries.

Unprocessed or minimally processed foods include meats such as simple cuts of beef, pork, and chicken, and vegetables and fruits.

Research has shown associations between high UPF consumption and increased risk for metabolic and neurologic disorders.

As reported previously, in the ELSA-Brasil study, higher intake of UPFs was significantly associated with a faster rate of decline in executive and global cognitive function.

Yet, it’s unclear whether the extent of food processing contributes to the risk of adverse neurologic outcomes independent of dietary patterns.

Dr. Kimberly and colleagues examined the association of food processing levels with the risk for cognitive impairment and stroke in the long-running REGARDS study, a large prospective US cohort of Black and White adults aged 45 years and older.

Food processing levels were defined by the NOVA food classification system, which ranges from unprocessed or minimally processed foods (NOVA1) to UPFs (NOVA4). Dietary patterns were characterized based on food frequency questionnaires.

In the cognitive impairment cohort, 768 of 14,175 adults without evidence of impairment at baseline who underwent follow-up testing developed cognitive impairment.
 

Diet an Opportunity to Protect Brain Health

In multivariable Cox proportional hazards models adjusting for age, sex, high blood pressure, and other factors, a 10% increase in relative intake of UPFs was associated with a 16% higher risk for cognitive impairment (hazard ratio [HR], 1.16). Conversely, a higher intake of unprocessed or minimally processed foods correlated with a 12% lower risk for cognitive impairment (HR, 0.88).

In the stroke cohort, 1108 of 20,243 adults without a history of stroke had a stroke during the follow-up.

In multivariable Cox models, greater intake of UPFs was associated with an 8% increased risk for stroke (HR, 1.08), while greater intake of unprocessed or minimally processed foods correlated with a 9% lower risk for stroke (HR, 0.91).

The effect of UPFs on stroke risk was greater among Black than among White adults (UPF-by-race interaction HR, 1.15).

The associations between UPFs and both cognitive impairment and stroke were independent of adherence to the Mediterranean diet, the Dietary Approaches to Stop Hypertension (DASH) diet, and the Mediterranean-DASH Intervention for Neurodegenerative Delay diet.

These results “highlight the possibility that we have the capacity to maintain our brain health and prevent poor brain health outcomes by focusing on unprocessed foods in the long term,” Dr. Kimberly said.

He cautioned that this was “an observational study and not an interventional study, so we can’t say with certainty that substituting ultra-processed foods with unprocessed foods will definitively improve brain health,” Dr. Kimberly said. “That’s a clinical trial question that has not been done but our results certainly are provocative.”
 

 

 

Consider UPFs in National Guidelines?

The coauthors of an accompanying editorial said the “robust” results from Kimberly and colleagues highlight the “significant role of food processing levels and their relationship with adverse neurologic outcomes, independent of conventional dietary patterns.”

Peipei Gao, MS, with Harvard T.H. Chan School of Public Health, and Zhendong Mei, PhD, with Harvard Medical School, both in Boston, noted that the mechanisms underlying the impact of UPFs on adverse neurologic outcomes “can be attributed not only to their nutritional profiles,” including poor nutrient composition and high glycemic load, “but also to the presence of additives including emulsifiers, colorants, sweeteners, and nitrates/nitrites, which have been associated with disruptions in the gut microbial ecosystem and inflammation.

“Understanding how food processing levels are associated with human health offers a fresh take on the saying ‘you are what you eat,’ ” the editorialists wrote.

This new study, they noted, adds to the evidence by highlighting the link between UPFs and brain health, independent of traditional dietary patterns and “raises questions about whether considerations of UPFs should be included in dietary guidelines, as well as national and global public health policies for improving brain health.”

The editorialists called for large prospective population studies and randomized controlled trials to better understand the link between UPF consumption and brain health. “In addition, mechanistic studies are warranted to identify specific foods, detrimental processes, and additives that play a role in UPFs and their association with neurologic disorders,” they concluded.

Funding for the study was provided by the National Institute of Neurological Disorders and Stroke, the National Institute on Aging, National Institutes of Health, and Department of Health and Human Services. The authors and editorial writers had no relevant disclosures.

A version of this article appeared on Medscape.com.

FROM NEUROLOGY

Liposomal Irinotecan for Pancreatic Cancer: Is It Worth It?

Article Type
Changed
Tue, 05/28/2024 - 15:52

In February, the US Food and Drug Administration (FDA) approved irinotecan liposome (Onivyde) as part of a new regimen for first-line metastatic pancreatic adenocarcinoma called NALIRIFOX.

The main difference between NALIRIFOX and a standard go-to regimen for the indication, modified FOLFIRINOX, is that liposomal irinotecan — irinotecan encased in a lipid nanoparticle — is used instead of free irinotecan.

Trial data suggested a better overall response rate, a slight progression-free survival advantage, and potentially fewer adverse events with the liposomal formulation.

The substitution, however, raises the cost of treatment substantially. According to one estimate, a single cycle of FOLFIRINOX costs about $500 at a body surface area of 2 m2, while the equivalent single cycle of NALIRIFOX costs $7800 — over 15-fold more expensive.
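
As a back-of-the-envelope check on that gap, the short sketch below reruns the cited per-cycle figures. It is purely illustrative: the dollar amounts are the estimates quoted above, and the 12-cycle horizon is an assumed course length for illustration, not a trial-defined duration.

```python
# Back-of-the-envelope comparison of the per-cycle cost estimates cited
# above ($500 vs $7800 at a body surface area of 2 m2). Real-world costs
# vary with payer, dose adjustments, and setting.

folfirinox_per_cycle = 500    # US dollars per cycle, estimate cited above
nalirifox_per_cycle = 7800    # US dollars per cycle, estimate cited above

fold = nalirifox_per_cycle / folfirinox_per_cycle
print(f"Per-cycle fold difference: {fold:.1f}x")        # 15.6x

cycles = 12  # assumed course length, for illustration only
extra = (nalirifox_per_cycle - folfirinox_per_cycle) * cycles
print(f"Added cost over {cycles} cycles: ${extra:,}")   # $87,600
```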

While some oncologists have called the NALIRIFOX regimen a potential new standard first-line treatment for metastatic pancreatic adenocarcinoma, others have expressed serious doubts about whether the potential benefits are worth the extra cost.

“I can’t really see a single scenario where I would recommend NALIRIFOX over FOLFIRINOX,” Ignacio Garrido-Laguna, MD, PhD, a gastrointestinal oncologist and pancreatic cancer researcher at the University of Utah, Salt Lake City, told this news organization. “Most of us in the academic setting have the same take on this.”
 

No Head-to-Head Comparison

Uncertainty surrounding the benefits of NALIRIFOX is largely driven by the fact that it wasn’t compared with FOLFIRINOX in the phase 3 trial that won liposomal irinotecan its approval.

Instead, the 770-patient NAPOLI 3 trial compared NALIRIFOX — which also includes oxaliplatin, fluorouracil, and leucovorin — with a two-drug regimen, nab-paclitaxel and gemcitabine. In the trial, overall survival and other outcomes were moderately better with NALIRIFOX.

Oncologists have said that the true value of the trial is that it conclusively demonstrates that a four-drug regimen is superior to a two-drug regimen for patients who can tolerate the more intensive therapy.

Eileen M. O’Reilly, MD, the senior investigator on NAPOLI 3, made this point when she presented the phase 3 results at the 2023 ASCO annual meeting.

The trial “answers the question of four drugs versus two” for first-line metastatic pancreatic cancer but “does not address the question of NALIRIFOX versus FOLFIRINOX,” said Dr. O’Reilly, a pancreatic and hepatobiliary oncologist and researcher at Memorial Sloan Kettering Cancer Center in New York City.

Comparing them directly in the study “probably wouldn’t have been in the interest of the sponsor,” said Dr. O’Reilly.

With no head-to-head comparison, oncologists have been comparing NAPOLI 3 results with those from PRODIGE 4, the 2011 trial that won FOLFIRINOX its place as a first-line regimen.

Across the two trials, median overall survival was identical for the two regimens — 11.1 months. FOLFIRINOX was associated with a slightly higher 1-year survival rate — 48.4% vs 45.6% with NALIRIFOX.

However, Dr. O’Reilly and her colleagues also highlighted comparisons between the two trials that favored NAPOLI 3.

NAPOLI 3 had no age limit, while PRODIGE 4 subjects were no older than 75 years. Median progression-free survival was 1 month longer among patients receiving NALIRIFOX — 7.4 months vs 6.4 months in PRODIGE 4 — and overall response rates were higher as well — 41.8% in NAPOLI 3 vs 31.6%. Patients receiving NALIRIFOX also had lower rates of grade 3/4 neutropenia (23.8% vs 45.7%, respectively) and peripheral sensory neuropathy (3.5% vs 9.0%, respectively).

The authors explained that the lower rate of neuropathy could be because NALIRIFOX uses a lower dose of oxaliplatin than FOLFIRINOX, at 60 mg/m2 instead of 85 mg/m2.
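
To keep the cross-trial figures straight, the illustrative tally below restates the numbers quoted above side by side. As the text emphasizes, these come from two separate trials in different populations, so the differences are descriptive only, not head-to-head evidence.

```python
# Side-by-side restatement of the published NAPOLI 3 (NALIRIFOX) and
# PRODIGE 4 (FOLFIRINOX) figures quoted above. Cross-trial comparison
# only; these are not head-to-head results.

figures = [
    # (endpoint, NALIRIFOX value, FOLFIRINOX value, unit)
    ("Median overall survival",            11.1, 11.1, "months"),
    ("1-year survival",                    45.6, 48.4, "%"),
    ("Median progression-free survival",    7.4,  6.4, "months"),
    ("Overall response rate",              41.8, 31.6, "%"),
    ("Grade 3/4 neutropenia",              23.8, 45.7, "%"),
    ("Grade 3/4 sensory neuropathy",        3.5,  9.0, "%"),
]

for endpoint, nal, fol, unit in figures:
    print(f"{endpoint}: NALIRIFOX {nal} vs FOLFIRINOX {fol} {unit} "
          f"(difference {nal - fol:+.1f})")
```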

Is It Worth It?

During a presentation of the phase 3 findings last year, study author Zev A. Wainberg, MD, of the University of California, Los Angeles, said the NALIRIFOX regimen can be considered the new reference regimen for first-line treatment of metastatic pancreatic adenocarcinoma.

The study discussant, Laura Goff, MD, MSCI, of Vanderbilt-Ingram Cancer Center, Nashville, Tennessee, agreed that the results support the NALIRIFOX regimen as “the new standard for fit patients.”

However, other oncologists remain skeptical about the benefits of the new regimen over FOLFIRINOX for patients with metastatic pancreatic adenocarcinoma.

In a recent editorial, Dr. Garrido-Laguna and University of Utah gastrointestinal oncologist Christopher Nevala-Plagemann, MD, compared the evidence for both regimens.

The experts pointed out that overall response rates were assessed by investigators in NAPOLI 3 and not by an independent review committee, as in PRODIGE 4, and might have been overestimated.

Although the lack of an age limit was touted as a benefit of NAPOLI 3, Dr. Garrido-Laguna and Dr. Nevala-Plagemann doubt whether enough patients over 75 years old participated to draw any meaningful conclusions about using NALIRIFOX in older, frailer patients. If anything, patients in PRODIGE 4 might have been less fit because, among other things, the trial allowed patients with serum albumin levels < 3 g/dL.

On the adverse event front, the authors highlighted the higher incidences of grade 3 or worse diarrhea with NALIRIFOX (20% vs 12.7%) and questioned if there truly is less neutropenia with NALIRIFOX because high-risk patients in NAPOLI 3 were treated with granulocyte colony-stimulating factor to prevent it. The pair also questioned whether the differences in neuropathy rates between the two trials were big enough to be clinically meaningful.

Insights from a recent meta-analysis may further clarify some of the lingering questions about the efficacy of NALIRIFOX vs FOLFIRINOX.

In the analysis, the team found no meaningful difference in overall and progression-free survival between the two regimens. Differences in rates of peripheral neuropathy and diarrhea were not statistically significant, but NALIRIFOX did carry a statistically significant advantage in lower rates of febrile neutropenia, thrombocytopenia, and vomiting.

The team concluded that “NALIRIFOX and FOLFIRINOX may provide equal efficacy as first-line treatment of metastatic pancreatic cancer, but with different toxicity profiles,” and called for careful patient selection when choosing between the two regimens as well as consideration of financial toxicity.

Dr. Garrido-Laguna had a different take. With the current data, NALIRIFOX does not seem to “add anything substantially different to what we already” have with FOLFIRINOX, he told this news organization. Given that, “we can’t really justify NALIRIFOX over FOLFIRINOX without more of a head-to-head comparison.”

The higher cost of NALIRIFOX, in particular, remains a major drawback.

“We think it would be an economic disservice to our healthcare systems if we used NALIRIFOX instead of FOLFIRINOX for these patients on the basis of [NAPOLI 3] data,” Bishal Gyawali, MD, PhD, and Christopher Booth, MD, gastrointestinal oncologists at Queen’s University in Kingston, Ontario, Canada, said in a recent essay.

Dr. Garrido-Laguna and Dr. Nevala-Plagemann reiterated this concern.

Overall, “NALIRIFOX does not seem to raise the bar but rather exposes patients and healthcare systems to financial toxicities,” Dr. Garrido-Laguna and Dr. Nevala-Plagemann wrote in their review.

NAPOLI 3 was funded by Ipsen and PRODIGE 4 was funded by the government of France. No funding source was reported for the meta-analysis. NAPOLI 3 investigators included Ipsen employees. Dr. O’Reilly disclosed grants or contracts from Ipsen and many other companies. Dr. Garrido-Laguna reported institutional research funding from Bristol Myers Squibb, Novartis, Pfizer, and other companies, but not Ipsen. Dr. Nevala-Plagemann is an advisor for Seagen and reported institutional research funding from Theriva. Dr. Gyawali is a consultant for Vivio Health; Dr. Booth had no disclosures. Two meta-analysis authors reported grants or personal fees from Ipsen as well as ties to other companies.

A version of this article appeared on Medscape.com.
