Topline results for novel drug in ATTR amyloidosis with cardiomyopathy

Article Type
Changed
Tue, 09/20/2022 - 10:42

 

The RNA interference (RNAi) therapeutic patisiran (Onpattro, Alnylam Pharmaceuticals) led to statistically significant improvement in functional capacity and quality of life in adults with transthyretin-mediated (ATTR) amyloidosis with cardiomyopathy in the phase 3 APOLLO-B study, according to topline results released Aug. 3.

“We are thrilled that APOLLO-B successfully met all its major objectives, which we believe for the first time validates the hypothesis that TTR silencing by an RNAi therapeutic can be an effective approach for treating the cardiomyopathy of ATTR amyloidosis,” Pushkal Garg, MD, Alnylam chief medical officer, said in a news release.

The Food and Drug Administration approved patisiran in 2018 for polyneuropathy caused by hereditary ATTR in adults on the basis of results of the APOLLO phase 3 trial, as reported by this news organization.

APOLLO-B enrolled 360 adults with ATTR amyloidosis (hereditary or wild-type) with cardiomyopathy at 69 centers in 21 countries. Participants were randomly assigned 1:1 to patisiran 0.3 mg/kg or placebo, administered intravenously every 3 weeks for 12 months.

The study met the primary endpoint of a statistically significant improvement from baseline in the 6-minute walk test at 12 months compared with placebo (P = .0162), the company said.

The study also met the first secondary endpoint of a statistically significant improvement from baseline in quality of life compared with placebo, as measured by the Kansas City Cardiomyopathy Questionnaire (P = .0397).

The patisiran and placebo groups had similar frequencies of adverse events (91% and 94%, respectively) and serious adverse events (34% and 35%, respectively).

“ATTR amyloidosis with cardiomyopathy is an increasingly recognized cause of heart failure, affecting greater than 250,000 patients around the world. These patients have limited treatment options, and disease progression is common. As such, we are encouraged to see the potential of patisiran to improve the functional capacity and quality of life of patients living with this fatal, multisystem disease,” Dr. Garg said in the release.

Full results from APOLLO-B will be presented at a late-breaker session at the 18th International Symposium on Amyloidosis in September in Heidelberg, Germany.

Based on these results, the company plans to file a supplemental new drug application (sNDA) with the FDA for patisiran for this indication later this year, the release noted.

A version of this article first appeared on Medscape.com.


Regular exercise appears to slow cognitive decline in MCI

Article Type
Changed
Fri, 08/26/2022 - 11:26

Regular exercise, regardless of intensity level, appears to slow cognitive decline in sedentary older adults with mild cognitive impairment (MCI), new research from the largest study of its kind suggests. Topline results from the EXERT trial showed that patients with MCI who participated regularly in either aerobic exercise or stretching/balance/range-of-motion exercises maintained stable global cognitive function over 12 months of follow-up, with no differences between the two types of exercise.

“We’re excited about these findings, because these types of exercises that we’re seeing can protect against cognitive decline are accessible to everyone and therefore scalable to the public,” study investigator Laura Baker, PhD, Wake Forest University School of Medicine, Winston-Salem, N.C., said at a press briefing.

The topline results were presented at the 2022 Alzheimer’s Association International Conference.
 

No decline

The 18-month EXERT trial was designed to definitively answer whether exercise can slow cognitive decline in older adults with amnestic MCI, Dr. Baker reported. Investigators enrolled 296 sedentary men and women with MCI (mean age, about 75 years). All were randomly allocated to either an aerobic exercise group (maintaining a heart rate at about 70%-85% of heart rate reserve) or a stretching and balance group (maintaining a heart rate below 35% of heart rate reserve).

Both groups exercised four times per week for about 30-40 minutes. For the first 12 months, they were supervised by a trainer at the YMCA; they then exercised independently for the final 6 months.

Participants were assessed at baseline and every 6 months. The primary endpoint was change from baseline on the ADAS-Cog-Exec, a validated measure of global cognitive function, at the end of the 12 months of supervised exercise.

During the first 12 months, participants completed over 31,000 sessions of exercise, which is “quite impressive,” Dr. Baker said.

Over the first 12 months, neither the aerobic group nor the stretch/balance group showed a decline on the ADAS-Cog-Exec.

“We saw no group differences, and importantly, no decline after 12 months,” Dr. Baker reported.
 

Supported exercise is ‘crucial’

To help “make sense” of these findings, Dr. Baker noted that 12-month changes in the ADAS-Cog-Exec for the EXERT intervention groups were also compared with a “usual care” cohort of adults matched for age, sex, education, baseline cognitive status, and APOE4 genotype.

In this “apples-to-apples” comparison, the usual care cohort showed the expected decline or worsening of cognitive function over 12 months on the ADAS-Cog-Exec, but the EXERT exercise groups did not.

Dr. Baker noted that both exercise groups received equal amounts of weekly socialization, which may have contributed to the apparent protective effects on the brain.

A greater volume of exercise in EXERT, compared with other trials, may also be a factor: each participant completed more than 100 hours of exercise.

“The take-home message is that an increased amount of either low-intensity or high-intensity exercise for 120-150 minutes per week for 12 months may slow cognitive decline in sedentary older adults with MCI,” Dr. Baker said.

“What’s critical is that this regular exercise must be supported in these older [patients] with MCI. It must be supervised. There has to be some social component,” she added.

In her view, 120 minutes of regular supported exercise for sedentary individuals with MCI “needs to be part of the recommendation for risk reduction.”

Important study

Commenting on the findings, Heather Snyder, PhD, vice president of medical and scientific relations at the Alzheimer’s Association, noted that several studies over the years have suggested that different types of exercise can have benefits on the brain.

“What’s important about this study is that it’s in a population of people that have MCI and are already experiencing memory changes,” Dr. Snyder said.

“The results suggest that engaging in both of these types of exercise may be beneficial for our brain. And given that this is the largest study of its kind in a population of people with MCI, it suggests it’s ‘never too late’ to start exercising,” she added.

Dr. Snyder noted the importance of continuing this work and of following these individuals “over time as well.”

The study was funded by the National Institute on Aging of the National Institutes of Health. Dr. Baker and Dr. Snyder have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 30(9)

Article Source

From AAIC 2022

Citation Override
August 4, 2022

More evidence that ultraprocessed foods are detrimental for the brain

Article Type
Changed
Fri, 08/26/2022 - 11:22

More research suggests that a diet high in ultraprocessed foods (UPFs) is harmful for the aging brain.

Results from the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil), which included participants aged 35 and older, showed that higher intake of UPFs was significantly associated with a faster rate of decline in both executive and global cognitive function.

“Based on these findings, doctors might counsel patients to prefer cooking at home [and] choosing fresher ingredients instead of buying ready-made meals and snacks,” said coinvestigator Natalia Gonçalves, PhD, University of São Paulo, Brazil.

Presented at the Alzheimer’s Association International Conference, the findings align with those from a recent study in Neurology. That study linked a diet high in UPFs to an increased risk for dementia.
 

Increasing worldwide consumption

UPFs are highly manipulated, packed with added ingredients, including sugar, fat, and salt, and low in protein and fiber. Examples of UPFs include soft drinks, chips, chocolate, candy, ice cream, sweetened breakfast cereals, packaged soups, chicken nuggets, hot dogs, and fries.

Over the past 30 years, there has been a steady increase in consumption of UPFs worldwide. They are thought to induce systemic inflammation and oxidative stress and have been linked to a variety of ailments, such as overweight/obesity, cardiovascular disease, and cancer.

UPFs may also be a risk factor for cognitive decline, although data are scarce as to their effects on the brain.

To investigate, Dr. Gonçalves and colleagues evaluated longitudinal data on 10,775 adults (mean age, 50.6 years; 56% women; 55% White) who participated in the ELSA-Brasil study and were assessed in three waves (2008-2010, 2012-2014, and 2017-2019).

Information on diet was obtained via food frequency questionnaires and included information regarding consumption of unprocessed foods, minimally processed foods, and UPFs.

Participants were grouped according to UPF consumption quartiles (lowest to highest). Cognitive performance was evaluated by use of a standardized battery of tests.
 

Significant decline

Using linear mixed effects models that were adjusted for sociodemographic, lifestyle, and clinical variables, the investigators assessed the association of dietary UPFs as a percentage of total daily calories with cognitive performance over time.

During a median follow-up of 8 years, UPF intake in quartiles 2 to 4 (vs. quartile 1) was associated with a significant decline in global cognition (P = .003) and executive function (P = .015).

“Participants who reported consumption of more than 20% of daily calories from ultraprocessed foods had a 28% faster rate of global cognitive decline and a 25% faster decrease of the executive function compared to those who reported eating less than 20% of daily calories from ultraprocessed foods,” Dr. Gonçalves reported.

“Considering a person who eats a total of 2,000 kcal per day, 20% of daily calories from ultraprocessed foods are about two 1.5-ounce bars of KitKat, or five slices of bread, or about a third of an 8.5-ounce package of chips,” she explained.

Dr. Gonçalves noted that the reasons UPFs may harm the brain remain a “very relevant but not yet well-studied topic.”

Hypotheses include secondary effects from cerebrovascular lesions or chronic inflammation processes. More studies are needed to investigate the possible mechanisms that might explain the harm of UPFs to the brain, she said.

‘Troubling but not surprising’

Commenting on the study, Percy Griffin, PhD, director of scientific engagement for the Alzheimer’s Association, said there is “growing evidence that what we eat can impact our brains as we age.”

He added that many previous studies have suggested it is best for the brain to eat a heart-healthy, balanced diet that is low in processed foods and high in whole, nutritious foods, such as vegetables and fruits.

“These new data from the Alzheimer’s Association International Conference suggest eating a large amount of ultraprocessed food can significantly accelerate cognitive decline,” said Dr. Griffin, who was not involved with the research.

He noted that an increase in the availability and consumption of fast foods, processed foods, and UPFs is due to a number of socioeconomic factors, including low access to healthy foods, less time to prepare foods from scratch, and an inability to afford whole foods.

“Ultraprocessed foods make up more than half of American diets. It’s troubling but not surprising to see new data suggesting these foods can significantly accelerate cognitive decline,” Dr. Griffin said.

“The good news is there are steps we can take to reduce risk of cognitive decline as we age. These include eating a balanced diet, exercising regularly, getting good sleep, staying cognitively engaged, protecting from head injury, not smoking, and managing heart health,” he added.

Past research has suggested that the greatest benefit is from engaging in combinations of these lifestyle changes and that they are beneficial at any age, he noted.

“Even if you begin with one or two healthful actions, you’re moving in the right direction. It’s never too early or too late to incorporate these habits into your life,” Dr. Griffin said.

The study had no specific funding. Dr. Gonçalves and Dr. Griffin have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 30(9)


Results from the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil), which included participants aged 35 and older, showed that higher intake of UPFs was significantly associated with a faster rate of decline in both executive and global cognitive function.

“Based on these findings, doctors might counsel patients to prefer cooking at home [and] choosing fresher ingredients instead of buying ready-made meals and snacks,” said coinvestigator Natalia Gonçalves, PhD, University of São Paulo, Brazil.

Presented at the Alzheimer’s Association International Conference, the findings align with those from a recent study in Neurology. That study linked a diet high in UPFs to an increased risk for dementia.
 

Increasing worldwide consumption

UPFs are industrially formulated products packed with added ingredients, including sugar, fat, and salt, and are typically low in protein and fiber. Examples of UPFs include soft drinks, chips, chocolate, candy, ice cream, sweetened breakfast cereals, packaged soups, chicken nuggets, hot dogs, and fries, among many others.

Over the past 30 years, there has been a steady increase in consumption of UPFs worldwide. They are thought to induce systemic inflammation and oxidative stress and have been linked to a variety of ailments, such as overweight/obesity, cardiovascular disease, and cancer.

UPFs may also be a risk factor for cognitive decline, although data are scarce as to their effects on the brain.

To investigate, Dr. Gonçalves and colleagues evaluated longitudinal data on 10,775 adults (mean age, 50.6 years; 56% women; 55% White) who participated in the ELSA-Brasil study. They were evaluated in three waves (2008-2010, 2012-2014, and 2017-2019).

Information on diet was obtained via food frequency questionnaires and included information regarding consumption of unprocessed foods, minimally processed foods, and UPFs.

Participants were grouped according to UPF consumption quartiles (lowest to highest). Cognitive performance was evaluated by use of a standardized battery of tests.
 

Significant decline

Using linear mixed effects models that were adjusted for sociodemographic, lifestyle, and clinical variables, the investigators assessed the association of dietary UPFs as a percentage of total daily calories with cognitive performance over time.

During a median follow-up of 8 years, UPF intake in quartiles 2 to 4 (vs. quartile 1) was associated with a significant decline in global cognition (P = .003) and executive function (P = .015).

“Participants who reported consumption of more than 20% of daily calories from ultraprocessed foods had a 28% faster rate of global cognitive decline and a 25% faster decrease of the executive function compared to those who reported eating less than 20% of daily calories from ultraprocessed foods,” Dr. Gonçalves reported.

“Considering a person who eats a total of 2,000 kcal per day, 20% of daily calories from ultraprocessed foods are about two 1.5-ounce bars of KitKat, or five slices of bread, or about a third of an 8.5-ounce package of chips,” she explained.

Dr. Gonçalves noted that the reasons UPFs may harm the brain remain a “very relevant but not yet well-studied topic.”

Hypotheses include secondary effects from cerebrovascular lesions or chronic inflammation processes. More studies are needed to investigate the possible mechanisms that might explain the harm of UPFs to the brain, she said.

‘Troubling but not surprising’

Commenting on the study, Percy Griffin, PhD, director of scientific engagement for the Alzheimer’s Association, said there is “growing evidence that what we eat can impact our brains as we age.”

He added that many previous studies have suggested that the best diet for the brain is a heart-healthy, balanced one that is low in processed foods and high in whole, nutritious foods, such as vegetables and fruits.

“These new data from the Alzheimer’s Association International Conference suggest eating a large amount of ultraprocessed food can significantly accelerate cognitive decline,” said Dr. Griffin, who was not involved with the research.

He noted that an increase in the availability and consumption of fast foods, processed foods, and UPFs is due to a number of socioeconomic factors, including low access to healthy foods, less time to prepare foods from scratch, and an inability to afford whole foods.

“Ultraprocessed foods make up more than half of American diets. It’s troubling but not surprising to see new data suggesting these foods can significantly accelerate cognitive decline,” Dr. Griffin said.

“The good news is there are steps we can take to reduce risk of cognitive decline as we age. These include eating a balanced diet, exercising regularly, getting good sleep, staying cognitively engaged, protecting from head injury, not smoking, and managing heart health,” he added.

Past research has suggested that the greatest benefit is from engaging in combinations of these lifestyle changes and that they are beneficial at any age, he noted.

“Even if you begin with one or two healthful actions, you’re moving in the right direction. It’s never too early or too late to incorporate these habits into your life,” Dr. Griffin said.

The study had no specific funding. Dr. Gonçalves and Dr. Griffin have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 30(9)
Article Source

From AAIC 2022

August 3, 2022

ICU stays linked to a doubling of dementia risk

Article Type
Changed
Tue, 08/02/2022 - 11:01

Older adults who have spent time in the intensive care unit have double the risk of developing dementia in later years, compared with older adults who have never stayed in the ICU, new research suggests.

“ICU hospitalization may be an underrecognized risk factor for dementia in older adults,” Bryan D. James, PhD, epidemiologist with Rush Alzheimer’s Disease Center, Chicago, said in an interview.

“Health care providers caring for older patients who have experienced a hospitalization for critical illness should be prepared to assess and monitor their patients’ cognitive status as part of their long-term care plan,” Dr. James added.

The findings were presented at the Alzheimer’s Association International Conference.
 

Hidden risk factor?

ICU hospitalization as a result of critical illness has been linked to subsequent cognitive impairment in older patients. However, how ICU hospitalization relates to the long-term risk of developing Alzheimer’s and other age-related dementias is unknown.

“Given the high rate of ICU hospitalization in older persons, especially during the COVID-19 pandemic, it is critical to explore this relationship,” Dr. James said.

The Rush team assessed the impact of an ICU stay on dementia risk in 3,822 older adults (mean age, 77 years) without known dementia at baseline participating in five diverse epidemiologic cohorts.

Participants were checked annually for development of Alzheimer’s and all-type dementia using standardized cognitive assessments.

Over an average of 7.8 years of follow-up, 1,991 adults (52%) had at least one ICU stay; 1,031 (27%) had an ICU stay before study enrollment, and 961 (25%) had one during the study period.

In models adjusted for age, sex, education, and race, ICU hospitalization was associated with 63% higher risk of Alzheimer’s dementia (hazard ratio, 1.63; 95% confidence interval, 1.41-1.88) and 71% higher risk of all-type dementia (HR, 1.71; 95% CI, 1.48-1.97).

In models further adjusted for other health factors, such as vascular risk factors and disease, other chronic medical conditions, and functional disabilities, the association was even stronger: ICU hospitalization was associated with roughly double the risk of Alzheimer’s dementia (HR, 2.10; 95% CI, 1.66-2.65) and all-type dementia (HR, 2.20; 95% CI, 1.75-2.77).

Dr. James said in an interview that it remains unclear why an ICU stay may raise the dementia risk.

“This study was not designed to assess the causes of the higher risk of dementia in persons who had ICU hospitalizations. However, researchers have looked into a number of factors that could account for this increased risk,” he explained.

One is critical illness itself that leads to hospitalization, which could result in damage to the brain; for example, severe COVID-19 has been shown to directly harm the brain, Dr. James said.

He also noted that specific events experienced during ICU stay have been shown to increase risk for cognitive impairment, including infection and severe sepsis, acute dialysis, neurologic dysfunction and delirium, and sedation.
 

Important work

Commenting on the study, Heather Snyder, PhD, vice president of medical & scientific relations at the Alzheimer’s Association, said what’s interesting about the study is that it looks at individuals in the ICU, regardless of the cause.

“The study shows that having some type of health issue that results in some type of ICU stay is associated with an increased risk of declining cognition,” Dr. Snyder said.

“That’s really important,” she said, “especially given the increase in individuals, particularly those 60 and older, who did experience an ICU stay over the last couple of years and understanding how that might impact their long-term risk related to Alzheimer’s and other changes in memory.”

“If an individual has been in the ICU, that should be part of the conversation with their physician or health care provider,” Dr. Snyder advised.

The study was funded by the National Institute on Aging. Dr. James and Dr. Snyder disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Article Source

FROM AAIC 2022

‘Reassuring’ safety data on PPI therapy

Article Type
Changed
Wed, 09/07/2022 - 14:02

In a novel analysis accounting for protopathic bias, proton pump inhibitor (PPI) therapy was not associated with increased risk for death due to digestive disease, cancer, cardiovascular disease (CVD), or any cause, although the jury is out on renal disease.

“There have been several studies suggesting that PPIs can cause long-term health problems and may be associated with increased mortality,” Andrew T. Chan, MD, MPH, gastroenterologist and professor of medicine, Massachusetts General Hospital and Harvard Medical School, both in Boston, told this news organization.

“We conducted this study to examine this issue using data that were better able to account for potential biases in those prior studies. We found that PPIs were generally not associated with an increased risk of mortality,” Dr. Chan said.

The study was published online in Gastroenterology.
 

‘Reassuring’ data

The findings are based on data collected between 2004 and 2018 from 50,156 women enrolled in the Nurses’ Health Study and 21,731 men enrolled in the Health Professionals Follow-up Study.

During the study period, 10,998 women (21.9%) and 2,945 men (13.6%) initiated PPI therapy, and PPI use increased over the study period from 6.1% to 10.0% in women and from 2.5% to 7.0% in men.

The mean age at baseline was 68.9 years for women and 68.0 years for men. During a median follow-up of 13.8 years, a total of 22,125 participants died – 4,592 of cancer, 5,404 of CVD, and 12,129 of other causes.

Unlike other studies, the researchers used a modified lag-time approach to minimize reverse causation (protopathic bias).

“Using this approach, any increased PPI use during the excluded period, which could be due to comorbid conditions prior to death, will not be considered in the quantification of the exposure, and thus, protopathic bias would be avoided,” they explain.

In the initial analysis that did not take into account lag times, PPI users had significantly higher risks for all-cause mortality and mortality due to cancer, CVD, respiratory diseases, and digestive diseases, compared with nonusers.

However, when applying lag times of up to 6 years, the associations were largely attenuated and no longer statistically significant, which “highlights the importance of carefully controlling for the influence of protopathic bias,” the researchers write.

However, despite applying lag times, PPI users remained at a significantly increased risk for mortality due to renal diseases (hazard ratio, 2.45; 95% confidence interval, 1.59-3.78).

The researchers caution, however, that they did not have reliable data on renal diseases and therefore could not adjust for confounding in the models. They call for further studies examining the risk for mortality due to renal diseases in patients using PPI therapy.

The researchers also looked at duration of PPI use and all-cause and cause-specific mortality.

For all-cause mortality and mortality due to cancer, CVD, respiratory diseases, and digestive diseases, the greatest risks were seen mostly in those who reported PPI use for 1-2 years. Longer duration of PPI use did not confer higher risk for mortality for these endpoints.

In contrast, a potential trend toward greater risk with longer duration of PPI use was observed for mortality due to renal disease. The hazard ratio was 1.68 (95% CI, 1.19-2.38) for 1 to 2 years of use and gradually increased to 2.42 (95% CI, 1.23-4.77) for 7 or more years of use.

Notably, when mortality risks were compared among PPI users and histamine H2 receptor antagonist (H2RA) users without lag time, PPI users were at increased risk for all-cause mortality and mortality due to causes other than cancer and CVD, compared with H2RA users.

But again, the strength of the associations decreased after lag time was introduced.

“This confirmed our main findings and suggested PPIs might be preferred over H2RAs in sicker patients with comorbid conditions,” the researchers write.

‘Generally safe’ when needed

Summing up, Dr. Chan said, “We think our results should be reassuring to clinicians that recommending PPIs to patients with appropriate indications will not increase their risk of death. These are generally safe drugs that when used appropriately can be very beneficial.”

Offering perspective on the study, David Johnson, MD, professor of medicine and chief of gastroenterology at Eastern Virginia Medical School, Norfolk, noted that a “major continuing criticism of the allegations of harm by PPIs has been that these most commonly come from retrospective analyses of databases that were not constructed to evaluate these endpoints of harm.”

“Accordingly, these reports have multiple potentials for stratification bias and typically have low odds ratios for supporting the purported causality,” Dr. Johnson told this news organization.

“This is a well-done study design with a prospective database analysis that uses a modified lag-time approach to minimize reverse causation, that is, protopathic bias, which can occur when a pharmaceutical agent is inadvertently prescribed for an early manifestation of a disease that has not yet been diagnostically detected,” Dr. Johnson explained.

Echoing Dr. Chan, Dr. Johnson said the finding that PPI use was not associated with higher risk for all-cause mortality and mortality due to major causes is “reassuring.”

“Recognizably, too many people are taking PPIs chronically when they are not needed. If needed and appropriate, these data on continued use are reassuring,” Dr. Johnson added.

This work was supported by the National Institutes of Health and the Crohn’s and Colitis Foundation. Dr. Chan has consulted for OM1, Bayer Pharma AG, and Pfizer for topics unrelated to this study, as well as Boehringer Ingelheim for litigation related to ranitidine and cancer. Dr. Johnson reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

In a novel analysis accounting for protopathic bias, proton pump inhibitor (PPI) therapy was not associated with increased risk for death due to digestive disease, cancer, cardiovascular disease (CVD), or any cause, although the jury is out on renal disease.

“There have been several studies suggesting that PPIs can cause long-term health problems and may be associated with increased mortality,” Andrew T. Chan, MD, MPH, gastroenterologist and professor of medicine, Massachusetts General Hospital and Harvard Medical School, both in Boston, told this news organization.

“We conducted this study to examine this issue using data that were better able to account for potential biases in those prior studies. We found that PPIs were generally not associated with an increased risk of mortality,” Dr. Chan said.

The study was published online in Gastroenterology.
 

‘Reassuring’ data

The findings are based on data collected between 2004 and 2018 from 50,156 women enrolled in the Nurses’ Health Study and 21,731 men enrolled from the Health Professionals Follow-up Study.

During the study period, 10,998 women (21.9%) and 2,945 men (13.6%) initiated PPI therapy, and PPI use increased over the study period from 6.1% to 10.0% in women and from 2.5% to 7.0% in men.

The mean age at baseline was 68.9 years for women and 68.0 years for men. During a median follow-up of 13.8 years, a total of 22,125 participants died – 4,592 of cancer, 5,404 of CVD, and 12,129 of other causes.

Unlike other studies, the researchers used a modified lag-time approach to minimize reverse causation (protopathic bias).

“Using this approach, any increased PPI use during the excluded period, which could be due to comorbid conditions prior to death, will not be considered in the quantification of the exposure, and thus, protopathic bias would be avoided,” they explain.

In the initial analysis that did not take into account lag times, PPI users had significantly higher risks for all-cause mortality and mortality due to cancer, CVD, respiratory diseases, and digestive diseases, compared with nonusers.

However, when applying lag times of up to 6 years, the associations were largely attenuated and no longer statistically significant, which “highlights the importance of carefully controlling for the influence of protopathic bias,” the researchers write.

However, despite applying lag times, PPI users remained at a significantly increased risk for mortality due to renal diseases (hazard ratio, 2.45; 95% confidence interval, 1.59-3.78).

The researchers caution, however, that they did not have reliable data on renal diseases and therefore could not adjust for confounding in the models. They call for further studies examining the risk for mortality due to renal diseases in patients using PPI therapy.

The researchers also looked at duration of PPI use and all-cause and cause-specific mortality.

For all-cause mortality and mortality due to cancer, CVD, respiratory diseases, and digestive diseases, the greatest risks were seen mostly in those who reported PPI use for 1-2 years. Longer duration of PPI use did not confer higher risk for mortality for these endpoints.

In a novel analysis accounting for protopathic bias, proton pump inhibitor (PPI) therapy was not associated with increased risk for death due to digestive disease, cancer, cardiovascular disease (CVD), or any cause, although the jury is out on renal disease.

“There have been several studies suggesting that PPIs can cause long-term health problems and may be associated with increased mortality,” Andrew T. Chan, MD, MPH, gastroenterologist and professor of medicine, Massachusetts General Hospital and Harvard Medical School, both in Boston, told this news organization.

“We conducted this study to examine this issue using data that were better able to account for potential biases in those prior studies. We found that PPIs were generally not associated with an increased risk of mortality,” Dr. Chan said.

The study was published online in Gastroenterology.
 

‘Reassuring’ data

The findings are based on data collected between 2004 and 2018 from 50,156 women enrolled in the Nurses’ Health Study and 21,731 men enrolled in the Health Professionals Follow-up Study.

During the study period, 10,998 women (21.9%) and 2,945 men (13.6%) initiated PPI therapy; the prevalence of PPI use rose from 6.1% to 10.0% among women and from 2.5% to 7.0% among men.

The mean age at baseline was 68.9 years for women and 68.0 years for men. During a median follow-up of 13.8 years, a total of 22,125 participants died – 4,592 of cancer, 5,404 of CVD, and 12,129 of other causes.

Unlike other studies, the researchers used a modified lag-time approach to minimize reverse causation (protopathic bias).

“Using this approach, any increased PPI use during the excluded period, which could be due to comorbid conditions prior to death, will not be considered in the quantification of the exposure, and thus, protopathic bias would be avoided,” they explain.
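
The mechanics of that exclusion can be sketched in a few lines. This is an illustrative toy, not the study's actual code: any exposure recorded inside the lag window before the event is simply ignored when classifying a participant as a PPI user.

```python
# Illustrative toy of a lag-time exposure classification (not the study's code).
# Exposure recorded within `lag_years` of the event is excluded, so medication
# started for an early, not-yet-diagnosed illness cannot count as exposure.

def classify_exposure(exposure_years, event_year, lag_years):
    """True if any exposure falls outside the lag window before the event."""
    cutoff = event_year - lag_years
    return any(year <= cutoff for year in exposure_years)

# A participant who started PPIs 1 year before death is counted as exposed
# with no lag, but unexposed once a 2-year lag is applied.
print(classify_exposure([2017], event_year=2018, lag_years=0))  # True
print(classify_exposure([2017], event_year=2018, lag_years=2))  # False
```

In other words, a prescription written shortly before death, possibly in response to symptoms of the fatal illness itself, no longer counts as exposure once the lag is applied.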

In the initial analysis that did not take into account lag times, PPI users had significantly higher risks for all-cause mortality and mortality due to cancer, CVD, respiratory diseases, and digestive diseases, compared with nonusers.

However, when applying lag times of up to 6 years, the associations were largely attenuated and no longer statistically significant, which “highlights the importance of carefully controlling for the influence of protopathic bias,” the researchers write.

Even after lag times were applied, PPI users remained at a significantly increased risk for mortality due to renal diseases (hazard ratio, 2.45; 95% confidence interval, 1.59-3.78).

The researchers caution, however, that they did not have reliable data on renal diseases and therefore could not adjust for confounding in the models. They call for further studies examining the risk for mortality due to renal diseases in patients using PPI therapy.

The researchers also looked at duration of PPI use and all-cause and cause-specific mortality.

For all-cause mortality and mortality due to cancer, CVD, respiratory diseases, and digestive diseases, the greatest risks were generally seen in those who reported PPI use for 1-2 years; longer duration of use did not confer higher risk for these endpoints.

In contrast, a potential trend toward greater risk with longer duration of PPI use was observed for mortality due to renal disease. The hazard ratio was 1.68 (95% CI, 1.19-2.38) for 1 to 2 years of use and gradually increased to 2.42 (95% CI, 1.23-4.77) for 7 or more years of use.

Notably, when mortality risks were compared among PPI users and histamine H2 receptor antagonist (H2RA) users without lag time, PPI users were at increased risk for all-cause mortality and mortality due to causes other than cancer and CVD, compared with H2RA users.

But again, the strength of the associations decreased after lag time was introduced.

“This confirmed our main findings and suggested PPIs might be preferred over H2RAs in sicker patients with comorbid conditions,” the researchers write.
 

 

 

‘Generally safe’ when needed

Summing up, Dr. Chan said, “We think our results should be reassuring to clinicians that recommending PPIs to patients with appropriate indications will not increase their risk of death. These are generally safe drugs that when used appropriately can be very beneficial.”

Offering perspective on the study, David Johnson, MD, professor of medicine and chief of gastroenterology at the Eastern Virginia School of Medicine, Norfolk, noted that a “major continuing criticism of the allegations of harm by PPIs has been that these most commonly come from retrospective analyses of databases that were not constructed to evaluate these endpoints of harm.”

“Accordingly, these reports have multiple potentials for stratification bias and typically have low odds ratios for supporting the purported causality,” Dr. Johnson told this news organization.

“This is a well-done study design with a prospective database analysis that uses a modified lag-time approach to minimize reverse causation, that is, protopathic bias, which can occur when a pharmaceutical agent is inadvertently prescribed for an early manifestation of a disease that has not yet been diagnostically detected,” Dr. Johnson explained.

Echoing Dr. Chan, Dr. Johnson said the finding that PPI use was not associated with higher risk for all-cause mortality and mortality due to major causes is “reassuring.”

“Recognizably, too many people are taking PPIs chronically when they are not needed. If needed and appropriate, these data on continued use are reassuring,” Dr. Johnson added.

This work was supported by the National Institutes of Health and the Crohn’s and Colitis Foundation. Dr. Chan has consulted for OM1, Bayer Pharma AG, and Pfizer for topics unrelated to this study, as well as Boehringer Ingelheim for litigation related to ranitidine and cancer. Dr. Johnson reports no relevant financial relationships.

A version of this article first appeared on Medscape.com.


FROM GASTROENTEROLOGY


Smartphone tool helps gauge bowel prep quality before colonoscopy


An artificial intelligence (AI) tool that runs on a smartphone can help patients scheduled for colonoscopy independently evaluate the quality of their bowel cleansing and may offer an alternative approach for assessing bowel preparation before the procedure, especially in the COVID-19 era.

The AI tool is a “manpower-saving” option that reduces the need for nurses to evaluate the quality of bowel preparation, say Wei Gong, MD, Southern Medical University, Shenzhen, China, and colleagues.

Having the tool on a patient’s smartphone means caregivers and nurses would not be required to assess the adequacy of bowel cleansing for patients, which, in turn, would reduce person-to-person contact and the spread of infectious diseases, they add.

The study was published online in the American Journal of Gastroenterology.
 

Better than do-it-yourself evaluation?

The study was conducted at two hospitals in China among consecutive patients prepping for colonoscopy. All participants received standard bowel preparation instructions and were given a leaflet with general guidelines on bowel preparation.

The leaflet included photos representing bowel preparation quality and informed patients that their stool should eventually be a yellowish clear liquid; if any cloudiness (including turbid liquid, particles, or small amounts of feces) is observed in the liquid stool, the bowel preparation is not complete.

All patients were prescribed standard polyethylene glycol electrolyte solution for bowel cleansing 4-6 hours before the colonoscopy.

After consuming the solution, all patients scanned a QR (quick response) code with a smartphone for randomization into an experimental group using the AI-convolutional neural network (AI-CNN) model or a control group using self-evaluation.

The system instructed patients on using the application, photographing their feces, and uploading the images.

After uploading the images, the 730 patients in the AI-CNN group automatically received a “pass” or “not pass” alert, which indicated whether their bowel preparation was adequate or not.
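
The article describes the model only as an image classifier returning “pass” or “not pass.” The toy rule below is a hypothetical stand-in for that feedback step: the real system uses a trained convolutional neural network, and the threshold and pixel logic here are invented purely for illustration.

```python
# Hypothetical stand-in for the study's pass/not-pass feedback. The real system
# uses a trained CNN on uploaded stool photos; this threshold rule and its
# parameters are invented for illustration only.

def bowel_prep_feedback(pixel_values, particle_threshold=80, max_particle_fraction=0.05):
    """Flag 'not pass' when too many dark 'particle' pixels suggest cloudy stool."""
    dark = sum(1 for p in pixel_values if p < particle_threshold)
    fraction = dark / len(pixel_values)
    return "pass" if fraction <= max_particle_fraction else "not pass"

clear_sample = [200] * 98 + [60] * 2    # 2% particle-like pixels
cloudy_sample = [200] * 80 + [60] * 20  # 20% particle-like pixels
print(bowel_prep_feedback(clear_sample))   # pass
print(bowel_prep_feedback(cloudy_sample))  # not pass
```

The point of the sketch is the workflow, not the classifier: the patient gets an automatic adequacy verdict from the uploaded image without a nurse reviewing it.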

The 704 patients in the control group evaluated the adequacy of bowel preparation on their own according to the leaflet instructions after uploading their images.

Colonoscopists and nurses were blinded to the bowel evaluation method that each patient used.

According to the investigators, the “pass” or “not pass” evaluations of bowel preparation adequacy, as reflected in Boston Bowel Preparation Scale (BBPS) scores, were consistent between the two methods (AI-CNN and self-evaluation).

Overall, there were no significant differences between the two methods in mean BBPS scores, polyp detection rate, or adenoma detection rate.

In subgroup analysis, however, the mean BBPS score of patients with “pass” results was significantly higher in the AI-CNN group than in the self-evaluation control group.

This suggests that the AI-CNN model may further improve the quality of bowel preparation in patients exhibiting adequate bowel preparation, the researchers say.

The results also suggest that the AI-CNN model improved bowel preparation quality in the right colon, which may be crucial for the prevention of interval colorectal cancer.

The study did not investigate the user acceptability of the AI-CNN model.

“To improve the model and broaden its application in routine practice, evaluating its convenience, accessibility, aspects that cause users difficulty, and user satisfaction is crucial,” the study team concludes.

The study was supported by the Xiamen Medical Health Science and Technology Project and the Xiamen Chang Gung Hospital Science Project. The authors have declared no relevant financial relationships.

A version of this article first appeared on Medscape.com.



FROM THE AMERICAN JOURNAL OF GASTROENTEROLOGY


‘Alarming’ global rise in NAFLD


The global prevalence of fatty liver disease not caused by alcohol is considerably higher than previously estimated and is continuing to increase at an alarming rate, report researchers from Canada.

Their analysis suggests nearly one-third of the global general adult population has nonalcoholic fatty liver disease (NAFLD), with men much more likely to have the disease than women.

“Greater awareness of NAFLD and the development of cost-effective risk stratification strategies are needed to address the growing burden of NAFLD,” wrote Abdel-Aziz Shaheen, MBBCh, MSc, and colleagues with the University of Calgary (Alta.).

The study was published online in Lancet Gastroenterology and Hepatology.

NAFLD is the most common liver disease worldwide and a leading cause of liver-related illness and death. Yet high-quality reports on the epidemiology of NAFLD at a global level are scarce, and temporal trends in the NAFLD burden, including by gender, had not been described until now.

Using MEDLINE, EMBASE, Scopus, and Web of Science, the Calgary team identified reports on NAFLD incidence and prevalence in study populations representative of the general adult population, published from database inception to May 25, 2021.

In total, 72 publications, with a sample population of more than 1 million adults from 17 countries, were included in the prevalence analysis, and 16 publications, with a sample population of nearly 382,000 individuals from five countries, were included in the incidence analysis.

By their estimates, the overall global prevalence of NAFLD is 32.4%, with prevalence increasing steadily and significantly over time, from 25.5% in or before 2005 to 37.8% in 2016 or later. The overall prevalence is significantly higher in men than in women (39.7% vs. 25.6%).

These figures contrast with recent meta-analyses and systematic reviews that put the global prevalence of NAFLD at between 25.2% and 29.8%. However, these studies had “considerable” limitations with “potentially biased inferences,” Dr. Shaheen and colleagues noted.

By region, their data put the prevalence of NAFLD at 31.6% in Asia, 32.6% in Europe, 47.8% in North America, and 56.8% in Africa.

Dr. Shaheen and colleagues estimate the overall incidence of NAFLD at 46.9 cases per 1,000 person-years, with a higher incidence in men than women (70.8 vs. 29.6 cases per 1,000 person-years), in line with the gender differences in prevalence.
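
The unit itself is simple arithmetic: incident cases divided by accumulated follow-up time, scaled to 1,000 person-years. A minimal sketch with hypothetical counts (not taken from the study):

```python
# Incidence per 1,000 person-years is events divided by total follow-up time,
# scaled by 1,000. The counts below are hypothetical, chosen only to
# illustrate the unit, not taken from the study.

def incidence_per_1000_py(events, person_years):
    return events * 1000 / person_years

# e.g., 469 incident cases over 10,000 person-years of follow-up:
print(incidence_per_1000_py(469, 10_000))  # 46.9
```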

They caution that there was “considerable” heterogeneity between studies in both NAFLD prevalence and incidence (I2 = 99.9%) and few “high-quality” studies.
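
The I2 figure is the standard Higgins-Thompson statistic, which expresses the share of between-study variability not attributable to chance. A minimal sketch of the formula follows; the Q value used is hypothetical, not reported in the paper:

```python
# The Higgins-Thompson I2 statistic: the share of across-study variability
# attributable to heterogeneity rather than chance,
#   I2 = max(0, (Q - df) / Q) * 100,
# where Q is Cochran's statistic and df = number of studies - 1.
# The Q value below is hypothetical, not taken from the paper.

def i_squared(q, n_studies):
    df = n_studies - 1
    return max(0.0, (q - df) / q) * 100

# With a hypothetical Q for the 72 prevalence studies:
print(round(i_squared(q=71000.0, n_studies=72), 1))  # 99.9
```

A value this close to 100% means almost all of the observed variation across studies reflects real between-study differences rather than sampling error, which is why the authors flag it as a limitation.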

Despite these limitations, Dr. Shaheen and colleagues said the rise in NAFLD prevalence “should drive enhanced awareness of NAFLD at the level of primary care physicians, public health specialists, and health policy makers to encourage the development of more effective preventive policies.”

Funding for the study was provided by the Canadian Institutes of Health. Dr. Shaheen has received research grants from Gilead and Intercept, and honoraria from SCOPE Canada.

A version of this article first appeared on Medscape.com.



FROM LANCET GASTROENTEROLOGY AND HEPATOLOGY


Are head-to-head cancer drug trials rigged?


More than half of head-to-head trials testing anticancer drugs against each other have rules on dose modification and growth factor support that favor the experimental drug arm, a new analysis suggests.

“We found it sobering that this practice is so common,” Timothée Olivier, MD, with Geneva University Hospital and the University of California, San Francisco, said in an interview.

Trials may be “rigged” so that the new therapy appears more effective than it would have if the trial had been designed with fairer rules, he explained.

This leaves open the question of whether new drugs are truly superior to older ones or whether the differing outcomes instead reflect more aggressive dosing or growth factor support, the investigators said.

Dr. Olivier, with UCSF coinvestigators Alyson Haslam, PhD, and Vinay Prasad, MD, reported their findings online in the European Journal of Cancer.

‘Highly concerning’

Different drug modification rules or growth factor support guidance may affect the results of randomized controlled trials (RCTs) testing new cancer agents.

For their study, Dr. Olivier and colleagues did a cross-sectional analysis of all 62 head-to-head registration RCTs that led to Food and Drug Administration approval between 2009 and 2021.

All of the trials examined anticancer drugs in the advanced or metastatic setting where a comparison was made between arms regarding either dose modification rules or myeloid growth factors recommendations.

The researchers assessed imbalance in drug modification rules, myeloid growth factor recommendations, or both, according to prespecified rules.

They found that 40 of the 62 trials (65%) had unequal rules for dose modification, granulocyte colony-stimulating factor (G-CSF) use, or both.

Six trials (10%) had rules favoring the control arm, while 34 (55%) had rules favoring the experimental arm. Among these, 50% had unequal drug modification rules, 41% had unequal G-CSF rules, and 9% had both.
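
The reported shares follow from simple arithmetic on the 62 trials; a quick check (percentages rounded to whole numbers):

```python
# The reported shares are plain proportions of the 62 registration trials,
# rounded to whole percentages.

def pct(part, whole):
    return round(100 * part / whole)

print(pct(40, 62))  # 65 -> trials with any unequal rules
print(pct(34, 62))  # 55 -> rules favoring the experimental arm
print(pct(6, 62))   # 10 -> rules favoring the control arm
```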

Dr. Olivier said in an interview the results are “highly concerning because when you are investigating the effect of a new drug, you don’t want to have a false sense of a drug’s effect because of other factors not directly related to the drug’s efficacy.”

“If you introduce unfair rules about dose modification or supporting medication that favors the new drug, then you don’t know if a positive trial is due to the effect of the new drug or to the effect of differential dosing or supporting medication,” he added.
 

Blame industry?

Dr. Olivier said the fact that most registration trials are industry-sponsored is likely the primary explanation of the findings.

“Industry-sponsored trials may be designed so that the new drug has the best chance to get the largest ‘win,’ because this means more market share and more profit for the company that manufactures the drug. This is not a criticism of the industry, which runs on a business model that naturally aims to gain more market share and more profit,” Dr. Olivier said.

“However, it is the role and duty of regulators to reconcile industry incentives with the patients’ best interests, and there is accumulating data showing the regulators are failing to do so,” he added.

Addressing this problem will likely take buy-in from multiple stakeholders.

Awareness of the problem is a first step and understanding the influence of commercial incentives in drug development is also key, Dr. Olivier said.

Institutional review boards and drug regulators could also systematically evaluate drug dosing modification and supportive medication rules before a trial gets underway.

Regulators could also incentivize companies to implement balanced rules between arms by not granting drug approval based on trials suffering from such flaws.

“However, financial conflict of interest is present at many levels of drug development, including in drug regulation,” Dr. Olivier noted.

He pointed to a recent study that found when hematology-oncology medical reviewers working at the FDA leave the agency, more than half end up working or consulting for the pharmaceutical industry.

Dr. Olivier wondered: “How can one fairly and independently appraise a medical intervention if one’s current or future revenue depends on its source?”

The study was funded by Arnold Ventures, through a grant paid to UCSF. Dr. Olivier and Dr. Haslam had no relevant disclosures. Dr. Prasad reported receiving royalties from Arnold Ventures.

A version of this article first appeared on Medscape.com.





Two distinct phenotypes of COVID-related myocarditis emerge

Article Type
Changed
Wed, 07/27/2022 - 08:54

Researchers from France have identified two distinct phenotypes of fulminant COVID-19–related myocarditis in adults, with different clinical presentations, immunologic profiles, and outcomes.

Differentiating between the two bioclinical entities is important for patient management and for further pathophysiological studies, they said.

The first phenotype occurs early (within a few days) in acute SARS-CoV-2 infection, with active viral replication (polymerase chain reaction positive), in adults who do not meet criteria for multisystem inflammatory syndrome in adults (MIS-A–).


In this early phenotype, there is “limited systemic inflammation without skin and mucosal involvement, but myocardial dysfunction is fulminant and frequently associated with large pericardial effusions. These cases more often require extracorporeal membrane oxygenation [ECMO],” Guy Gorochov, MD, PhD, Sorbonne University, Paris, said in an interview.

The second is a delayed, postinfectious, immune-driven phenotype that occurs in adults who meet the criteria for MIS-A (MIS-A+).

This phenotype occurs weeks after SARS-CoV-2 infection, usually beyond detectable active viral replication (PCR–) in the context of specific immune response and severe systemic inflammation with skin and mucosal involvement. Myocardial dysfunction is more progressive and rarely associated with large pericardial effusions, Dr. Gorochov explained.

The study was published in the Journal of the American College of Cardiology.
 

Evolving understanding

The findings are based on a retrospective analysis of 38 patients without a history of COVID-19 vaccination who were admitted to the intensive care unit from March 2020 to June 2021 for suspected fulminant COVID-19 myocarditis.

Patients were confirmed to have SARS-CoV-2 infection by PCR and/or by serologic testing. As noted in other studies, the patients were predominantly young men (66%; median age, 27.5 years). Twenty-five (66%) patients were MIS-A+ and 13 (34%) were MIS-A–.



In general, the MIS-A– patients were sicker and had worse outcomes.

Specifically, compared with the MIS-A+ patients, MIS-A– patients had a shorter time between the onset of COVID-19 symptoms and the development of myocarditis, a shorter time to ICU admission, and more severe presentations assessed using lower left ventricular ejection fraction and sequential organ failure assessment scores.

MIS-A– patients also had higher lactate levels, were more likely to need venoarterial ECMO (92% vs. 16%), had higher ICU mortality (31% vs. 4%), and had a lower probability of survival at 3 months (68% vs. 96%), compared with their MIS-A+ peers.

Immunologic differences

The immunologic profiles of these two distinct clinical phenotypes also differed.

In MIS-A– early-type COVID-19 myocarditis, RNA polymerase III autoantibodies are frequently positive and serum levels of antiviral interferon-alpha and granulocyte-attracting interleukin-8 are elevated.

In contrast, in MIS-A+ delayed-type COVID-19 myocarditis, RNA polymerase III autoantibodies are negative and serum levels of IL-17 and IL-22 are highly elevated.

“We suggest that IL-17 and IL-22 are novel criteria that should help to assess in adults the recently recognized MIS-A,” Dr. Gorochov told this news organization. “It should be tested whether IL-17 and IL-22 are also elevated in children with MIS-C.”

The researchers also observed “extremely” high serum IL-10 levels in both patient groups. This has been previously associated with severe myocardial injury and an increase in the risk for death in severe COVID-19 patients.

The researchers said the phenotypic clustering of patients with fulminant COVID-19–related myocarditis “seems relevant” for their management.

Because of the high risk for evolution toward refractory cardiogenic shock, MIS-A– cases should be “urgently” referred to a center with venoarterial ECMO and closely monitored to prevent a “too-late” cannulation, especially under cardiopulmonary resuscitation, which is known to be associated with poor outcomes, they advised.

They noted that the five patients who died in their series had late venoarterial ECMO implantation, while in multiple organ failure or undergoing resuscitation.

Conversely, the risk for evolution to refractory cardiogenic shock is lower in MIS-A+ cases. However, identifying MIS-A+ cases is “all the more important given that numerous data support the efficacy of corticosteroids and/or intravenous immunoglobulins in MIS-C,” Dr. Gorochov and colleagues wrote.

The authors of a linked editorial said the French team should be “commended on their work in furthering our understanding of fulminant myocarditis related to COVID-19 infection.”

Ajith Nair, MD, Baylor College of Medicine, and Anita Deswal, MD, MPH, University of Texas M.D. Anderson Cancer Center, both in Houston, noted that fulminant myocarditis is rare and can result from either of two mechanisms: viral tropism or an immune-mediated mechanism.

“It remains to be seen whether using antiviral therapy versus immunomodulatory therapy on the basis of clinical and cytokine profiles will yield benefits,” they wrote.

“Fulminant myocarditis invariably requires hemodynamic support and carries a high mortality risk if it is recognized late. However, the long-term prognosis in patients who survive the critical period is favorable, with recovery of myocardial function,” they added.

“This study highlights the ever-shifting understanding of the pathophysiology and therapeutic approaches to fulminant myocarditis,” Dr. Nair and Dr. Deswal concluded.

This research was supported in part by the Foundation of France, French National Research Agency, Sorbonne University, and Clinical Research Hospital. The researchers have filed a patent application based on these results. Dr. Nair and Dr. Deswal have no relevant disclosures.

A version of this article first appeared on Medscape.com.





Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article

‘Case closed’: Bridging thrombolysis remains ‘gold standard’ in stroke thrombectomy

Article Type
Changed
Fri, 08/26/2022 - 11:30

Two new noninferiority trials address the controversial question of whether thrombolytic therapy can be omitted for acute ischemic stroke in patients undergoing endovascular thrombectomy for large-vessel occlusion.

Both trials show better outcomes when standard bridging thrombolytic therapy is used before thrombectomy, with comparable safety.

The results of SWIFT-DIRECT and DIRECT-SAFE were published online June 22 in The Lancet.

“The case appears closed. Bypass intravenous thrombolysis is highly unlikely to be noninferior to standard care by a clinically acceptable margin for most patients,” writes Pooja Khatri, MD, MSc, department of neurology, University of Cincinnati, in a linked comment.
 

SWIFT-DIRECT

SWIFT-DIRECT enrolled 408 patients (median age 72; 51% women) with acute stroke due to large vessel occlusion admitted to stroke centers in Europe and Canada. Half were randomly allocated to thrombectomy alone and half to intravenous alteplase and thrombectomy.

Successful reperfusion was less common in patients who had thrombectomy alone (91% vs. 96%; risk difference −5.1%; 95% confidence interval, −10.2 to 0.0, P = .047).

With combination therapy, more patients achieved functional independence with a modified Rankin scale score of 0-2 at 90 days (65% vs. 57%; adjusted risk difference −7.3%; 95% CI, −16.6 to 2.1; lower limit of one-sided 95% CI, −15.1%, crossing the noninferiority margin of −12%).
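The noninferiority logic behind that result can be sketched in a few lines. This is a generic illustration assuming the textbook Wald formula for an unadjusted risk-difference interval (the trial itself reported an adjusted analysis, which this does not reproduce); the function names are invented for illustration. The decision rule is simply whether the interval's lower limit stays above the prespecified margin, −12 percentage points in SWIFT-DIRECT.

```python
from math import sqrt

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Unadjusted Wald 95% CI for p1 - p2 (textbook formula,
    not the trial's adjusted analysis)."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

def noninferiority_met(ci_lower_bound, margin):
    # Noninferiority is shown only if the interval's lower limit
    # stays above the prespecified margin.
    return ci_lower_bound > margin

# SWIFT-DIRECT reported a one-sided 95% CI lower limit of -15.1
# percentage points against a -12 point margin, so noninferiority
# of thrombectomy alone was not demonstrated.
assert noninferiority_met(-15.1, -12.0) is False
```

With a margin of −12, a lower limit of −15.1 crosses the margin and the noninferiority claim fails, which is exactly the trial's conclusion.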

“Despite a very liberal noninferiority margin and strict inclusion and exclusion criteria aimed at studying a population most likely to benefit from thrombectomy alone, point estimates directionally favored intravenous thrombolysis plus thrombectomy,” Urs Fischer, MD, cochair of the Stroke Center, University Hospital Basel, Switzerland, told this news organization.

“Furthermore, we could demonstrate that overall reperfusion rates were extremely high and yet significantly better in patients receiving intravenous thrombolysis plus thrombectomy than in patients treated with thrombectomy alone, a finding which has not been shown before,” Dr. Fischer said.

There was no significant difference in the risk of symptomatic intracranial bleeding (3% with combination therapy and 2% with thrombectomy alone).

Based on the results, in patients suitable for thrombolysis, skipping it before thrombectomy “is not justified,” the study team concludes.
 

DIRECT-SAFE

DIRECT-SAFE enrolled 295 patients (median age 69; 43% women) with stroke and large vessel occlusion from Australia, New Zealand, China, and Vietnam, with half undergoing direct thrombectomy and half bridging therapy first.

Functional independence (modified Rankin Scale 0-2 or return to baseline at 90 days) was more common in the bridging group (61% vs. 55%).

Safety outcomes were similar between groups. Symptomatic intracerebral hemorrhage occurred in 2 (1%) patients in the direct group and 1 (1%) patient in the bridging group. There were 22 (15%) deaths in the direct group and 24 in the bridging group.

“There has been concern across the world regarding cost of treatment, together with fears of increasing bleeding risk or clot migration with intravenous thrombolytic,” lead investigator Peter Mitchell, MBBS, director, NeuroIntervention Service, The Royal Melbourne Hospital, Parkville, Victoria, Australia, told this news organization.

“We showed that patients in the bridging treatment arm had better outcomes across the entire study, especially in Asian region patients” and therefore remains “the gold standard,” Dr. Mitchell said.

To date, six published trials have addressed this question of endovascular therapy alone or with thrombolysis – SKIP, DIRECT-MT, MR CLEAN NO IV, SWIFT-DIRECT, and DIRECT-SAFE.

Dr. Fischer said the SWIFT-DIRECT study group plans to perform an individual participant data meta-analysis known as Improving Reperfusion Strategies in Ischemic Stroke (IRIS) of all six trials to see whether there are subgroups of patients in whom thrombectomy alone is as effective as thrombolysis plus thrombectomy.

Subgroups of interest, he said, include patients with early ischemic signs on imaging, those at increased risk for hemorrhagic complications, and patients with a high clot burden.

SWIFT-DIRECT was funding by Medtronic and University Hospital Bern. DIRECT-SAFE was funded by Australian National Health and Medical Research Council and Stryker USA. A complete list of author disclosures is available with the original articles.

A version of this article first appeared on Medscape.com.

Issue
Neurology Reviews - 30(9)

Two new noninferiority trials address the controversial question of whether thrombolytic therapy can be omitted for acute ischemic stroke in patients undergoing endovascular thrombectomy for large-vessel occlusion.

Both trials show better outcomes when standard bridging thrombolytic therapy is used before thrombectomy, with comparable safety.

The results of SWIFT-DIRECT and DIRECT-SAFE were published online June 22 in The Lancet.

“The case appears closed. Bypass intravenous thrombolysis is highly unlikely to be noninferior to standard care by a clinically acceptable margin for most patients,” writes Pooja Khatri, MD, MSc, department of neurology, University of Cincinnati, in a linked comment.
 

SWIFT-DIRECT

SWIFT-DIRECT enrolled 408 patients (median age 72; 51% women) with acute stroke due to large vessel occlusion admitted to stroke centers in Europe and Canada. Half were randomly allocated to thrombectomy alone and half to intravenous alteplase and thrombectomy.

Successful reperfusion was less common in patients who had thrombectomy alone (91% vs. 96%; risk difference −5.1%; 95% confidence interval, −10.2 to 0.0, P = .047).

With combination therapy, more patients achieved functional independence with a modified Rankin Scale score of 0-2 at 90 days (65% vs. 57%; adjusted risk difference, −7.3%; 95% CI, −16.6 to 2.1; lower limit of one-sided 95% CI, −15.1%, crossing the noninferiority margin of −12%).

“Despite a very liberal noninferiority margin and strict inclusion and exclusion criteria aimed at studying a population most likely to benefit from thrombectomy alone, point estimates directionally favored intravenous thrombolysis plus thrombectomy,” Urs Fischer, MD, cochair of the Stroke Center, University Hospital Basel, Switzerland, told this news organization.

“Furthermore, we could demonstrate that overall reperfusion rates were extremely high and yet significantly better in patients receiving intravenous thrombolysis plus thrombectomy than in patients treated with thrombectomy alone, a finding which has not been shown before,” Dr. Fischer said.

There was no significant difference in the risk of symptomatic intracranial bleeding (3% with combination therapy and 2% with thrombectomy alone).

Based on the results, in patients suitable for thrombolysis, skipping it before thrombectomy “is not justified,” the study team concludes.
 

DIRECT-SAFE

DIRECT-SAFE enrolled 295 patients (median age 69; 43% women) with stroke and large vessel occlusion from Australia, New Zealand, China, and Vietnam, with half undergoing direct thrombectomy and half bridging therapy first.

Functional independence (modified Rankin Scale 0-2 or return to baseline at 90 days) was more common in the bridging group (61% vs. 55%).

Safety outcomes were similar between groups. Symptomatic intracerebral hemorrhage occurred in 2 (1%) patients in the direct group and 1 (1%) patient in the bridging group. There were 22 (15%) deaths in the direct group and 24 in the bridging group.

“There has been concern across the world regarding cost of treatment, together with fears of increasing bleeding risk or clot migration with intravenous thrombolytic,” lead investigator Peter Mitchell, MBBS, director, NeuroIntervention Service, The Royal Melbourne Hospital, Parkville, Victoria, Australia, told this news organization.

“We showed that patients in the bridging treatment arm had better outcomes across the entire study, especially in Asian region patients,” and bridging therapy therefore remains “the gold standard,” Dr. Mitchell said.

To date, six published trials have addressed this question of endovascular therapy alone or with thrombolysis: SKIP, DIRECT-MT, DEVT, MR CLEAN NO IV, SWIFT-DIRECT, and DIRECT-SAFE.

Dr. Fischer said the SWIFT-DIRECT study group plans to perform an individual participant data meta-analysis known as Improving Reperfusion Strategies in Ischemic Stroke (IRIS) of all six trials to see whether there are subgroups of patients in whom thrombectomy alone is as effective as thrombolysis plus thrombectomy.

Subgroups of interest, he said, include patients with early ischemic signs on imaging, those at increased risk for hemorrhagic complications, and patients with a high clot burden.

SWIFT-DIRECT was funded by Medtronic and University Hospital Bern. DIRECT-SAFE was funded by the Australian National Health and Medical Research Council and Stryker USA. A complete list of author disclosures is available with the original articles.

A version of this article first appeared on Medscape.com.

FROM THE LANCET

July 25, 2022