Vaccine Against Urinary Tract Infections in Development

Urinary tract infections are among the most common bacterial infections. They can be painful, require antibiotic treatments, and recur in 20%-30% of cases. With the risk for the emergence or increase of resistance to antibiotics, it is important to search for potential therapeutic alternatives to treat or prevent urinary tract infections.

The MV140 Vaccine

The MV140 vaccine is produced by the Spanish pharmaceutical company Immunotek. MV140, known as Uromune, consists of a suspension of whole heat-inactivated bacteria in glycerol, sodium chloride, an artificial pineapple flavor, and water. It includes equal percentages of strains from four bacterial species (V121 Escherichia coli, V113 Klebsiella pneumoniae, V125 Enterococcus faecalis, and V127 Proteus vulgaris). MV140 is administered sublingually by spraying two 100-µL doses daily for 3 months.

The vaccine is in phase 2-3 of development. It is available under special access programs outside of marketing authorization in 26 countries, including Spain, Portugal, the United Kingdom, Lithuania, the Netherlands, Sweden, Norway, Australia, New Zealand, and Chile. Recently, MV140 was approved in Mexico and the Dominican Republic and submitted to Health Canada for registration.

A randomized study published in 2022 showed the vaccine’s efficacy in preventing urinary tract infections over 9 months. In total, 240 women with a urinary tract infection received MV140 for either 3 or 6 months or a placebo for 6 months. The primary outcome was the number of urinary tract infection episodes during the 9-month study period after vaccination.

In this pivotal study, MV140 administration for 3 or 6 months was associated with a significant reduction in the median number of urinary tract infection episodes during the 9-month efficacy period (0.0 vs 3.0 with placebo). The median time to the first urinary tract infection after 3 months of treatment was 275.0 days in the MV140 groups compared with 48.0 days in the placebo group.

Nine-Year Follow-Up

On April 6 at the 2024 congress of the European Association of Urology, urologists from the Royal Berkshire NHS Foundation Trust presented the results of a study evaluating the MV140 vaccine spray for long-term prevention of bacterial urinary tract infections.

This was a prospective cohort study involving 89 participants (72 women and 17 men) older than 18 years with recurrent urinary tract infections who received a course of MV140 for 3 months. Participants had no urinary tract infection when offered the vaccine and had no other urinary abnormalities (such as tumors, stones, or kidney infections).

Postvaccination follow-up was conducted over a 9-year period, during which researchers analyzed the data from the electronic health records of their initial cohort. They queried participants about the occurrence of urinary tract infections since receiving the vaccine and about potential related side effects. Thus, the results were self-reported.

Long-Term Efficacy 

In this cohort of 89 participants, 48 (54%) reported having no infections during the 9-year follow-up. The average infection-free period was 54.7 months (4.5 years), with 56.7 months for women and 44.3 months for men. No vaccine-related side effects were observed.

The study’s limitations included the small number of participants and the collection of self-reported data. Furthermore, all cases were simple urinary tract infections without complications.

The authors concluded that “9 years after first receiving the sublingual spray MV140 vaccine, 54% of participants remained free from urinary tract infection.” For them, “this vaccine is safe in the long-term, and our participants reported fewer urinary tract infections and, if any, they were less severe.”

Vaccination could thus be an alternative to antibiotic treatments and could help combat the emergence of antibiotic resistance. The full study results should be published by the end of 2024.

Other studies are planned to evaluate the efficacy and safety of the MV140 vaccine in older patients residing in long-term care homes, in children suffering from acute urinary tract infections, and in adults suffering from complicated acute urinary tract infections (for example, patients with a catheter or with a neurogenic bladder). 
 

This story was translated from JIM, which is part of the Medscape Professional Network, using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Will Changing the Term Obesity Reduce Stigma?

The Lancet Diabetes & Endocrinology’s Commission for the Definition and Diagnosis of Clinical Obesity will soon publish criteria for distinguishing between clinical obesity and other preclinical phases. The criteria are intended to limit the negative connotations and misunderstandings associated with the word obesity and to clearly convey the idea that it is a disease and not just a condition that increases the risk for other pathologies.

One of the two Latin American experts on the 60-member commission, Ricardo Cohen, MD, PhD, coordinator of the Obesity and Diabetes Center at the Oswaldo Cruz German Hospital in São Paulo, Brazil, discussed this effort with this news organization.

The proposal being finalized would acknowledge a preclinical stage of obesity characterized by alterations in cells or tissues that lead to changes in organ structure, but not function. This stage can be measured by body mass index (BMI) or waist circumference.

The clinical stage occurs when “obesity already affects [the function of] organs, tissues, and functions like mobility. Here, it is a disease per se. And an active disease requires treatment,” said Dr. Cohen. The health risks associated with excess adiposity have already materialized and can be objectively documented through specific signs and symptoms.

Various experts from Latin America who participated in the XV Congress of the Latin American Obesity Societies (FLASO) and II Paraguayan Obesity Congress expressed to this news organization their reservations about the proposed name change and its practical effects. They highlighted the pros and cons of various terminologies that had been considered in recent years.

“Stigma undoubtedly exists. There’s also no doubt that this stigma and daily pressure on a person’s self-esteem influence behavior and condition a poor future clinical outcome because they promote denial of the disease. Healthcare professionals can make these mistakes. But I’m not sure that changing the name of a known disease will make a difference,” said Rafael Figueredo Grijalba, MD, president of FLASO and director of the Nutrition program at the Faculty of Health Sciences of the Nuestra Señora de la Asunción Catholic University in Paraguay.

Spotlight on Adiposity 

An alternative term for obesity proposed in 2016 by what is now the American Association of Clinical Endocrinology and by the American College of Endocrinology is “adiposity-based chronic disease (ABCD).” This designation “is on the right track,” said Violeta Jiménez, MD, internal medicine and endocrinology specialist at the Clinical Hospital of the National University of Asunción and the Comprehensive Diabetes Care Network of the Paraguay Social Security Institute.

The word obese is perceived as an insult, and the health impact of obesity is related to the quantity, distribution, and function of adipose tissue, said Dr. Jiménez. The BMI, the most used parameter in practice to determine overweight and obesity, “does not predict excess adiposity or determine a disease here and now, just as waist circumference does not confirm the condition.” 

Will the public be attracted to ABCD? And what disease do these initials refer to? Dr. Jiménez asked. “What I like about the term ABCD is that it is not solely based on weight. It brings up the issue that a person who may not have obesity by BMI has adiposity and therefore has a disease brewing inside them.”

“Any obesity denomination is useful as long as the impact of comorbidities is taken into account, as well as the fact that it is not an aesthetic problem and treatment will be escalated aiming to benefit not only weight loss but also comorbidities,” said Paul Camperos Sánchez, MD, internal medicine and endocrinology specialist and head of research at La Trinidad Teaching Medical Center in Caracas, Venezuela, and former president of the Venezuelan Association for the Study of Obesity. 

Dr. Camperos Sánchez added that the classification of overweight and obesity into grades on the basis of BMI, which is recognized by the World Health Organization, “is the most known and for me remains the most comfortable. I will accept any other approach, but in my clinical practice, I continue to do it this way.” 

Fundamentally, knowledge can reduce social stigma and even prejudice from the medical community itself. “We must be respectful and compassionate and understand well what we are treating and the best way to approach each patient with realistic expectations. Evaluate whether, in addition to medication or intensive lifestyle changes, behavioral interventions or physiotherapy are required. If you don’t manage it well and find it challenging, perhaps that’s why we see so much stigmatization or humiliation of the patient. And that has nothing to do with the name [of the disease],” said Dr. Camperos Sánchez.

 

 

‘Biological Injustices’

Julio Montero, MD, nutritionist, president of the Argentine Society of Obesity and Eating Disorders, and former president of FLASO, told this news organization that the topic of nomenclatures “provides a lot of grounds for debate,” but he prefers the term “clinical obesity” because it has a medical meaning, is appropriate for statistical purposes, better conveys the concept of obesity as a disease, and distinguishes patients who have high weight or a spherical figure but may be free of weight-dependent conditions.

“Clinical obesity suggests that it is a person with high weight who has health problems and life expectancy issues related to excessive corpulence (weight-fat). The addition of the adjective clinical suggests that the patient has been evaluated by phenotype, fat distribution, hypertension, blood glucose, triglycerides, apnea, cardiac dilation, and mechanical problems, and based on that analysis, the diagnosis has been made,” said Dr. Montero.

Other positive aspects of the designation are that it does not assume comorbidities are a direct consequence of adipose tissue accumulation, because “lean mass often increases in patients with obesity, and diet and sedentary lifestyle also have an influence,” and that it does not exclude people with central obesity. On the other hand, it does not propose a specific weight or fat level that defines the disease, as BMI does (BMI defines obesity but not its clinical consequences).

Regarding the proposed term ABCD, Dr. Montero pointed out that it centers the diagnosis on the idea that adipose tissue and adipocyte function are the protagonists of the disease in question, even though chronic metabolic diseases such as gout, porphyrias, and type 1 diabetes do not depend on adiposity.

“ABCD also involves some degree of biological injustice, since femorogluteal adiposity (aside from aesthetic problems and excluding possible mechanical effects) is normal and healthy during pregnancy, lactation, growth, or situations of food scarcity risk, among others. Besides, it is an expression that is difficult to interpret for the untrained professional and even more so for communication to the population,” Dr. Montero concluded.

Dr. Cohen, Dr. Figueredo Grijalba, Dr. Jiménez, Dr. Camperos Sánchez, and Dr. Montero declared no relevant financial conflicts of interest. 

This story was translated from the Medscape Spanish edition using several editorial tools, including AI, as part of the process. Human editors reviewed this content before publication. A version of this article appeared on Medscape.com.

Probiotics Emerge as Promising Intervention in Cirrhosis

Probiotics appear to be beneficial for patients with cirrhosis, showing a reversal of hepatic encephalopathy (HE), improvement in liver function measures, and regulation of gut dysbiosis, according to a systematic review and meta-analysis.

They also improve quality of life and have a favorable safety profile, adding to their potential as a promising intervention for treating cirrhosis, the study authors wrote.

“As currently one of the top 10 leading causes of death globally, cirrhosis imposes a great health burden in many countries,” wrote lead author Xing Yang of the Health Management Research Institute at the People’s Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences in Nanning, China, and colleagues.

“The burden has escalated at the worldwide level since 1990, partly because of population growth and aging,” the authors wrote. “Thus, it is meaningful to explore effective treatments for reversing cirrhosis and preventing severe liver function and even systemic damage.”

The study was published online in Frontiers in Medicine.
 

Analyzing Probiotic Trials

The researchers conducted a systematic review and meta-analysis of 30 randomized controlled trials among 2084 adults with cirrhosis, comparing the effects of probiotic intervention and control treatments, including placebo, no treatment, standard care, or active controls such as lactulose and rifaximin. The studies spanned 14 countries and included 1049 patients in the probiotic groups and 1035 in the control groups.

The research team calculated risk ratios (RRs) or standardized mean difference (SMD) for outcomes such as HE reversal, Model for End-Stage Liver Disease (MELD) scores, safety and tolerability of probiotics, liver function, and quality of life.

Among 17 studies involving patients with different stages of HE, probiotics significantly reversed minimal HE (RR, 1.54) and improved HE (RR, 1.94) compared with the control group. In particular, the probiotic VSL#3, which contains Streptococcus, Bifidobacterium, and Lactobacillus, produced more significant HE improvement (RR, 1.44) than other types of probiotics.

In addition, probiotics appeared to improve liver function by reducing MELD scores (SMD, −0.57) but didn’t show a difference in other liver function parameters. There were numerical but not significant reductions in mortality and serum inflammatory cytokine expression, including endotoxin, interleukin-6, and tumor necrosis factor-alpha.

Probiotics also improved quality-of-life scores (SMD, 0.51) and gut flora (SMD, 1.67). For gut flora, Lactobacillus counts were significantly higher after probiotic treatment, but there was no significant difference for Bifidobacterium, Enterococcus, Bacteroidaceae, or Fusobacterium.

Finally, compared with control treatments, including placebo, standard therapy, and active controls such as lactulose and rifaximin, probiotics showed higher safety and tolerability profiles, causing a significantly lower incidence of serious adverse events (RR, 0.71).

Longer intervention times reduced the risk for overt HE development, hospitalization, and infections compared with shorter intervention times.

“Probiotics contribute to the reduction of ammonia levels and the improvement of neuropsychometric or neurophysiological status, leading to the reversal of HE associated with cirrhosis,” the study authors wrote. “Moreover, they induce favorable changes in gut flora and quality of life. Therefore, probiotics emerge as a promising intervention for reversing the onset of cirrhosis and preventing disease progression.”
 

Considering Variables

The authors noted several limitations, including a high or unclear risk for bias in 28 studies and the lack of data on the intervention effect for various types of probiotics or treatment durations.

“Overall, despite a number of methodological concerns, the study shows that probiotics can improve some disease markers in cirrhosis,” Phillipp Hartmann, MD, assistant professor of pediatric gastroenterology, hepatology, and nutrition at the University of California, San Diego, said in an interview.

“One of the methodological concerns is that the authors compared probiotics with a multitude of different treatments, including fiber and lactulose (which are both prebiotics), rifaximin (which is an antibiotic), standard of care, placebo, or no therapy,” he said. “This might contribute to the sometimes-contradictory findings between the different studies. The ideal comparison would be a specific probiotic formulation versus a placebo to understand what the probiotic actually does.”

Dr. Hartmann, who wasn’t involved with this study, has published a review on the potential of probiotics, prebiotics, and synbiotics in liver disease. He and colleagues noted the mechanisms that improve a disrupted intestinal barrier, microbial translocation, and altered gut microbiome metabolism.

“Over the last few years, we and others have studied the intestinal microbiota in various liver diseases, including alcohol-associated liver disease and metabolic dysfunction-associated steatotic liver disease,” he said. “Essentially, all studies support the notion that probiotics improve the microbial structure in the gut by increasing the beneficial and decreasing the potentially pathogenic microbes.”

However, probiotics and supplements are unregulated, Dr. Hartmann noted. Many different probiotic mixes and dosages have been tested in clinical trials, and additional studies are needed to determine the best formulations and dosages.

“Usually, the best outcomes can be achieved with a higher number of strains included in the probiotic formulation (10-30+) and a higher number of colony-forming units at 30-50+ billion per day,” he said.

The study was supported by funds from the Science and Technology Major Project of Guangxi, Guangxi Key Research and Development Program, and Natural Science Foundation of Guangxi Zhuang Autonomous Region. The authors declared no conflicts of interest. Dr. Hartmann reported no relevant disclosures.

A version of this article appeared on Medscape.com.

New Genetic Variant May Guard Against Alzheimer’s in High-Risk Individuals

A new genetic variant in individuals who are APOE4 carriers is linked to a 70% reduction in the risk for Alzheimer’s disease, new research suggests.

The variant occurs on the fibronectin 1 (FN1) gene, which expresses fibronectin, an adhesive glycoprotein that lines the blood vessels at the blood-brain barrier and controls substances that move in and out of the brain.

While fibronectin is normally present in the blood-brain barrier in small amounts, individuals with Alzheimer’s disease tend to have it in excess. Normally, patients with Alzheimer’s disease have amyloid deposits that collect in the brain, but those with the FN1 variant appear to have the ability to clear amyloid from the brain before symptoms begin.

The researchers estimate that 1%-3% of APOE4 carriers in the United States — roughly 200,000-620,000 people — may have the protective mutation.
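As a rough plausibility check (a back-calculation not given in the paper, which assumes both ends of the range refer to the same carrier pool), the figures imply a US APOE4-carrier population of roughly 20 million:

\[
\frac{200{,}000}{0.01} = 20 \text{ million}, \qquad \frac{620{,}000}{0.03} \approx 20.7 \text{ million}
\]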

“Alzheimer’s disease may get started with amyloid deposits in the brain, but the disease manifestations are the result of changes that happen after the deposits appear,” Caghan Kizil, PhD, of Columbia University Vagelos College of Physicians and Surgeons in New York City, and a co-leader of the study, said in a press release.

The findings were published online in Acta Neuropathologica.
 

Combing Genetic Data

To find potentially protective Alzheimer’s disease variants, the investigators sequenced the genomes of more than 3500 APOE4 carriers older than 70 years with and without Alzheimer’s disease from various ethnic backgrounds.

They identified two variants on the FN1 gene, rs116558455 and rs140926439, present in healthy APOE4 carriers, that protected the APOE4 carriers against Alzheimer’s disease.

After Dr. Kizil and colleagues published their findings in a preprint, another research group that included investigators from Stanford and Washington Universities replicated the Columbia results in an independent sample of more than 7000 APOE4 carriers aged 60 years or older who were of European descent and identified the same FN1 variant.

The two research groups then combined their data on 11,000 participants and found that the FN1 variant rs140926439 was associated with a significantly reduced risk for Alzheimer’s disease in APOE4 carriers (odds ratio, 0.29; P = .014). A secondary analysis showed that the variant delayed Alzheimer’s disease symptom onset by 3.4 years (P = .025).
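This pooled odds ratio lines up with the roughly 70% risk reduction cited earlier in the article; treating the odds ratio as an approximate relative risk (an approximation on the reader’s part, reasonable when the outcome is not overly common in the sample) gives:

\[
1 - \mathrm{OR} = 1 - 0.29 = 0.71 \approx 70\%\ \text{lower odds of Alzheimer's disease in carriers of the variant}
\]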

The investigators hope to use these findings to develop therapies to protect APOE4 carriers against Alzheimer’s disease.

“Anything that reduces excess fibronectin should provide some protection, and a drug that does this could be a significant step forward in the fight against this debilitating condition,” Dr. Kizil said.

Study limitations included a lack of longitudinal data on the relationship between amyloid concentration and fibronectin and the fact that investigators conducted the studies in clinically assessed individuals. Given the rare occurrence of the FN1 mutation, researchers do not have neuropathological assessments of study participants with the variant.

The study was funded by the National Institute on Aging, the Schaefer Research Scholars Program Award, Taub Institute Grants for Emerging Research, the National Institute of General Medical Sciences, and the Thompson Family Foundation Program for Accelerated Medicine Exploration in Alzheimer’s Disease and Related Disorders of the Nervous System. There were no disclosures reported.

A version of this article appeared on Medscape.com.


Three Conditions for Which Cannabis Appears to Help

Article Type
Changed
Wed, 05/08/2024 - 10:53

The utility of cannabinoids to treat most medical conditions remains uncertain at best, but for at least three indications the data lean in favor of effectiveness, Ellie Grossman, MD, MPH, told attendees recently at the 2024 American College of Physicians Internal Medicine meeting.

Those are neuropathic pain, chemotherapy-induced nausea or vomiting, and spasticity in people with multiple sclerosis, said Dr. Grossman, an instructor at Harvard Medical School in Boston and medical director for primary care/behavioral health integration at Cambridge Health Alliance in Somerville, Massachusetts.

Dearth of Research Persists

Research is sorely lacking and of low quality in the field for many reasons, Dr. Grossman said. Most of the products tested come from outside the United States and often are synthetic and taken orally — which does not match the real-world use when patients go to dispensaries for cannabis derived directly from plants (or the plant product itself). And studies often rely on self-report.

Chronic pain is by far the top reason patients say they use medical cannabis, Dr. Grossman said. A Cochrane review of 16 studies found only that the potential benefits of cannabis may outweigh the potential harms for chronic neuropathic pain.
 

No Evidence in OUD

Dr. Grossman said she is frequently asked if cannabis can help people quit taking opioids. The answer seems to be no. A study published earlier this year found no difference in rates of opioid overdose between states with legalized medical or recreational cannabis and states with no such laws. “It seems like it doesn’t do anything to help us with our opioid problem,” she said.

Nor does high-quality evidence exist showing use of cannabis can improve sleep, she said. A 2022 systematic review found that fewer than half of studies showed the substance useful for sleep outcomes. “Where studies were positive, it was in people who had chronic pain,” Dr. Grossman noted. Research indicates cannabis may have substantial benefit for chronic pain compared with placebo.
 

Potential Harms

If the medical benefits of cannabis are murky, the evidence for its potential harms, at least in the short term, is clearer, according to Dr. Grossman. A simplified guideline for prescribing medical cannabinoids in primary care includes sedation, feeling high, dizziness, speech disorders, muscle twitching, hypotension, and several other conditions among the potential hazards of the drug.

But the potential for long-term harm is uncertain. “All the evidence comes from people who have been using it for recreational reasons,” where there may be co-use of tobacco, self-reported outcomes, and recall bias, she said. The characteristics of people using cannabis recreationally often differ from those using it medicinally.
 

Use With Other Controlled Substances

Dr. Grossman said clinicians should consider whether the co-use of cannabis and other controlled substances, such as benzodiazepines, opioids, or Adderall, raises the potential risks associated with those drugs. “Ultimately it comes down to talking to your patients,” she said. If a toxicity screen shows the presence of controlled substances, ask about their experience with the drugs they are using and let them know your main concern is their safety.

Dr. Grossman reported no relevant financial conflicts of interest.

A version of this article appeared on Medscape.com.


Antidepressants and Dementia Risk: Reassuring Data

Article Type
Changed
Mon, 05/06/2024 - 17:07

 

TOPLINE:

Antidepressants are not associated with an increased risk for dementia, accelerated cognitive decline, or atrophy of white and gray matter in adults with no signs of cognitive impairment, new research suggests.

METHODOLOGY:

  • Investigators studied 5511 individuals (58% women; mean age, 71 years) from the Rotterdam study, an ongoing prospective population-based cohort study.
  • Participants were free from dementia at baseline, and incident dementia was monitored from baseline until 2018 with repeated cognitive assessments using the Mini-Mental State Examination (MMSE) and the Geriatric Mental Schedule, as well as MRIs.
  • Information on participants’ antidepressant use was extracted from pharmacy records from 1992 until baseline (2002-2008).
  • During a mean follow-up of 10 years, 12% of participants developed dementia.

TAKEAWAY:

  • Overall, 17% of participants had used antidepressants during the roughly 10-year period prior to baseline, and 4.1% were still using antidepressants at baseline.
  • Medication use at baseline was more common in women than in men (21% vs 18%), and use increased with age, from 2.1% in participants aged between 45 and 50 years to 4.5% in those older than 80 years.
  • After adjustment for confounders, there was no association between antidepressant use and dementia risk (hazard ratio [HR], 1.14; 95% CI, 0.92-1.41), accelerated cognitive decline, or atrophy of white and gray matter.
  • However, tricyclic antidepressant use was associated with an increased dementia risk (HR, 1.36; 95% CI, 1.01-1.83) relative to the use of selective serotonin reuptake inhibitors (HR, 1.12; 95% CI, 0.81-1.54); a brief note on reading these intervals follows this list.
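As a quick interpretive note (a general rule of thumb, not part of the study report): a hazard ratio whose 95% CI includes 1 does not reach statistical significance at the conventional 5% level, whereas one whose CI excludes 1 does.

\[
\mathrm{HR}_{\text{any antidepressant}} = 1.14,\ 95\%\ \mathrm{CI}\ 0.92\text{–}1.41 \ni 1 \;\Rightarrow\; \text{no significant association}
\]

\[
\mathrm{HR}_{\text{tricyclics}} = 1.36,\ 95\%\ \mathrm{CI}\ 1.01\text{–}1.83 \not\ni 1 \;\Rightarrow\; \text{significant at } \alpha = 0.05
\]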

IN PRACTICE:

“Although prescription of antidepressant medication in older individuals, in particular those with some cognitive impairment, may have acute symptomatic anticholinergic effects that warrant consideration in clinical practice, our results show that long-term antidepressant use does not have lasting effects on cognition or brain health in older adults without indication of cognitive impairment,” the authors wrote.

SOURCE:

Frank J. Wolters, MD, of the Department of Epidemiology and the Department of Radiology and Nuclear Medicine and Alzheimer Center, Erasmus University Medical Center, Rotterdam, the Netherlands, was the senior author on this study that was published online in Alzheimer’s and Dementia.

LIMITATIONS:

Limitations included the concern that although exclusion of participants with MMSE < 26 at baseline prevented reversed causation (ie, antidepressant use in response to depression during the prodromal phase of dementia), it may have introduced selection bias by disregarding the effects of antidepressant use prior to baseline and excluding participants with lower education.

DISCLOSURES:

This study was conducted as part of the Netherlands Consortium of Dementia Cohorts, which receives funding in the context of Deltaplan Dementie from ZonMW Memorabel and Alzheimer Nederland. Further funding was also obtained from the Stichting Erasmus Trustfonds. This study was further supported by a 2020 NARSAD Young Investigator Grant from the Brain & Behavior Research Foundation. The authors reported no conflicts of interest or relevant financial relationships.

A version of this article appeared on Medscape.com.


Does ‘Brain Training’ Really Improve Cognition and Forestall Cognitive Decline?

Article Type
Changed
Wed, 05/08/2024 - 10:53

The concept that cognitive health can be preserved or improved is often expressed as “use it or lose it.” Numerous modifiable risk factors are associated with “losing” cognitive abilities with age, and a cognitively active lifestyle may have a protective effect.

But what is a “cognitively active lifestyle” — do crosswords and Sudoku count?

One popular approach is “brain training.” While not a scientific term with an established definition, it “typically refers to tasks or drills that are designed to strengthen specific aspects of one’s cognitive function,” explained Yuko Hara, PhD, director of Aging and Alzheimer’s Prevention at the Alzheimer’s Drug Discovery Foundation.

Manuel Montero-Odasso, MD, PhD, director of the Gait and Brain Lab, Parkwood Institute, London, Ontario, Canada, elaborated: “Cognitive training involves performing a definitive task or set of tasks where you increase attentional demands to improve focus and concentration and memory. You try to execute the new things that you’ve learned and to remember them.”

In a commentary published by this news organization in 2022, neuroscientist Michael Merzenich, PhD, professor emeritus at University of California San Francisco, said that growing a person’s cognitive reserve and actively managing brain health can play an important role in preventing or delaying Alzheimer’s disease. Important components of this include brain training and physical exercise.
 

Brain Training: Mechanism of Action

Dr. Montero-Odasso, team leader at the Canadian Consortium on Neurodegeneration in Aging and team co-leader at the Ontario Neurodegenerative Research Initiative, explained that cognitive training creates new synapses in the brain, thus stimulating neuroplasticity.

“When we try to activate networks mainly in the frontal lobe, the prefrontal cortex, a key mechanism underlying this process is enhancement of the synaptic plasticity at excitatory synapses, which connect neurons into networks; in other words, we generate new synapses, and that’s how we enhance brain health and cognitive abilities.”

The more neural connections, the greater the processing speed of the brain, he continued. “Cognitive training creates an anatomical change in the brain.”

Executive functions, which include attention, inhibition, planning, and multitasking, are regulated predominantly by the prefrontal cortex. Damage in this region of the brain is also implicated in dementia. Alterations in the connectivity of this area are associated with cognitive impairment, independent of other structural pathological aberrations (eg, gray matter atrophy). These patterns may precede structural pathological changes associated with cognitive impairment and dementia.

Neuroplasticity changes have been corroborated through neuroimaging, which has demonstrated that after cognitive training, there is more activation in the prefrontal cortex that correlates with new synapses, Dr. Montero-Odasso said.

Henry Mahncke, PhD, CEO of the brain training company Posit Science/BrainHQ, explained that early research was conducted on rodents and monkeys, with Dr. Merzenich as one of the leading pioneers in developing the concept of brain plasticity. Dr. Merzenich cofounded Posit Science and is currently its chief scientific officer.

Dr. Mahncke recounted that as a graduate student, he had worked with Dr. Merzenich researching brain plasticity. When Dr. Merzenich founded Posit Science, he asked Dr. Mahncke to join the company to help develop approaches to enhance brain plasticity — building the brain-training exercises and running the clinical trials.

“It’s now well understood that the brain can rewire itself at any age and in almost any condition,” Dr. Mahncke said. “In kids and in younger and older adults, whether with healthy or unhealthy brains, the fundamental way the brain works is by continually rewiring and rebuilding itself, based on what we ask it to do.”

If we understand the principles of brain plasticity, “we can build an adaptive brain and give it exercises to rewire in a healthy direction, improving cognitive abilities like memory, speed, and attention,” Dr. Mahncke said.
 

 

 

Unsubstantiated Claims and Controversy

Brain training is not without controversy, Dr. Hara pointed out. “Some manufacturers of brain games have been criticized and even fined for making unsubstantiated claims,” she said.

A 2016 review found that brain-training interventions do improve performance on specific trained tasks, but there is less evidence that they improve performance on closely related tasks and little evidence that training improves everyday cognitive performance. A 2017 review reached similar conclusions, calling evidence regarding prevention or delay of cognitive decline or dementia through brain games “insufficient,” although cognitive training could “improve cognition in the domain trained.”

“The general consensus is that for most brain-training programs, people may get better at specific tasks through practice, but these improvements don’t necessarily translate into improvement in other tasks that require other cognitive domains or prevention of dementia or age-related cognitive decline,” Dr. Hara said.

She noted that most brain-training programs “have not been rigorously tested in clinical trials” — although some, such as those featured in the ACTIVE trial, did show evidence of effectiveness.

Dr. Mahncke agreed. “Asking whether brain training works is like asking whether small molecules improve health,” he said, noting that some brain-training programs are nonsense and not evidence based. He believes that his company’s product, BrainHQ, and some others are “backed by robust evidence in their ability to stave off, slow, or even reverse cognitive changes.”

BrainHQ is a web-based brain game suite that can be used independently as an app or in group settings (classes and webinars) and is covered by some Medicare Advantage insurance plans. It encompasses “dozens of individual brain-training exercises, linked by a common thread. Each one is intensively designed to make the brain faster and more accurate,” said Dr. Mahncke.

He explained that human brains “get noisy as people get older, like a radio which is wearing out, so there’s static in the background. This makes the music hard to hear, and in the case of the human brain, it makes it difficult to pay attention.” The exercises are “designed to tamp down the ‘noise,’ speed up the brain, and make information processing more accurate.”

Dr. Mahncke called this a “bottom-up” approach, in contrast to many previous cognitive-training approaches that come from the brain injury rehabilitation field. They teach “top-down” skills and strategies designed to compensate for deficits in specific domains, such as reading, concentration, or fine motor skills.

By contrast, the approach of BrainHQ is “to improve the overall processing system of the brain with speed, attention, working memory, and executive function, which will in turn impact all skills and activities.”
 

Supporting Evidence

Dr. Mahncke cited several supporting studies. For example, the IMPACT study randomized 487 adults (aged ≥ 65 years) to receive either a brain plasticity–based computerized cognitive training program (BrainHQ) or a novelty- and intensity-matched general cognitive stimulation treatment program (intervention and control group, respectively) for an 8-week period.

Those who underwent brain training showed significantly greater improvement in the Repeatable Battery for the Assessment of Neuropsychological Status (RBANS) Auditory Memory/Attention compared with those in the control group (3.9 vs 1.8, respectively; P = .02). The intervention group also showed significant improvements on multiple secondary measures of attention and memory. The magnitude of the effect sizes suggests that the results are clinically significant, according to the authors.

The ACTIVE study tested the effects of different cognitive training programs on cognitive function and time to dementia. The researchers randomized 2802 healthy older adults (mean age, 74 years) to a control group with no cognitive training or one of three brain-training groups comprising:

1. In-person training on verbal memory skills

2. In-person training on reasoning and problem-solving

3. Computer-based speed-of-processing training on visual attention

Participants in the training groups completed 10 sessions, each lasting 60-75 minutes, over a 5- to 6-week period. A random subsample of each training group was selected to receive “booster” sessions, with four-session booster training delivered at 11 and 35 months. All study participants completed follow-up tests of cognition and function after 1, 2, 3, 5, and 10 years.

At the end of 10 years, those assigned to the speed-of-processing training, now part of BrainHQ, had a 29% lower risk for dementia than those in the control group who received no training. No reduction was found in the memory or reasoning training groups. Participants who completed the “booster” sessions had an even greater reduction: Each additional booster session was associated with a 10% lower risk for dementia.

Dr. Montero-Odasso was involved in the SYNERGIC study that randomized 175 participants with mild cognitive impairment (MCI; average age, 73 years) to one of five study arms:

1. Multidomain intervention with exercise, cognitive training, and vitamin D

2. Exercise, cognitive training, and placebo

3. Exercise, sham cognitive training, and vitamin D

4. Exercise, sham cognitive training, and placebo

5. Control group with balance-toning exercise, sham cognitive training, and placebo

“Sham” cognitive training consisted of alternating between two tasks (touristic search and video watching) performed on a tablet, with the same time exposure as the intervention training.

The researchers found that after 6 months of interventions, all active arms with aerobic-resistance exercise showed improvement in the ADAS-Cog-13, an established outcome to evaluate dementia treatments, when compared with the control group — regardless of the addition of cognitive training or vitamin D.

Compared with exercise alone (arms 3 and 4), those who did exercise plus cognitive training (arms 1 and 2) showed greater improvements in their ADAS-Cog-13 score, with a mean difference of −1.45 points (P = .02). The greatest improvement was seen in those who underwent the multidomain intervention in arm 1.

The authors noted that the mean 2.64-point improvement seen in the ADAS-Cog-13 for the multidomain intervention is actually larger than changes seen in previous pharmaceutical trials among individuals with MCI or mild dementia and “approaches” the three points considered clinically meaningful.

“We found that older adults with MCI who received aerobic-resistance exercise with sequential computerized cognitive training significantly improved cognition,” Dr. Montero-Odasso said. “The cognitive training we used was called Neuropeak, a multidomain lifestyle training delivered through a web-based platform developed by our co-leader Louis Bherer at Université de Montréal.”

He explained that the purpose “is to challenge your brain to the point where you need to make an effort to remember things, pay attention, and later to execute tasks. The evidence from clinical trials, including ours, shows this type of brain challenge is effective in slowing and even reversing cognitive decline.”

A follow-up study, SYNERGIC 2.0, is ongoing.
 

 

 

Puzzles, Board Games, and New Challenges

Formal brain-training programs aren’t the only way to improve brain plasticity, Dr. Hara said. Observational studies suggested an association between improved cognitive performance and/or lower dementia risk and engaging in number and word puzzles, such as crosswords, cards, or board games.

Some studies suggested that older adults who use technology might also protect their cognitive reserve. Dr. Hara cited a US longitudinal study of more than 18,000 older adults suggesting that regular Internet users had roughly half the risk for dementia compared with nonregular Internet users. Estimates of daily Internet use suggested a U-shaped relationship with dementia, with 0.1-2.0 hours daily (excluding time spent watching television or movies online) associated with the lowest risk. Similar associations between Internet use and a lower risk for cognitive decline have been reported in the United Kingdom and Europe.

“Engaging in mentally stimulating activities can increase ‘cognitive reserve’ — meaning, capacity of the brain to resist the effects of age-related changes or disease-related pathology, such that one can maintain cognitive function for longer,” Dr. Hara said. “Cognitively stimulating activities, regardless of the type, may help delay the onset of cognitive decline.”

She listed several examples of activities that are stimulating to the brain, including learning a new game or puzzle, a new language, or a new dance, and learning how to play a musical instrument.

Dr. Montero-Odasso emphasized that the “newness” is key to increasing and preserving cognitive reserve. “Just surfing the Internet, playing word or board games, or doing crossword puzzles won’t be enough if you’ve been doing these things all your life,” he said. “It won’t hurt, of course, but it won’t necessarily increase your cognitive abilities.

“For example, a person who regularly engages in public speaking may not improve cognition by taking a public-speaking course, but someone who has never spoken before an audience might show cognitive improvements as a result of learning a new skill,” he said. “Or someone who knows several languages already might gain from learning a brand-new language.”

He cited research supporting the benefits of dancing, which he called “an ideal activity because it’s physical, so it provides the exercise that’s been associated with improved cognition. But it also requires learning new steps and moves, which builds the synapses in the brain. And the socialization of dance classes adds another component that can improve cognition.”

Dr. Mahncke hopes that beyond engaging in day-to-day new activities, seniors will participate in computerized brain training. “There’s no reason that evidence-based training can’t be offered in senior and community centers, as yoga and swimming are,” he said. “It doesn’t have to be simply something people do on their own virtually.”

Zoom classes and Medicare reimbursements are “good steps in the right direction, but it’s time to expand this potentially life-transformative intervention so that it reaches the ever-expanding population of seniors in the United States and beyond.”

Dr. Hara reported having no disclosures. Dr. Montero-Odasso reported having no commercial or financial interest related to this topic. He serves as the president of the Canadian Geriatrics Society and is team leader in the Canadian Consortium on Neurodegeneration in Aging. Dr. Mahncke is CEO of the brain training company Posit Science/BrainHQ.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

The concept that cognitive health can be preserved or improved is often expressed as “use it or lose it.” Numerous modifiable risk factors are associated with “losing” cognitive abilities with age, and a cognitively active lifestyle may have a protective effect.

But what is a “cognitively active lifestyle” — do crosswords and Sudoku count?

One popular approach is “brain training.” While not a scientific term with an established definition, it “typically refers to tasks or drills that are designed to strengthen specific aspects of one’s cognitive function,” explained Yuko Hara, PhD, director of Aging and Alzheimer’s Prevention at the Alzheimer’s Drug Discovery Foundation.

Manuel Montero-Odasso, MD, PhD, director of the Gait and Brain Lab, Parkwood Institute, London, Ontario, Canada, elaborated: “Cognitive training involves performing a definitive task or set of tasks where you increase attentional demands to improve focus and concentration and memory. You try to execute the new things that you’ve learned and to remember them.”

In a commentary published by this news organization in 2022, neuroscientist Michael Merzenich, PhD, professor emeritus at University of California San Francisco, said that growing a person’s cognitive reserve and actively managing brain health can play an important role in preventing or delaying Alzheimer’s disease. Important components of this include brain training and physical exercise.
 

Brain Training: Mechanism of Action

Dr. Montero-Odasso, team leader at the Canadian Consortium on Neurodegeneration in Aging and team co-leader at the Ontario Neurodegenerative Research Initiative, explained that cognitive training creates new synapses in the brain, thus stimulating neuroplasticity.

“When we try to activate networks mainly in the frontal lobe, the prefrontal cortex, a key mechanism underlying this process is enhancement of the synaptic plasticity at excitatory synapses, which connect neurons into networks; in other words, we generate new synapses, and that’s how we enhance brain health and cognitive abilities.”

The more neural connections, the greater the processing speed of the brain, he continued. “Cognitive training creates an anatomical change in the brain.”

Executive functions, which include attention, inhibition, planning, and multitasking, are regulated predominantly by the prefrontal cortex. Damage in this region of the brain is also implicated in dementia. Alterations in the connectivity of this area are associated with cognitive impairment, independent of other structural pathological aberrations (eg, gray matter atrophy). These patterns may precede structural pathological changes associated with cognitive impairment and dementia.

Neuroplasticity changes have been corroborated through neuroimaging, which has demonstrated that after cognitive training, there is more activation in the prefrontal cortex that correlates with new synapses, Dr. Montero-Odasso said.

Henry Mahncke, PhD, CEO of the brain training company Posit Science/BrainHQ, explained that early research was conducted on rodents and monkeys, with Dr. Merzenich as one of the leading pioneers in developing the concept of brain plasticity. Dr. Merzenich cofounded Posit Science and is currently its chief scientific officer.

Dr. Mahncke recounted that as a graduate student, he had worked with Dr. Merzenich researching brain plasticity. When Dr. Merzenich founded Posit Science, he asked Dr. Mahncke to join the company to help develop approaches to enhance brain plasticity — building the brain-training exercises and running the clinical trials.

“It’s now well understood that the brain can rewire itself at any age and in almost any condition,” Dr. Mahncke said. “In kids and in younger and older adults, whether with healthy or unhealthy brains, the fundamental way the brain works is by continually rewiring and rebuilding itself, based on what we ask it to do.”

If we understand the principles of brain plasticity, “we can build an adaptive brain and give it exercises to rewire in a healthy direction, improving cognitive abilities like memory, speed, and attention,” Dr. Mahncke said.
 

 

 

Unsubstantiated Claims and Controversy

Brain training is not without controversy, Dr. Hara pointed out. “Some manufacturers of brain games have been criticized and even fined for making unsubstantiated claims,” she said.

2016 review found that brain-training interventions do improve performance on specific trained tasks, but there is less evidence that they improve performance on closely related tasks and little evidence that training improves everyday cognitive performance. A 2017 review  reached similar conclusions, calling evidence regarding prevention or delay of cognitive decline or dementia through brain games “insufficient,” although cognitive training could “improve cognition in the domain trained.”

“The general consensus is that for most brain-training programs, people may get better at specific tasks through practice, but these improvements don’t necessarily translate into improvement in other tasks that require other cognitive domains or prevention of dementia or age-related cognitive decline,” Dr. Hara said.

She noted that most brain-training programs “have not been rigorously tested in clinical trials” — although some, such as those featured in the ACTIVE trial, did show evidence of effectiveness.

Dr. Mahncke agreed. “Asking whether brain training works is like asking whether small molecules improve health,” he said noting that some brain-training programs are nonsense and not evidence based. He believes that his company’s product, BrainHQ, and some others are “backed by robust evidence in their ability to stave off, slow, or even reverse cognitive changes.”

BrainHQ is a web-based brain game suite that can be used independently as an app or in group settings (classes and webinars) and is covered by some Medicare Advantage insurance plans. It encompasses “dozens of individual brain-training exercises, linked by a common thread. Each one is intensively designed to make the brain faster and more accurate,” said Dr. Mahncke.

He explained that human brains “get noisy as people get older, like a radio which is wearing out, so there’s static in the background. This makes the music hard to hear, and in the case of the human brain, it makes it difficult to pay attention.” The exercises are “designed to tamp down the ‘noise,’ speed up the brain, and make information processing more accurate.”

Dr. Mahncke called this a “bottom-up” approach, in contrast to many previous cognitive-training approaches that come from the brain injury rehabilitation field. They teach “top-down” skills and strategies designed to compensate for deficits in specific domains, such as reading, concentration, or fine motor skills.

By contrast, the approach of BrainHQ is “to improve the overall processing system of the brain with speed, attention, working memory, and executive function, which will in turn impact all skills and activities.”
 

Supporting Evidence

Dr. Mahncke cited several supporting studies. For example, the IMPACT study randomized 487 adults (aged ≥ 65 years) to receive either a brain plasticity–based computerized cognitive training program (BrainHQ) or a novelty- and intensity-matched general cognitive stimulation treatment program (intervention and control group, respectively) for an 8-week period.

Those who underwent brain training showed significantly greater improvement in the repeatable Battery for the Assessment of Neuropsychological Status (RBANS Auditory Memory/Attention) compared with those in the control group (3.9 vs 1.8, respectively; P =.02). The intervention group also showed significant improvements on multiple secondary measures of attention and memory. The magnitude of the effect sizes suggests that the results are clinically significant, according to the authors.

The ACTIVE study tested the effects of different cognitive training programs on cognitive function and time to dementia. The researchers randomized 2802 healthy older adults (mean age, 74 years) to a control group with no cognitive training or one of three brain-training groups comprising:

1. In-person training on verbal memory skills

2. In-person training on reasoning and problem-solving

3. Computer-based speed-of-processing training on visual attention

Participants in the training groups completed 10 sessions, each lasting 60-75 minutes, over a 5- to 6-week period. A random subsample of each training group was selected to receive “booster” sessions, with four-session booster training delivered at 11 and 35 months. All study participants completed follow-up tests of cognition and function after 1, 2, 3, 5, and 10 years.

At the end of 10 years, those assigned to the speed-of-processing training, now part of BrainHQ, had a 29% lower risk for dementia than those in the control group who received no training. No reduction was found in the memory or reasoning training groups. Participants who completed the “booster” sessions had an even greater reduction: Each additional booster session was associated with a 10% lower risk for dementia.

Dr. Montero-Odasso was involved in the SYNERGIC study that randomized 175 participants with mild cognitive impairment (MCI; average age, 73 years) to one of five study arms:

1. Multidomain intervention with exercise, cognitive training, and vitamin D

2. Exercise, cognitive training, and placebo

3. Exercise, sham cognitive training, and vitamin D

4. Exercise, sham cognitive training, and placebo

5. Control group with balance-toning exercise, sham cognitive training, and placebo

“Sham” cognitive training consisted of alternating between two tasks (touristic search and video watching) performed on a tablet, with the same time exposure as the intervention training.

The researchers found that after 6 months of interventions, all active arms with aerobic-resistance exercise showed improvement on the ADAS-Cog-13, an established outcome measure used to evaluate dementia treatments, when compared with the control group, regardless of the addition of cognitive training or vitamin D.

Compared with exercise alone (arms 3 and 4), those who did exercise plus cognitive training (arms 1 and 2) showed greater improvements in their ADAS-Cog-13 score, with a mean difference of −1.45 points (P = .02). The greatest improvement was seen in those who underwent the multidomain intervention in arm 1.

The authors noted that the mean 2.64-point improvement seen in the ADAS-Cog-13 for the multidomain intervention is actually larger than changes seen in previous pharmaceutical trials among individuals with MCI or mild dementia and “approaches” the three points considered clinically meaningful.

“We found that older adults with MCI who received aerobic-resistance exercise with sequential computerized cognitive training significantly improved cognition,” Dr. Montero-Odasso said. “The cognitive training we used was called Neuropeak, a multidomain lifestyle training delivered through a web-based platform developed by our co-leader Louis Bherer at Université de Montréal.”

He explained that the purpose “is to challenge your brain to the point where you need to make an effort to remember things, pay attention, and later to execute tasks. The evidence from clinical trials, including ours, shows this type of brain challenge is effective in slowing and even reversing cognitive decline.”

A follow-up study, SYNERGIC 2.0, is ongoing.

Puzzles, Board Games, and New Challenges

Formal brain-training programs aren’t the only way to improve brain plasticity, Dr. Hara said. Observational studies have suggested an association between improved cognitive performance and/or lower dementia risk and engaging in number and word puzzles, such as crosswords, card games, or board games.

Some studies suggest that older adults who use technology might also protect their cognitive reserve. Dr. Hara cited a US longitudinal study of more than 18,000 older adults suggesting that regular Internet users had roughly half the risk for dementia compared with nonregular Internet users. Estimates of daily Internet use suggested a U-shaped relationship with dementia, with 0.1-2.0 hours daily (excluding time spent watching television or movies online) associated with the lowest risk. Similar associations between Internet use and a lower risk for cognitive decline have been reported in the United Kingdom and Europe.

“Engaging in mentally stimulating activities can increase ‘cognitive reserve’ — meaning, capacity of the brain to resist the effects of age-related changes or disease-related pathology, such that one can maintain cognitive function for longer,” Dr. Hara said. “Cognitively stimulating activities, regardless of the type, may help delay the onset of cognitive decline.”

She listed several examples of activities that are stimulating to the brain, including learning a new game or puzzle, a new language, or a new dance, and learning how to play a musical instrument.

Dr. Montero-Odasso emphasized that the “newness” is key to increasing and preserving cognitive reserve. “Just surfing the Internet, playing word or board games, or doing crossword puzzles won’t be enough if you’ve been doing these things all your life,” he said. “It won’t hurt, of course, but it won’t necessarily increase your cognitive abilities.

“For example, a person who regularly engages in public speaking may not improve cognition by taking a public-speaking course, but someone who has never spoken before an audience might show cognitive improvements as a result of learning a new skill,” he said. “Or someone who knows several languages already might gain from learning a brand-new language.”

He cited research supporting the benefits of dancing, which he called “an ideal activity because it’s physical, so it provides the exercise that’s been associated with improved cognition. But it also requires learning new steps and moves, which builds the synapses in the brain. And the socialization of dance classes adds another component that can improve cognition.”

Dr. Mahncke hopes that beyond engaging in day-to-day new activities, seniors will participate in computerized brain training. “There’s no reason that evidence-based training can’t be offered in senior and community centers, as yoga and swimming are,” he said. “It doesn’t have to be simply something people do on their own virtually.”

Zoom classes and Medicare reimbursements are “good steps in the right direction, but it’s time to expand this potentially life-transformative intervention so that it reaches the ever-expanding population of seniors in the United States and beyond.”

Dr. Hara reported having no disclosures. Dr. Montero-Odasso reported having no commercial or financial interest related to this topic. He serves as the president of the Canadian Geriatrics Society and is a team leader in the Canadian Consortium on Neurodegeneration in Aging. Dr. Mahncke is CEO of the brain training company Posit Science/BrainHQ.

A version of this article appeared on Medscape.com.



Do No Harm: What Smoldering Myeloma Teaches Us


Smoldering multiple myeloma (SMM), a potential precursor to multiple myeloma (MM), has become a controversial topic. Some people diagnosed with SMM will live their whole lives without ever developing MM, while others will develop it quickly.

My approach to treating SMM takes into account what its history can teach us about 1) how advancements in imaging and diagnostic reclassifications can revise the entire natural history of a disease, and 2) how evidence generated by even the best of studies may have an expiration date.

Manni Mohyuddin, MD

Much of what we know about SMM today dates to a pivotal study by Robert A. Kyle, MD, and colleagues, published in 2007. That inspirational team of investigators followed people diagnosed with SMM from 1970 to 1995 and established the first natural history of the condition. Their monumental effort and the data and conclusions it generated (eg, a 10% annual risk of SMM becoming MM for the first 5 years) are still cited today in references, papers, and slide sets.
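
To put that figure in perspective: under the simplifying assumption of a constant, independent 10% annual risk (an illustration only, not a number reported by the Kyle study), the implied cumulative probability of progression over the first 5 years is

\[
1 - (1 - 0.10)^{5} = 1 - 0.9^{5} \approx 0.41 \quad (\text{about } 41\%).
\]

The cohort's actual cumulative figures differ somewhat because the annual hazard is not truly constant, but the arithmetic conveys roughly what "10% per year for the first 5 years" implies.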

Despite the seminal importance of this work, from today’s perspective the 2007 study might just as well have been describing a different disease. Back then, people were diagnosed with SMM if their blood work detected a monoclonal protein and a follow-up bone marrow biopsy found at least 10% plasma cells (or a monoclonal protein exceeding 3 g/dL). If there were no signs of end-organ damage (ie, no anemia or kidney problems) and an x-ray showed no fractures or lesions in the bones, the diagnosis was SMM.

What’s different in 2024? First and foremost: advanced, highly sensitive imaging techniques. MRIs can pick up small lytic lesions (and even the precursor to lytic lesions) that would not appear on an x-ray. In fact, relying solely on x-rays risks missing half of the lytic lesions.

Therefore, using the same criteria, many people who in the past were diagnosed with SMM would today be diagnosed with MM. Furthermore, a 2014 diagnostic change reclassified people in the highest-risk category of SMM as having active MM.

Due to these scientific advances and classification changes, I believe that the natural history of SMM is unknown. Risk stratification models for SMM derived from data sets of people who had not undergone rigorous advanced imaging likely are skewed by data from people who had MM. In addition, current risk stratification models have very poor concordance with each other. I routinely see people whose 2-year risk according to different models varies by more than 30%-40%.

All this information tells us that SMM today is more indolent than the SMM of the past. Paradoxically, however, our therapies keep getting more and more aggressive, exposing this vulnerable group of people to intense treatment regimens that they may not require. Therapies tested on people diagnosed with SMM include an aggressive three-drug regimen, autologous stem cell transplant, and 2 years of additional therapy, as well as, more recently, CAR T-cell therapy, which so far carries at least a 4%-5% treatment-related mortality risk in people with myeloma and a strong signal for secondary cancer risk. Other trials are testing bispecific therapies such as talquetamab, a drug which in my experience causes horrendous skin toxicity, profound weight loss, and nail loss.

Doctors routinely keep showing slides from Kyle’s pivotal work to describe the natural history of SMM and to justify the need for treatment, and trials continue to use outdated progression prediction models. In my opinion, as people with MM keep living longer and treatments for MM keep getting better, the threshold for intervening with asymptomatic, healthy people with SMM should be getting higher, not lower.

I strongly believe that the current landscape of SMM treatment exemplifies good intentions leading to bad outcomes. A routine blood test in a completely healthy person that finds elevated total protein in the blood could culminate in well-intentioned but aggressive therapies that can lead to many serious side effects. (I repeat: Secondary cancers and deaths from infections have all occurred in SMM trials.)

With no control arm, we simply don’t know how well these people might have fared without any therapy. For all we know, treatment may have shortened their lives due to complications up to and including death — all because of a blood test often conducted for reasons that have no evidentiary basis.

For example, plasma cell diseases are not linked to low bone density or auto-immune diseases, yet these labs are sent routinely as part of a workup for those conditions, leading to increasing anxiety and costs.

So, what is my approach? When treating people with SMM, I hold nuanced discussions of these data to help them prioritize and reach informed decisions. After an honest conversation about the limitations of SMM risk models, the older data, and the limitations of prospective data on pharmacological treatment, almost no one signs up for treatment.

I want these people to stay safe, and I’m proud to be a part of a trial (SPOTLIGHT, NCT06212323) that aims to show prospectively that these people can be watched off treatment with monitoring via advanced imaging modalities.

In conclusion: SMM teaches us how, even in the absence of pharmacological interventions, the natural history of a disease can change over time, simply via better imaging techniques and changes in diagnostic classifications. Unfortunately, SMM also illustrates how good intentions can lead to harm.
 

Dr. Mohyuddin is assistant professor in the multiple myeloma program at the Huntsman Cancer Institute at the University of Utah in Salt Lake City.


Could Aspirin Help Treat Breast Cancer?


Adjuvant therapy with aspirin offers no protection against recurrence and no survival benefit in patients with high-risk nonmetastatic breast cancer, the results of a new phase 3 randomized controlled trial suggest.

These data are more robust than the efficacy signals from previous studies, meaning healthcare providers should not recommend aspirin as adjuvant therapy for breast cancer, reported lead author Wendy Y. Chen, MD, of Dana Farber Cancer Institute, Boston, and colleagues.

What Data Support Aspirin for Treating Breast Cancer?

“Multiple observational studies have reported a decreased risk of death among survivors of breast cancer who were regular aspirin users,” the investigators wrote in JAMA. “Even more compelling were data from randomized trials of aspirin for cardiovascular disease.”

This possible benefit also has mechanistic support, as aspirin’s anti-inflammatory and antiplatelet properties could theoretically control tumor growth, they added. Furthermore, aspirin affects several cancer pathways currently targeted by agents approved by the US Food and Drug Administration (FDA).

“Collectively, evidence from laboratory and epidemiologic studies and randomized trials strongly suggested a role for aspirin to improve breast cancer outcomes, leading to [this new study, Alliance for Clinical Trials in Oncology (Alliance) A011502,] which, to our knowledge, is the first randomized, placebo-controlled trial of aspirin to report results among survivors of breast cancer,” Dr. Chen and colleagues wrote.
 

What Were The Key Findings From The A011502 Trial?

The A011502 trial enrolled 3,020 patients aged 18-70 years with ERBB2-negative breast cancer who had received standard therapy via routine clinical care. Eligibility required that chemotherapy and local therapy were complete, but ongoing endocrine therapy was allowed.

Participants were randomized in a 1:1 ratio to receive aspirin 300 mg per day or matching placebo for 5 years. The primary outcome was invasive disease-free survival, and the key secondary outcome was overall survival.

After a median follow-up of almost 3 years, at the first interim analysis, the study was suspended early because of statistical futility. By that time point, 253 invasive disease-free survival events had occurred: 141 in the aspirin group and 112 in the placebo group, yielding a hazard ratio of 1.27 (95% CI, 0.99-1.63) that was not statistically significant (P = .06). No statistically significant difference in overall survival was observed (hazard ratio, 1.19; 95% CI, 0.82-1.72). Safety profiles were similar across groups.
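
As a schematic reminder of how to read these figures (this describes the general interpretation of a hazard ratio, not the trial's specific statistical model), the hazard ratio compares the instantaneous event rates in the two arms:

\[
\widehat{\mathrm{HR}} = \frac{\hat{\lambda}_{\text{aspirin}}(t)}{\hat{\lambda}_{\text{placebo}}(t)} = 1.27, \qquad 95\%\ \mathrm{CI},\; 0.99\text{-}1.63.
\]

Because the confidence interval includes 1.0 (equal hazards in both arms), the numerical excess of events in the aspirin group is compatible with chance, which is why the result is read as showing no benefit rather than as demonstrating harm.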

How Will This Study Change Practice?

In an accompanying editorial, Jeanne S. Mandelblatt, MD, of Georgetown Lombardi Institute for Cancer and Aging Research, Washington, and colleagues praised the trial for its comprehensive approach but predicted that the negative result could create friction for healthcare providers.

“[C]linicians may find it challenging to communicate with their patients about the negative result in the Alliance trial, because prior lay press articles, observational studies, and meta-analyses of cardiovascular trials suggested that aspirin may decrease breast cancer recurrence,” they wrote.

Dr. Mandelblatt and colleagues went on to explore broader implications beyond breast cancer, including considerations for communication of negative results in other medical specialties, discussions between clinicians and patients regarding aspirin use for non–breast cancer purposes, and questions about the timing of aspirin use and the role of age and biological aging.

 

 

How Might the Findings From the A011502 Trial Impact Future Research?

Finally, and “most critically,” the editorialists raised concerns about health equity, noting the limited diversity in trial participants and the potential exclusion of subgroups that might benefit from aspirin use, particularly those more likely to experience accelerated biological aging and disparities in cancer risk and outcomes due to systemic racism or adverse social determinants of health.

They concluded by emphasizing the need to consider the intersectionality of aging, cancer, and disparities in designing future trials to advance health equity.

This study was funded by the Department of Defense Breast Cancer Research Program and the National Cancer Institute of the National Institutes of Health. The research was also supported in part by Bayer, which provided the study drug. The investigators disclosed relationships with Novartis, Seagen, Orum Clinical, and others. The editorialists disclosed relationships with Cantex Pharmaceuticals and Pfizer.


Late-Stage Incidence Rates Support CRC Screening From Age 45


In the setting of conflicting national screening guidelines, the incidence of distant- and regional-stage colorectal adenocarcinoma (CRC) has been increasing in individuals aged 46-49 years, a cross-sectional study of stage-stratified CRC found.

It is well known that CRC is becoming more prevalent overall in the under-50 population, but stage-stratified analyses have not been done.

Staging analysis in this age group is important, however, as an increasing burden of advanced-stage disease would provide further evidence for earlier screening initiation, wrote Eric M. Montminy, MD, a gastroenterologist at John H. Stroger Hospital of Cook County, Chicago, Illinois, and colleagues in JAMA Network Open.

The United States Preventive Services Task Force (USPSTF) has recommended that average-risk screening begin at 45 years of age, as have the American Gastroenterological Association and other GI societies, although the American College of Physicians last year published clinical guidance recommending 50 years as the age to start CRC screening for patients at average risk.

“Patients aged 46-49 may become confused about which guideline to follow, similar to the confusion that occurred with prior breast cancer screening changes,” Dr. Montminy said in an interview. “We wanted to demonstrate incidence rates with stage stratification to help clarify the incidence trends in this age group. Stage stratification is key because it provides insight into the relationship between time and cancer incidence, ie, is screening finding early cancer or not?”

A 2020 study in JAMA Network Open demonstrated a 46.1% increase in CRC incidence rates (IRs) in persons aged 49-50 years. This steep increase is consistent with the presence of a large preexisting and undetected case burden.

“Our results demonstrate that adults aged 46-49 years, who are between now-conflicting guidelines on whether to start screening at age 45 or 50 years, have an increasing burden of more advanced-stage CRC and thus may be at an increased risk if screening is not initiated at age 45 years,” Dr. Montminy’s group wrote.

Using incidence data per 100,000 population from the National Cancer Institute’s Surveillance, Epidemiology, and End Results registry, the investigators observed the following IRs for early-onset CRC in the age group of 46-49 years:

  • Distant adenocarcinoma IRs increased faster than other stages: annual percentage change (APC), 2.2 (95% CI, 1.8-2.6).
  • Regional IRs also significantly increased: APC, 1.3 (95% CI, 0.8-1.7).
  • Absolute regional IRs of CRC in the 46-49 age bracket are similar to pancreatic cancer IRs for all ages and all stages combined (13.2 of 100,000) over similar years. When distant CRC IRs are added to regional IRs, the combined CRC IR is roughly double the all-stage pancreatic cancer IR.
  • The only decrease was seen in localized IRs: APC, -0.6 (95% CI, -1.0 to -0.2).
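
For readers who want to see what an APC implies over time, the short Python sketch below compounds a constant annual percentage change over a follow-up window. It is only an illustration: the 2.2% APC is taken from the figures above, while the baseline rate of 6.0 per 100,000 and the 20-year horizon are hypothetical values chosen for the example, not data from the study.

# Minimal illustrative sketch (not from the study's methods): project an
# incidence rate per 100,000 forward assuming a constant annual percentage
# change (APC). The baseline rate and time horizon are hypothetical.

def project_rate(baseline_rate: float, apc_percent: float, years: int) -> float:
    """Return the rate per 100,000 after compounding a constant APC for a number of years."""
    return baseline_rate * (1 + apc_percent / 100) ** years

# Example: a hypothetical distant-stage rate of 6.0 per 100,000 growing at
# the reported APC of 2.2% per year over a 20-year window.
projected = project_rate(6.0, 2.2, 20)
print(f"Projected rate after 20 years: {projected:.1f} per 100,000")
# Prints roughly 9.3 per 100,000, ie, about a 55% cumulative increase.

The same arithmetic shows how even a modest APC, sustained over two decades, adds up to a substantial cumulative increase.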

“My best advice for clinicians is to provide the facts from the data to patients so they can make an informed health decision,” Dr. Montminy said. “This includes taking an appropriate personal and family history and having the patient factor this aspect into their decision on when and how they want to perform colon cancer screening.”

His institution adheres to the USPSTF recommendation of initiation of CRC screening at age 45 years.

Findings From 2000 to 2020

During the 2000-2020 period, 26,887 CRCs were diagnosed in adults aged 46-49 years (54.5% in men).

As of 2020, the localized adenocarcinoma IR decreased to 7.7 of 100,000, but regional adenocarcinoma IR increased to 13.4 of 100,000 and distant adenocarcinoma IR increased to 9.0 of 100,000.

Regional adenocarcinoma IR remained the highest of all stages throughout 2000-2020. From 2014 to 2020, distant IRs became similar to localized IRs, except in 2017, when distant IRs were significantly higher than localized IRs.

Why the CRC Uptick?

“It remains an enigma at this time as to why we’re seeing this shift,” Dr. Montminy said, noting that etiologies from the colonic microbiome to cellphones have been postulated. “To date, no theory has substantially provided causality. But whatever the source is, it is affecting Western countries in unison with data demonstrating a birth cohort effect as well,” he added. “We additionally know, based on the current epidemiologic data, that current screening practices are failing, and a unified discussion must occur in order to prevent young patients from developing advanced colon cancer.”

Offering his perspective on the findings, Joshua Meyer, MD, vice chair of translational research in the Department of Radiation Oncology at Fox Chase Cancer Center in Philadelphia, said the findings reinforce the practice of offering screening to average-risk individuals starting at age 45 years, the threshold at his institution. “There are previously published data demonstrating an increase in advanced stage at the time of screening initiation, and these data support that,” said Dr. Meyer, who was not involved in the present analysis.

More research needs to be done, he continued, not just on optimal age but also on the effect of multiple other factors impacting risk. “These may include family history and genetic risk as well as the role of blood- and stool-based screening assays in an integrated strategy to screen for colorectal cancer.”

There are multiple screening tests, and while colonoscopy, the gold standard, is very safe, it is not completely without risks, Dr. Meyer added. “And the question of the appropriate allocation of limited societal resources continues to be discussed on a broader level and largely explains the difference between the two guidelines.”

This study received no specific funding. Co-author Jordan J. Karlitz, MD, reported personal fees from GRAIL (senior medical director) and an equity position in Gastro Girl/GI On Demand outside of the submitted work. Dr. Meyer disclosed no conflicts of interest relevant to his comments.



FROM JAMA NETWORK OPEN
