Cancer survivors face more age-related deficits

Long-term survivors of cancer have more age-related functional deficits than do those who have not experienced cancer, and these deficits – as well as their cancer history – are both associated with a higher risk of all-cause mortality, a study has found.

A paper published in Cancer reported the outcomes of a population-based cohort study involving 1,723 female cancer survivors and 11,145 cancer-free women enrolled in the Iowa Women’s Health Study, who were followed for 10 years.

The analysis revealed that women with a history of cancer had significantly more deficits on a geriatric assessment compared with their age-matched controls without a history of cancer. While 66% of women without a cancer history had one or more deficits, 70% of those with a history had at least one age-related deficit, and they were significantly more likely to have two or more deficits.

Cancer survivors were significantly more likely than those without a history of cancer to have two or more physical function limitations (42.4% vs. 36.9%, P < .0001), two or more comorbidities (41.3% vs. 38.2%, P = .02), and poor general health (23.3% vs. 17.4%, P < .0001). They were also significantly less likely to be underweight.

The study found that both cancer history and age-related functional deficits were predictors of mortality, even after adjustment for confounders such as chronological age, smoking, and physical activity levels. The highest mortality risk was seen in cancer survivors with two or more age-related health deficits, who had a twofold greater mortality risk compared with the noncancer controls with fewer than two health deficits.

Even individuals with a history of cancer but without any health deficits still had a 1.3- to 1.4-fold increased risk of mortality compared with individuals without a history of cancer and without health deficits.

“These results confirm the increased risk of mortality associated with GA [geriatric assessment] domain deficits and extend the research by demonstrating that a cancer history is associated with an older functional age compared with age-matched cancer-free individuals,” wrote Cindy K. Blair, PhD, of the department of internal medicine at the University of New Mexico, Albuquerque, and coauthors.

They noted that the study included very long-term cancer survivors who had survived for an average of 11 years before they underwent the geriatric assessment and were then followed for 10 years after that point.

“Further research is needed to identify older cancer survivors who are at risk of accelerated aging,” the authors wrote. “Interventions that target physical function, comorbidity, nutritional status, and general health are greatly needed to improve or maintain the quality of survivorship in older cancer survivors.”

The National Cancer Institute, the University of Minnesota Cancer Center, and the University of New Mexico Comprehensive Cancer Center supported the study. Two authors declared grants from the National Institutes of Health related to the study.

SOURCE: Blair CK et al. Cancer. 2019 Aug 16. doi: 10.1002/cncr.32449.

Cardiovascular cost of smoking may last up to 25 years

Quitting smoking significantly reduces the risk of cardiovascular disease, but past smokers are still at elevated cardiovascular risk, compared with nonsmokers, for up to 25 years after smoking cessation, research in JAMA suggests.

A retrospective analysis of data from 8,770 individuals in the Framingham Heart Study compared the incidence of myocardial infarction, stroke, heart failure, or cardiovascular death in ever-smokers with that of never smokers.

Only 40% of the total cohort had never smoked. Of the 4,115 current smokers at baseline, 38.6% quit during the course of the study and did not relapse, but 51.4% continued to smoke until they developed cardiovascular disease or dropped out of the study.

Current smokers had a significant 4.68-fold higher incidence of cardiovascular disease, compared with those who had never smoked, but those who stopped smoking showed a 39% decline in their risk of cardiovascular disease within 5 years of cessation.

However, individuals who were formerly heavy smokers – defined as at least 20 pack-years of smoking – retained a risk of cardiovascular disease 25% higher than that of never smokers until 10-15 years after quitting smoking. At 16 years, the 95% confidence interval for cardiovascular disease risk among former smokers versus that of never smokers finally and consistently included the null value of 1.

The study pooled two cohorts: the original cohort, who attended their fourth examination during 1954-1958, and an offspring cohort, who attended their first examination during 1971-1975. The authors saw a difference between the two cohorts in the time course of cardiovascular disease risk in heavy smokers.

In the original cohort, former heavy smoking ceased to be significantly associated with increased cardiovascular disease risk within 5-10 years of cessation, but in the offspring cohort, it took 25 years after cessation for the incidence to decline to the same level of risk seen in never smokers.

“The upper estimate of this time course is a decade longer than that of the Nurses’ Health Study results for coronary heart disease and cardiovascular death and more than 20 years longer than in some prior reports for coronary heart disease and stroke,” wrote Meredith S. Duncan from the division of cardiovascular medicine at the Vanderbilt University Medical Center, Nashville, Tenn., and coauthors. “Although the exact amount of time after quitting at which former smokers’ CVD risk ceases to differ significantly from that of never smokers is unknown (and may further depend on cumulative exposure), these findings support a longer time course of risk reduction than was previously thought, yielding implications for CVD risk stratification of former smokers.”

However, they did note that the study could not account for environmental tobacco smoke exposure and that the participants were mostly of white European ancestry, which limited the generalizability of the findings to other populations.

The Framingham Heart Study was supported by the National Heart, Lung, and Blood Institute. One author declared a consultancy with a pharmaceutical company on a proposed clinical trial. No other conflicts of interest were declared.

SOURCE: Duncan MS et al. JAMA. 2019. doi: 10.1001/jama.2019.10298.

Vitals

Key clinical point: The increased risk of cardiovascular disease (CVD) in smokers persists long after smoking cessation.

Major finding: In the offspring cohort, heavy smokers showed elevated incidence of CVD for up to 25 years after quitting smoking.

Study details: A retrospective analysis of data from 8,770 individuals in the Framingham Heart Study.

Disclosures: The Framingham Heart Study was supported by the National Heart, Lung, and Blood Institute. One author declared a consultancy with a pharmaceutical company on a proposed clinical trial. No other conflicts of interest were declared.

Source: Duncan MS et al. JAMA. 2019. doi: 10.1001/jama.2019.10298.


Self-reported falls can predict osteoporotic fracture risk

A single, simple question about a patient’s experience of falls in the previous year can help predict their risk of fractures, a study suggests.

In Osteoporosis International, researchers reported the outcomes of a cohort study using Manitoba clinical registry data from 24,943 men and women aged 40 years and older who had undergone a fracture-probability assessment and who had data on self-reported falls for the previous year and on fracture outcomes.

William D. Leslie, MD, of the University of Manitoba in Winnipeg, and coauthors wrote that a frequent criticism of the FRAX fracture risk assessment tool has been that it does not include falls or fall risk in predicting fractures.

“Recent evidence derived from carefully conducted research cohort studies in men found that falls increase fracture risk independent of FRAX probability,” they wrote. “However, data are inconsistent with a paucity of evidence demonstrating usefulness of self-reported fall data as collected in routine clinical practice.”

Over a mean observation time of 2.7 years, 3.5% of the study population sustained at least one major osteoporotic fracture, 0.8% experienced a hip fracture, and 4.9% experienced any incident fracture.

The analysis showed an increased risk of fracture with an increasing number of self-reported falls in the previous year. Compared with individuals who reported no falls, the risk of major osteoporotic fracture was 49% higher among those who reported one fall, 74% higher among those who reported two falls, and 2.6-fold higher among those who reported three or more falls.

A similar pattern was seen for any incident fracture and hip fracture, with a 3.4-fold higher risk of hip fracture seen in those who reported three or more falls. The study also showed an increase in mortality risk with increasing number of falls.

“We documented that a simple question regarding self-reported falls in the previous year could be easily collected during routine clinical practice and that this information was strongly predictive of short-term fracture risk independent of multiple clinical risk factors including fracture probability using the FRAX tool with BMD [bone mineral density],” the authors wrote.

The analysis found no interaction between the number of falls and either age or sex.

John A. Kanis, MD, reported grants from Amgen, Lilly, and Radius Health. Three other coauthors had nothing to declare in the context of this article but reported research grants, speaking honoraria, and consultancies from a variety of pharmaceutical companies and organizations. The remaining five coauthors declared no conflicts of interest.

SOURCE: Leslie WD et al. Osteoporos Int. 2019 Aug 2. doi: 10.1007/s00198-019-05106-3.

Do focus on falls when assessing fracture risk

Fragility fractures remain a major contributor to morbidity and even mortality of aging populations. Concerted efforts of clinicians, epidemiologists, and researchers have yielded an assortment of diagnostic strategies and prognostic algorithms in efforts to identify individuals at fracture risk. A variety of demographic (age, sex), biological (family history, specific disorders and medications), anatomical (bone mineral density, body mass index), and behavioral (smoking, alcohol consumption) parameters are recognized as predictors of fracture risk, and often are incorporated in predictive algorithms for fracture predisposition. FRAX (Fracture Risk Assessment) is a widely used screening tool that is valid in offering fracture risk quantification across populations (Arch Osteoporos. 2016 Dec;11[1]:25; World Health Organization Assessment of Osteoporosis at the Primary Health Care Level).

Aging and its accompanying neurocognitive deterioration and visual impairment, as well as iatrogenic factors, are recognized contributors to the predisposition to falls in aging populations. A propensity for falls has long been regarded as a fracture risk (Curr Osteoporos Rep. 2008;6[4]:149-54). However, the evidence supporting this logical assumption has been mixed, resulting in the exclusion of a tendency to fall from commonly utilized fracture risk prediction models and tools. A predisposition to and frequency of falls is considered neither a risk modulator nor a mediator in commonly utilized FRAX-based fracture risk assessments, and it is believed that FRAX may underestimate fracture probability in those predisposed to frequent falls (J Clin Densitom. 2011 Jul-Sep;14[3]:194–204).

The landscape of fracture risk assessment and quantification in the aforementioned backdrop has been refreshingly enhanced by a recent contribution by Leslie et al. wherein the authors provide real-life evidence relating self-reported falls to fracture risk. In a robust population sample nearing 25,000 women, increasing number of falls within the past year was associated with an increasing fracture risk, and this relationship persisted after adjusting for covariates that are recognized to predispose to fragility fractures, including age, body mass index, and bone mineral density. Women’s health providers are encouraged to familiarize themselves with the work of Leslie et al.; the authors’ message, that fall history be incorporated into risk quantification measures, is striking in its simplicity and profound in its preventative potential given that fall risk in and of itself may be mitigated in many through targeted interventions.

Lubna Pal, MBBS, MS, is professor and fellowship director of the division of reproductive endocrinology & infertility at Yale University, New Haven, Conn. She also is the director of the Yale reproductive endocrinology & infertility menopause program. She said she had no relevant financial disclosures. Email her at obnews@mdedge.com.

Analysis finds no mortality reductions with osteoporosis drugs

Despite earlier research suggesting that some drug treatments for osteoporosis reduce mortality, a meta-analysis has failed to find any clear mortality benefits in patients with osteoporosis.

A paper published in JAMA Internal Medicine analyzed data from 38 randomized, placebo-controlled clinical trials of osteoporosis drugs involving a total of 101,642 participants.

“Studies have estimated that less than 30% of the mortality following hip and vertebral fractures may be attributed to the fracture itself and, therefore, potentially avoidable by preventing the fracture,” wrote Steven R. Cummings, MD, of the San Francisco Coordinating Center at the University of California, San Francisco, and colleagues. “Some studies have suggested that treatments for osteoporosis may directly reduce overall mortality rates in addition to decreasing fracture risk.”

Despite covering a diversity of drugs, including bisphosphonates, denosumab (Prolia), selective estrogen receptor modulators, parathyroid hormone analogues, odanacatib, and romosozumab (Evenity), the analysis found no significant association between receiving a drug treatment for osteoporosis and overall mortality.

The researchers did a separate analysis of the 21 clinical trials of bisphosphonate treatments, again finding no impact of treatment on overall mortality. Similarly, an analysis of six zoledronate clinical trials found no statistically significant impact on mortality, although the authors noted some heterogeneity in the results. For example, two large trials found 28% and 35% reductions in mortality, but these effects were not seen in the other zoledronate trials.

An analysis limited to nitrogen-containing bisphosphonates (alendronate, risedronate, ibandronate, and zoledronate) showed a nonsignificant trend toward lower overall mortality, although this became even less statistically significant when trials of zoledronate were excluded.

“More data from placebo-controlled clinical trials of zoledronate therapy and mortality rates are needed to resolve whether treatment with zoledronate is associated with reduced mortality in addition to decreased fracture risk,” the authors wrote.

They added that the 25%-60% mortality reductions seen in previous observational studies were too large to be attributable solely to reductions in the risk of fracture and were perhaps the result of unmeasured confounders associated with lower mortality.

“The apparent reduction in mortality may be an example of the ‘healthy adherer effect,’ which has been documented in studies reporting that participants who adhered to placebo treatment in clinical trials had lower mortality,” they wrote, citing data from the Women’s Health Study that showed 36% lower mortality in those who were at least 80% adherent to placebo.

“This effect is particularly applicable to observational studies of treatments for osteoporosis because only an estimated half of women taking oral drugs for the treatment of osteoporosis continued the regimen for 1 year, and even fewer continued longer,” they added.

They did note that one limitation of their analysis was that it did not include a large clinical trial of the antiresorptive drug odanacatib, which was available only in abstract form at the time.

One author reported receiving grants and personal fees from a pharmaceutical company during the conduct of the study, and another reported receiving grants and personal fees outside the submitted work. No other conflicts of interest were reported.

SOURCE: Cummings SR et al. JAMA Intern Med. 2019 Aug 19. doi: 10.1001/jamainternmed.2019.2779.

Bisphosphonates improve BMD in pediatric rheumatic disease

Prophylactic treatment with bisphosphonates could significantly improve bone mineral density (BMD) in children and adolescents receiving steroids for chronic rheumatic disease, a study has found.

A paper published in EClinicalMedicine reported the outcomes of a multicenter, double-dummy, double-blind, placebo-controlled trial involving 217 patients who were receiving steroid therapy for juvenile idiopathic arthritis, juvenile systemic lupus erythematosus, juvenile dermatomyositis, or juvenile vasculitis. The patients were randomized to risedronate, alfacalcidol, or placebo, and all of the participants received 500 mg calcium and 400 IU vitamin D daily.

After 1 year, lumbar spine and total body (less head) BMD had increased in all groups, compared with baseline, but the greatest increase was seen in patients who had been treated with risedronate.

The lumbar spine areal BMD z score remained the same in the placebo group (−1.15 to −1.13), decreased from −0.96 to −1.00 in the alfacalcidol group, and increased from −0.99 to −0.75 in the risedronate group.

The change in z scores was significantly different between placebo and risedronate groups, and between risedronate and alfacalcidol groups, but not between placebo and alfacalcidol.

“The acquisition of adequate peak bone mass is not only important for the young person in reducing fracture risk but also has significant implications for the development of osteoporosis in later life, if peak bone mass is suboptimal,” wrote Madeleine Rooney, MBBCH, of Queen’s University Belfast (Northern Ireland), and associates.

There were no significant differences between the three groups in fracture rates. However, researchers were also able to compare Genant scores for vertebral fractures in 187 patients with pre- and posttreatment lateral spinal x-rays. That showed that the 54 patients in the placebo arm and 52 patients in the alfacalcidol arm had no change in their baseline Genant score of 0 (normal). However, although all 53 patients in the risedronate group had a Genant score of 0 at baseline, at 1-year follow-up, 2 patients had a Genant score of 1 (mild fracture), and 1 patient had a score of 3 (severe fracture).

Among biochemical parameters, the researchers saw a drop in parathyroid hormone in the placebo and alfacalcidol groups but a rise in the risedronate group. However, they were not able to identify any changes in bone markers that might have indicated which patients responded better to treatment.

Around 90% of participants in each group were also being treated with disease-modifying antirheumatic drugs. The rates of biologic use were 10.5% in the placebo group, 23.9% in the alfacalcidol group, and 10.1% in the risedronate group.

The researchers also noted a 7% higher rate of serious adverse events in the risedronate group, but emphasized that there were no differences in events related to the treatment.

In an accompanying editorial, Ian R. Reid, MBBCH, of the department of medicine, University of Auckland (New Zealand), noted that the study was an important step toward finding interventions for the prevention of steroid-induced bone loss in children. “The present study indicates that risedronate, and probably other potent bisphosphonates, can provide bone preservation in children and young people receiving therapeutic doses of glucocorticoid drugs, whereas alfacalcidol is without benefit. The targeted use of bisphosphonates in children and young people judged to be at significant fracture risk is appropriate. However, whether preventing loss of bone density will reduce fracture incidence remains to be established.”

The study was funded by Arthritis Research UK. No conflicts of interest were declared.

SOURCE: Rooney M et al. EClinicalMedicine. 2019 Jul 3. doi: 10.1016/j.eclinm.2019.06.004.

Timing of adjuvant treatment impacts pancreatic cancer survival

The timing for adjuvant treatment following surgery for pancreatic cancer appears to have a sweet spot associated with the best survival outcomes, according to a study published in JAMA Network Open.

Researchers analyzed data from the National Cancer Database for 7,548 patients with stage I-II resected pancreatic cancer, 5,453 of whom had received adjuvant therapy and 2,095 who did not.

“While the benefit of adjuvant therapy to patients with resected pancreatic cancer is accepted, its optimal timing after surgery remains under investigation,” wrote Sung Jun Ma, MD, from the Roswell Park Comprehensive Cancer Center in Buffalo, N.Y., and coauthors.

After a median overall follow-up of 38.6 months, they found the lowest mortality risk was in the reference cohort of patients who started adjuvant therapy 28-59 days after surgery. In comparison, patients who received early adjuvant therapy – within 28 days of surgery – had a 17% higher mortality (P = .03), and those who received adjuvant therapy late – more than 59 days after surgery – had a 9% higher mortality (P = .008).

The overall survival rate at 2 years was 45.2% for the early adjuvant therapy cohort and 52.5% for the reference cohort.

Despite the higher mortality among the early adjuvant therapy cohort, patients treated with adjuvant therapy more than 12 weeks after surgery still showed improved survival, compared with patients treated with surgery alone, particularly those with node-positive disease.

“To our knowledge, it is the first study to suggest that patients who commence adjuvant therapy within 28-59 days after primary surgical resection of pancreatic adenocarcinoma have improved survival outcomes compared with those who waited for more than 59 days,” the authors wrote. “However, patients who recover slowly from surgery may still benefit from delayed adjuvant therapy initiated more than 12 weeks after surgery.”

No treatment interactions were seen for other variables such as age, comorbidity score, tumor size, pathologic T stages, surgical margin, duration of postoperative inpatient admission, unplanned readmission within 30 days after surgery, and time from diagnosis to surgery.

The analysis also revealed that patients with a primary tumor at the pancreatic body and tail, and those receiving multiagent chemotherapy or radiation therapy, were less likely to receive delayed adjuvant therapy. However, older or black patients, those with lower income, those with a postoperative inpatient admission longer than 1 week, and those with an unplanned readmission within 30 days after surgery were more likely to have delayed initiation of adjuvant therapy.

No conflicts of interest were reported.

SOURCE: Ma SJ et al. JAMA Netw Open. 2019 Aug 14. doi: 10.1001/jamanetworkopen.2019.9126.

Algorithm boosts MACE prediction in patients with chest pain

Adding electrocardiogram findings and clinical assessment to high-sensitivity cardiac troponin measurements in patients presenting with chest pain could improve predictions of their risk of 30-day major adverse cardiac events, particularly unstable angina, research suggests.

Investigators reported outcomes of a prospective study involving 3,123 patients with suspected acute myocardial infarction. The findings were published in the Journal of the American College of Cardiology.

The aim of the researchers was to validate an extended algorithm that combined the European Society of Cardiology’s high-sensitivity cardiac troponin measurement at presentation and after 1 hour (ESC hs-cTn 0/1 h algorithm) with clinical assessment and ECG findings to aid prediction of major adverse cardiac events (MACE) within 30 days.

The clinical assessment involved the treating ED physician’s use of a visual analog scale to assess the patient’s pretest probability for an acute coronary syndrome (ACS), with a score above 70% qualifying as high likelihood.

The researchers found that the ESC hs-cTn 0/1 h algorithm alone triaged significantly more patients toward rule-out for MACE than did the extended algorithm (60% vs. 45%, P less than .001). This resulted in 487 patients being reclassified toward “observe” by the extended algorithm, and among this group the 30-day MACE rate was 1.1%.

However, the 30-day MACE rates were similar in the two groups – 0.6% among those ruled out by the ESC hs-cTn 0/1 h algorithm alone and 0.4% in those ruled out by the extended algorithm – resulting in a similar negative predictive value.
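
In this design, triage toward rule-out acts as a negative test result, so the negative predictive value (NPV) is simply 1 minus the 30-day MACE rate among ruled-out patients. A minimal sketch of that arithmetic (Python), using the percentages above as illustrative inputs; the function and variable names are ours, not the investigators':

    # NPV = true negatives / (true negatives + false negatives)
    #     = proportion of ruled-out patients who remain MACE-free at 30 days
    def npv(event_rate_among_ruled_out: float) -> float:
        return 1.0 - event_rate_among_ruled_out

    print(f"{npv(0.006):.1%}")  # ESC hs-cTn 0/1 h algorithm alone: 99.4%
    print(f"{npv(0.004):.1%}")  # extended algorithm: 99.6%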

“These estimates will help clinicians to appropriately manage patients triaged toward rule-out according to the ESC hs-cTnT 0/1 h algorithm, in whom either the [visual analog scale] for ACS or the ECG still suggests the presence of an ACS,” wrote Thomas Nestelberger, MD, of the Cardiovascular Research Institute Basel (Switzerland) at the University of Basel, and coinvestigators.

The ESC hs-cTn 0/1 h algorithm also ruled in fewer patients than did the extended algorithm (16% vs. 26%, P less than .001), giving it a higher positive predictive value.


When the researchers added unstable angina to the major adverse cardiac event outcome, they found the ESC hs-cTn 0/1 h algorithm had a lower negative predictive value and a higher negative likelihood ratio compared with the extended algorithm for patients ruled out, but a higher positive predictive value and positive likelihood ratio for patients ruled in.
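
For readers keeping the terminology straight, the likelihood ratios referred to here are the standard diagnostic-accuracy quantities (our gloss, not a formula taken from the paper):

    LR+ = sensitivity / (1 − specificity)
    LR− = (1 − sensitivity) / specificity

A lower LR− means a negative (ruled-out) result argues more strongly against 30-day MACE; a higher LR+ means a positive (ruled-in) result argues more strongly for it.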

“Our findings corroborate and extend previous research regarding the development and validation of algorithms for the safe and effective rule-out and rule-in of MACE in patients with symptoms suggestive of AMI,” the authors wrote.

This study was supported by the Swiss National Science Foundation, the Swiss Heart Foundation, the European Union, the Cardiovascular Research Foundation Basel, the University Hospital Basel, Abbott, Beckman Coulter, Biomerieux, BRAHMS, Roche, Nanosphere, Siemens, Ortho Diagnostics, and Singulex. Several authors reported grants and support from the pharmaceutical sector.

SOURCE: Nestelberger T et al. J Am Coll Cardiol. 2019 Aug 20. doi: 10.1016/j.jacc.2019.06.025.

Algorithms to guide chest pain management

In patients presenting at the emergency department with chest pain, it’s important not only to diagnose acute myocardial infarction, but also to predict short-term risk of cardiac events to help guide management. This thoughtful and comprehensive analysis is the largest study assessing the added value of clinical and ECG assessment to the prognostication by high-sensitivity cardiac troponin algorithms in patients evaluated for chest pain. It reinforces the accuracy of hs-cTn at presentation and after 1 hour (ESC hs-cTn 0/1 h) algorithms to predict AMI and 30-day AMI-related events.

It is important to note that if unstable angina had been included as a major adverse cardiac event, the study would have found that the extended algorithm performs better than the hs-cTn 0/1 h algorithm in the prediction of this endpoint.

Germán Cediel, MD, is from Hospital Universitari Germans Trias i Pujol in Spain, Alfredo Bardají, MD, is from the Joan XXIII University Hospital in Spain, and José A. Barrabés, MD, is from Vall d’Hebron University Hospital and Research Institute, Universitat Autònoma de Barcelona. The comments are adapted from an editorial (J Am Coll Cardiol. 2019 Aug 20. doi: 10.1016/j.jacc.2019.05.065). The authors declared support from Instituto de Salud Carlos III, Spain, cofinanced by the European Regional Development Fund, and declared consultancies and educational activities with the pharmaceutical sector.

Vitals

 

Key clinical point: Clinical assessment and ECG may add to assessment of MACE risk in patients with chest pain.

Major finding: High-sensitivity cardiac troponin measurements combined with ECG and clinical assessment can help rule out MACE and unstable angina.

Study details: A prospective study of 3,123 patients with suspected acute myocardial infarction.

Aspirin interacts with epigenetics to influence breast cancer mortality

The impact of prediagnosis aspirin use on mortality in women with breast cancer is significantly tied to epigenetic changes in certain breast cancer–related genes, investigators reported.

While studies have shown aspirin reduces the risk of breast cancer development, there are limited and inconsistent data on the effect of aspirin on prognosis and mortality after a diagnosis of breast cancer, Tengteng Wang, PhD, from the department of epidemiology at the University of North Carolina at Chapel Hill, and coauthors wrote in Cancer.

To address this, they analyzed data from 1,508 women who had a first diagnosis of primary breast cancer and were involved in the Long Island Breast Cancer Study Project; they then looked at the women’s methylation status, which is a mechanism of epigenetic change.

Around one in five participants reported ever using aspirin, and the analysis showed that ever use of aspirin was associated with an overall 13% decrease in breast cancer–specific mortality.

However, researchers saw significant interactions between aspirin use and methylation status – both of LINE-1, a marker of methylation of genetic elements that play key roles in maintaining genomic stability, and of breast cancer–specific genes.

They found that aspirin use in women with LINE-1 hypomethylation was associated with a risk of breast cancer–specific mortality that was 45% higher than that of nonusers (P = .05).

Compared with nonusers, aspirin users with a methylated tumor BRCA1 promoter had a significant 16% higher breast cancer mortality (P = .04) and 67% higher all-cause mortality (P = .02). However, the study showed aspirin did not affect mortality in women with an unmethylated BRCA1 promoter.

For the progesterone receptor (PR) gene, aspirin use by women with methylation of the PR promoter was associated with a 63% higher breast cancer–specific mortality, compared with nonusers, but showed no statistically significant effect on all-cause mortality.
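
Interaction analyses of this kind are typically implemented as a survival regression with a product term between the exposure and the epigenetic marker. The sketch below (Python, using the lifelines library on simulated data) illustrates that generic approach; it is not the authors' code, and every variable name and number in it is a placeholder of ours:

    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({
        "aspirin": rng.integers(0, 2, n),     # prediagnosis aspirin use (0/1), simulated
        "methylated": rng.integers(0, 2, n),  # e.g., BRCA1 promoter methylation (0/1), simulated
    })
    # simulate survival times whose hazard rises when both exposures are present
    hazard = 0.05 * np.exp(0.1 * df["aspirin"] + 0.2 * df["methylated"]
                           + 0.4 * df["aspirin"] * df["methylated"])
    df["time"] = rng.exponential(1.0 / hazard)
    df["event"] = 1  # no censoring, to keep the sketch short
    df["aspirin_x_meth"] = df["aspirin"] * df["methylated"]

    cph = CoxPHFitter()
    cph.fit(df, duration_col="time", event_col="event")
    cph.print_summary()  # the aspirin_x_meth row is the test of interaction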

The findings did not change significantly when the researchers restricted the analysis to receptor-positive or invasive breast cancers, and the associations remained consistent even after adjustment for global methylation.

“Our findings suggest that the association between aspirin use and mortality after breast cancer may depend on methylation profiles and warrant further investigation,” the authors wrote. “These findings, if confirmed, may provide new biological insights into the association between aspirin use and breast cancer prognosis, may affect clinical decision making by identifying a subgroup of patients with breast cancer using epigenetic markers for whom prediagnosis aspirin use affects subsequent mortality, and may help refine risk-reduction strategies to improve survival among women with breast cancer.”

The study was partly supported by the National Institutes of Health. One author declared personal fees from the private sector outside the submitted work.

SOURCE: Wang T et al. Cancer. 2019 Aug 12. doi: 10.1002/cncr.32364.

Intersection of breast cancer, epigenetics, and aspirin

 

This study offers new insights into the intersection of epigenetics, prediagnosis aspirin use, and breast cancer survival at a time when there is an urgent need to understand why some women respond differently to treatment and to find cost-effective therapies for the disease.

Epigenetics is a promising avenue of investigation because epigenetic shifts, such as DNA methylation, that impact the genes responsible for cell behavior and DNA damage and repair are known to contribute to and exacerbate cancer. These epigenetic signatures could act as biomarkers for risk in cancer and also aid with more effective treatment approaches. For example, aspirin is known to affect DNA methylation at certain sites in colon cancer, hence this study’s hypothesis that pre–cancer diagnosis aspirin use would interact with epigenetic signatures and influence breast cancer outcomes.
 

Kristen M. C. Malecki, PhD, is from the department of population health sciences in the School of Medicine and Public Health at the University of Wisconsin, Madison. The comments are adapted from an accompanying editorial (Cancer. 2019 Aug 12. doi: 10.1002/cncr.32365). Dr. Malecki declared support from the National Institutes of Health and the National Institute of Environmental Health Sciences Breast Cancer and the Environment Research Program.

NSAIDs a significant mediator of cardiovascular risk in osteoarthritis

A significant proportion of the increased cardiovascular disease (CVD) risk seen in people with osteoarthritis could be attributable to NSAIDs, new research has suggested.

Writing in Arthritis & Rheumatology, researchers reported the outcomes of a longitudinal, population-based cohort study of 7,743 individuals with osteoarthritis and 23,229 age- and sex-matched controls without osteoarthritis.

“The prevailing hypothesis in the OA to CVD relationship has been that OA patients frequently take NSAIDs to control their pain and inflammation and that this may lead to them developing CVD,” wrote Mohammad Atiquzzaman, a PhD student at the University of British Columbia, Vancouver, and his coauthors. However, they commented that no studies had so far examined this relationship directly in patients with osteoarthritis.

Overall, people with osteoarthritis had a significant 23% higher risk of cardiovascular disease, compared with controls, after adjustment for factors such as body mass index, hypertension, diabetes, hyperlipidemia, and socioeconomic status. They also had a 42% higher risk of congestive heart failure, a 17% higher risk of ischemic heart disease, and a 14% higher risk of stroke.

NSAID use was five times more common among people with osteoarthritis, and NSAIDs alone were associated with a greater than fourfold higher risk of cardiovascular disease, after adjusting for osteoarthritis and other potential confounders.

When the authors performed modeling to break down the effect of osteoarthritis on CVD risk into the direct effect of osteoarthritis itself and the indirect effect mediated by NSAID use, they concluded that 41% of the total effect of osteoarthritis on cardiovascular risk was mediated by NSAIDs. The effect of NSAIDs was particularly pronounced for stroke, for which they estimated that the drugs contributed 64% of the increase in risk, and for ischemic heart disease, for which they contributed 56% of the increased risk.
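
Mediation analyses of this kind split a total effect into a direct component and an indirect, mediator-carried component; for ratio measures the arithmetic is usually done on the log scale. A minimal sketch (Python) under the common multiplicative convention, which is not necessarily the authors' exact estimator: the total hazard ratio below mirrors the study's reported 23% higher CVD risk, and the indirect component is back-solved so the output reproduces the reported 41%.

    import math

    # Multiplicative decomposition on the hazard ratio scale: TE = NDE * NIE,
    # so the proportion of the total effect mediated is log(NIE) / log(TE).
    def proportion_mediated(total_hr: float, indirect_hr: float) -> float:
        return math.log(indirect_hr) / math.log(total_hr)

    print(f"{proportion_mediated(1.23, 1.0885):.0%}")  # ~41% of the effect mediated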

Subgroup analysis suggested that conventional NSAIDs were responsible for around 29% of the total increased risk of cardiovascular disease, while selective COX-2 inhibitors, or coxibs, such as celecoxib, lumiracoxib, rofecoxib, and valdecoxib mediated around 21%. For ischemic heart disease, conventional NSAIDs explained around 45% of the increased risk, while selective coxibs explained around 32% of the risk. Similarly, with congestive heart failure and stroke, the proportion of risk mediated by NSAIDs was higher for conventional NSAIDs, compared with coxibs.


The authors noted that while a number of previous studies have found osteoarthritis is an independent risk factor for cardiovascular disease, theirs was the first study to specifically examine the role that NSAIDs play in that increased risk.

However, they noted that their information on NSAID use was gleaned from prescription claims data, which did not include information on over-the-counter NSAID use. Their analysis was also unable to include information on family history of cardiovascular disease, smoking, and physical activity, which are important cardiovascular disease risk factors. They did observe that the rates of obesity were higher among the osteoarthritis group when compared with controls (29% vs. 20%), and hypertension and COPD were also more common among individuals with osteoarthritis.

There was no outside funding for the study, and the authors had no conflicts of interest to declare.

SOURCE: Atiquzzaman M et al. Arthritis Rheumatol. 2019 Aug 6. doi: 10.1002/art.41027.

Half of psychiatry, psychology trial abstracts contain spin

About half of papers in a sample of psychiatry and psychology journals have evidence of “spin” in the abstract, according to an analysis published in BMJ Evidence-Based Medicine.

Samuel Jellison, a third-year medical student at Oklahoma State University, Tulsa, and coauthors wrote that the results of randomized, controlled trials should be reported objectively because of their importance for psychiatry clinical practice. However, because researchers are allowed more latitude in the abstract of a paper to highlight certain results or conclusions, the abstract might not accurately represent the findings of the study.

To evaluate this, the authors investigated the use of spin in abstracts, which they defined as “use of specific reporting strategies, from whatever motive, to highlight that the experimental treatment is beneficial, despite a statistically nonsignificant difference for the primary outcome, or to distract the reader from statistically nonsignificant results.”

They analyzed 116 randomized, controlled trials of interventions with a statistically nonsignificant primary endpoint and found that 56% contained spin in the abstract. Spin was evident in 2% of publication titles, 21% of abstract results sections, and 49.1% of abstract conclusions sections. In 15% of trials, spin was found in both the results and the conclusions.

Spin was more common in trials of pharmacologic treatments, compared with nonpharmacologic treatments. However, the study did not find a higher level of spin in industry-funded studies, and in fact, spin was more common in publicly funded research.

The most common form of spin was focusing on a statistically significant primary or secondary endpoint while omitting one or more nonsignificant primary endpoints. Other spin strategies included claiming noninferiority or equivalence for a statistically nonsignificant endpoint; using phrases such as “trend towards significance”; or focusing on statistically significant subgroup analyses, such as per protocol instead of intention to treat.

The authors observed that most physicians read only article abstracts, and that up to one-quarter of editorial decisions are based on the abstract alone.

“Adding spin to the abstract of an article may mislead physicians who are attempting to draw conclusions about a treatment for patients,” they wrote, while calling for efforts to discourage spin in abstracts.

In an interview, Paul S. Nestadt, MD, said the findings were not surprising.

“The proportion [56%] of psychiatry and psychology abstracts which Jellison et al. found to contain spin is similar to that found in broader studies of all biomedical literature in previous reviews,” said Dr. Nestadt, assistant professor in the department of psychiatry and behavioral sciences at Johns Hopkins University, Baltimore. “It is disheartening that attempts to mislead have become ‘standard of care’ in medical literature, but it is a predictable outcome of increasing competition for shrinking grant funding, awarded partly on the basis of publication history in leading journals that maintain clear publication biases toward positive results.

“As the authors point out, we all share a responsibility to call out spin when we see it, whether in our role as reviewer, editor, coauthor, or as the writers themselves.”

Neither Mr. Jellison nor his coauthors reported funding or conflicts of interest.

SOURCE: Jellison S et al. BMJ Evid Based Med. 2019 Aug 5. doi: 10.1136/bmjebm-2019-111176.



FROM BMJ EVIDENCE-BASED MEDICINE

Vitals

Key clinical point: Spin is common in abstracts of psychiatry and psychology clinical trial papers.

Major finding: Spin was found in 56% of abstracts for psychiatry and psychology trials with a nonsignificant primary outcome.

Study details: An analysis of 116 randomized, controlled trials of interventions with a statistically nonsignificant primary outcome.

Disclosures: No funding or conflicts of interest were reported.

Source: Jellison S et al. BMJ Evid Based Med. 2019 Aug 5. doi: 10.1136/bmjebm-2019-111176.
