Eosinophil-guided therapy reduces corticosteroid use in COPD

Using eosinophil levels to guide corticosteroid treatment in patients with chronic obstructive pulmonary disease (COPD) was noninferior to standard treatment in terms of the number of days alive and out of hospital, new research has found.

Writing in the Lancet Respiratory Medicine, researchers reported the outcomes of a multicenter, controlled, open-label trial comparing eosinophil-guided and standard therapy with systemic corticosteroids in 318 patients with COPD.

Pradeesh Sivapalan, MD, of the respiratory medicine section of Herlev and Gentofte Hospital at the University of Copenhagen, and coauthors wrote that eosinophilic inflammation had been seen in 20%-40% of patients with acute exacerbations of COPD. Patients with higher blood eosinophil counts were at increased risk of acute exacerbations but were also more likely to benefit from corticosteroid treatment.

In the eosinophil-guided therapy arm of the study, 159 patients received 80 mg of intravenous methylprednisolone on day 1, then from the second day were treated with 37.5 mg of oral prednisolone daily for up to 4 days, but only on days when their blood eosinophil count was at least 0.3 × 10⁹ cells/L. In the control arm, 159 patients also received 80 mg of intravenous methylprednisolone on day 1, followed by 37.5 mg of prednisolone tablets daily for 4 days.

After 14 days, there were no significant differences between the two groups for mean days alive and out of hospital.

Within the first month, there were 12 more cases of readmission with COPD, including three deaths, in the eosinophil-guided group. The authors said these differences were not statistically significant, but “because the study was not powered to detect differences in this absolute risk range, we cannot rule out that this was an actual harm effect from the interventional strategy.”

The eosinophil-guided therapy group did show more than a 50% reduction in the median duration of systemic corticosteroid therapy: 2 days, compared with 5 days in the control group (P less than .0001). The difference between the two groups remained significant at days 30 and 90.

“The tested strategy was successful in reducing the exposure to systemic corticosteroids, but we cannot exclude the possibility that a more aggressive algorithm, such as a single dose of systemic corticosteroid, might have been more effective,” the authors wrote.

At the 90-day follow-up, there were no differences between the groups in the number of infections requiring antibiotic treatment, or in rates of dyspepsia, ulcer complications, or initiation of new proton-pump inhibitor treatment.

The study was supported by the Danish Regions Medical Fund and the Danish Council for Independent Research. Two authors declared personal fees from pharmaceutical companies outside the submitted work. No other conflicts were declared.

SOURCE: Sivapalan P et al. Lancet Respir Med. 2019 May 20. doi: 10.1016/S2213-2600(19)30176-6.

Immunotherapy drug teplizumab may stall onset of type 1 diabetes

The monoclonal antibody teplizumab may delay the onset of type 1 diabetes in individuals at high risk, according to research presented at the annual scientific sessions of the American Diabetes Association.

In this study, 76 first-degree relatives of individuals with type 1 diabetes – who did not themselves have the disease but were considered at high risk because of antibodies and abnormal glucose tolerance tests – were randomized to a single two-week outpatient course of intravenous teplizumab or saline placebo. The patients, of whom 72% were 18 years of age or younger, were followed for a median of 745 days and had twice-yearly oral glucose tolerance testing.

Overall, 43% of the 44 patients who received teplizumab were diagnosed with type 1 diabetes during the course of the study, compared with 72% of the 32 who received the placebo. The treatment was associated with a 59% reduction in the hazard ratio for type 1 diabetes, even after adjustment for age, the results of a second oral glucose tolerance test before randomization, and the presence of anti-GAD65 antibodies.

The median time to diagnosis was 48.4 months in the teplizumab group and 24.4 months in the placebo group. The greatest effect was seen in the first year after randomization, during which only 7% of the teplizumab group were diagnosed with type 1 diabetes, compared with 44% of the placebo group. The findings were published simultaneously in the New England Journal of Medicine.

“The delay of progression to type 1 diabetes is of clinical importance, particularly for children, in whom the diagnosis is associated with adverse outcomes, and given the challenges of daily management of the condition,” wrote Kevan C. Herold, MD, professor of immunobiology and medicine at Yale University, New Haven, Conn., and coauthors.

There were significantly more adverse events in the teplizumab group, compared with placebo, with three-quarters of the 20 grade 3 adverse events being lymphopenia during the first 30 days. In all but one participant, however, the lymphopenia resolved by day 45. Participants receiving teplizumab also reported a higher incidence of dermatologic adverse events, such as a spontaneously resolving rash that was experienced by just over one-third of the group.

The researchers also looked for evidence of T-cell unresponsiveness, which has been previously seen in patients with new-onset type 1 diabetes who received treatment with teplizumab. They noted an increase in a particular type of CD8+ T cell associated with T-cell unresponsiveness at months 3 and 6 in participants treated with teplizumab.

Teplizumab is an Fc receptor-nonbinding monoclonal antibody that has been shown to reduce the loss of beta-cell function in patients with type 1 diabetes (Diabetes. 2013 Nov;62(11):3766-74).

The study was supported by the National Institutes of Health, the Juvenile Diabetes Research Foundation, and the American Diabetes Association, with the study drug and additional site monitoring provided by MacroGenics. Eight authors declared grants, personal fees, and other support from private industry, with one also declaring income and stock options from MacroGenics.

SOURCE: Herold K et al. N Engl J Med. 2019 Jun 9. doi: 10.1056/NEJMoa1902226*

*Correction, 6/9/2019: An earlier version of this story misstated the doi number for the journal article. The number is 10.1056/NEJMoa1902226.

Striking results, but questions still to be answered

While the results of this trial are striking, there are several caveats that are important to note. The trial did show a significant delay in the onset of type 1 diabetes – with the greatest preventive benefit in the first year of the trial – but these results do not necessarily mean that immune modulation represents a potential cure.

They do, however, provide indirect evidence of the pathogenesis of beta-cell destruction and the potential for newer biologic agents to alter the course of this.

The study also was small and involved only a 2-week course of the treatment. As such, there are still questions to be answered about the duration of treatment, longer-term side effects, subgroups of patients who may respond differently to treatment, and the longer clinical course of those who do respond to treatment.

Julie R. Ingelfinger, MD, is deputy editor of the New England Journal of Medicine, and Clifford J. Rosen, MD, is from the Maine Medical Center Research Institute and is associate editor of the journal. Their comments are adapted from an accompanying editorial (N Engl J Med. 2019 Jun 9. doi: 10.1056/NEJMe1907458). No conflicts of interest were declared.

Vitals

 

Key clinical point: Teplizumab may delay the onset of type 1 diabetes in individuals at risk.

Major finding: Teplizumab treatment was associated with a 59% lower hazard ratio for the diagnosis of type 1 diabetes.

Study details: Phase 2, randomized, double-blind, placebo-controlled trial in 76 participants.

Disclosures: The study was supported by the National Institutes of Health, the Juvenile Diabetes Research Foundation, and the American Diabetes Association, with the study drug and additional site monitoring provided by MacroGenics. Eight authors declared grants, personal fees, and other support from private industry, with one also declaring income and stock options from MacroGenics.

Source: Herold K et al. N Engl J Med. 2019 Jun 9. doi: 10.1056/NEJMoa1902226.

Postpartum LARC uptake increased with separate payment

The introduction of a separate payment for immediate postpartum placement of long-acting reversible contraception was associated with increased use and a slowdown in the number of short-interval births among patients covered by South Carolina’s Medicaid program.

Immediate postpartum long-acting reversible contraception (IPP-LARC) is recommended to reduce the incidence of short pregnancy intervals – pregnancies within 6-24 months of each other. The global payment for hospital labor and delivery, however, may act as a disincentive to providing IPP-LARC, according to Maria W. Steenland of Brown University, Providence, R.I., and coauthors.

They looked at inpatient Medicaid claims data for 242,825 childbirth hospitalizations in South Carolina from 2010 to 2017; during that time, the state Medicaid program began to provide an additional payment for IPP-LARC.

At the start of the study, just 0.07% of women received IPP-LARC. After the change in reimbursement policy in March 2012, there was a steady monthly increase in use of 0.07 percentage points among adults and 0.1 percentage points among adolescents. In December 2017, 5.65% of adults and 10.48% of adolescents received IPP-LARC (JAMA. 2019. doi: 10.1001/jama.2019.6854).

There was a corresponding, significant change in the trend of short-interval births among adolescents. Before the policy change, adolescent short-interval births had been increasing, but by March 2016 – 4 years after the payment change – the adolescent short-interval birth rate was 5.28 percentage points lower than what was expected had the increasing trend continued.

There was no significant change in the trend for short-interval births among adults.

“These findings suggest that IPP-LARC reimbursement could increase immediate postpartum contraceptive options and help adolescents avoid short-interval births,” the authors wrote, noting that, as of February 2018, 36 other states’ Medicaid programs had begun separately reimbursing for IPP-LARC.

They also raised the possibility that there may have been confounding due to other events that occurred at the same time as the policy changes.

The study was supported by the Eric M. Mindich Research Fund, and one author was supported by the National Institutes of Health. No conflicts of interest were declared.

SOURCE: Steenland M et al. JAMA. 2019. doi: 10.1001/jama.2019.6854.

Gun ownership practices linked to soldier suicide risk

U.S. soldiers who own firearms, store a loaded gun at home, or carry a gun publicly when not on duty are at significantly greater risk of suicide death, a case-control study of 135 U.S. Army soldiers who died by suicide shows.

“Our findings concurred with earlier studies by showing that factors beyond ownership of a firearm were associated with an increased risk of suicide,” wrote Catherine L. Dempsey, PhD, MPH, and her coauthors.

Since 2004, the rate of suicide deaths among Army soldiers has exceeded the rate of combat deaths each year, which prompted Dr. Dempsey, of the Center for the Study of Traumatic Stress at the Uniformed Services University of the Health Sciences, Bethesda, Md., and her coauthors to investigate whether increased access to firearms might be associated with an increased risk of suicide. The study was published in JAMA Network Open.

In the study, the researchers interviewed the next of kin or supervisors of 135 Army soldiers who had taken their own lives while on active duty between 2011 and 2013, 55% of whom used firearms to do so. They compared those findings with those of 137 controls matched for suicide propensity based on sociodemographic and Army history risk factors, and with 118 soldiers who had experienced suicidal ideation in the past year.

This analysis showed that soldiers who had stored a gun loaded with ammunition at home, or who had carried a personal gun in public, had nearly fourfold higher odds of suicide (odds ratio, 3.9; P = .002), compared with propensity-matched controls.

Similarly, those who owned one or more handguns, stored a gun loaded with ammunition at home, and carried a personal gun in public had more than threefold higher odds of suicide.

The study found that soldiers who died by suicide were 90% more likely than matched controls to own at least one gun, four times more likely to store that gun loaded with ammunition at home, and three times more likely to carry a gun in public, compared with controls.

There was a suggestion that the use of safety locks at home was protective, but this did not reach statistical significance.

However, the study did not find significant differences in firearm accessibility characteristics between the soldiers who died by suicide and the controls with suicidal ideation.

“Some current theories of suicide (e.g., the interpersonal theory of suicide) suggest that fatal suicidal behavior results require not only the presence of suicidal desire but also a developed capability or capacity for suicidal behavior,” the authors wrote. “According to the interpersonal theory of suicide, this capability for lethal self-injury is acquired through repeated exposure to painful and fear-inducing experiences, thus habituating an individual to the pain and fear required to enact a fatal suicide attempt.”

Dr. Dempsey and her coauthors argued that their study supported a continued focus on “means restriction” counseling – that is, limiting or removing access to lethal methods of suicide – as well as on “motivational interviewing.”

They cited the fairly small sample size and relatively small response rates to surveys as limitations. However, they wrote, the response rates “were high for multi-informant interviews conducted within a military population.”

The study was supported by the U.S. Department of the Army, U.S. Department of Defense, U.S. Department of Health & Human Services, National Institutes of Health, and National Institute of Mental Health. One author declared grants from the Military Suicide Research Consortium outside the submitted work, and one author declared support, consultancies, and advisory board positions with the pharmaceutical industry, and co-ownership of a mental health market research firm. No other conflicts of interest were declared.

 

SOURCE: Dempsey CL et al. JAMA Netw Open. 2019 Jun 7. doi: 10.1001/jamanetworkopen.2019.5383.

Limiting access is a prevention strategy

These findings add to the growing body of evidence that firearm-related behaviors, beyond just gun ownership, may influence suicide risk, and support the most evidence-based and conceptually sound recommendation for suicide prevention, which is to remove the firearm from the home.

There are some limitations to this study. For example, the validity of the “psychological autopsy” approach used in the study has not been determined, and questions remain about the accuracy of family members’ reports of gun ownership and storage practices. Despite this, the study provides support for recommendations for a change in firearm behaviors to reduce the risk of suicide.

Joseph A. Simonetti, MD, MPH, is affiliated with the Rocky Mountain Mental Illness Research, Education and Clinical Center for Suicide Prevention at the Rocky Mountain Regional VA Medical Center, Aurora, Colo. Ali Rowhani-Rahbar, MD, MPH, PhD, is affiliated with the Harborview Injury Prevention and Research Center at the University of Washington, Seattle. Their comments are adapted from an accompanying editorial (JAMA Netw Open. 2019 Jun 7. doi: 10.1001/jamanetworkopen.2019.5400). No conflicts of interest were declared.

Lipoprotein(a) levels can guide CV risk assessment and treatment

Lipoprotein(a) is an independent risk factor for atherosclerotic cardiovascular disease–related events, and plasma levels of Lp(a) could help refine risk assessment and influence treatment decisions, say the authors of a scientific statement from the National Lipid Association.

Don P. Wilson, MD, of Cook Children’s Medical Center, Fort Worth, Tex., and coauthors reviewed the evidence around testing of Lp(a) in clinical practice and its use in guiding treatment for both primary and secondary prevention. Their report is in the Journal of Clinical Lipidology.

Prospective, population-based studies point to a clear link between high Lp(a) levels and high risk of myocardial infarction, coronary heart disease, coronary artery stenosis, carotid stenosis, valvular aortic stenosis, ischemic stroke, cardiovascular mortality, and all-cause mortality, the authors wrote. This association was independent of the effect of other risk factors, including LDL cholesterol.

However, existing Lp(a) assays have not been globally standardized, and there is only incomplete evidence for age-, sex-, or ethnicity-specific cutoff points for high risk.

The authors suggested that Lp(a) levels greater than 50 mg/dL (100 nmol/L) could be considered a risk factor that justifies the initiation of statin therapy. However, they pointed out that this level corresponded to the 80th population percentile in predominantly white populations, whereas in African American populations the equivalent cutoff was around 150 nmol/L.



On the issue of whom to test for Lp(a) serum levels, the authors said testing could reasonably be used to refine risk assessment for atherosclerotic cardiovascular disease in adults with first-degree relatives who experienced premature atherosclerotic cardiovascular disease, in those with a personal history of the disease, or in those with severe hypercholesterolemia or suspected familial hypercholesterolemia.

However, statin therapy does not decrease Lp(a) levels, and there is also evidence that patients with high Lp(a) levels may not show as much LDL-C lowering in response to statin therapy.

“There is a lack of current evidence demonstrating that lowering Lp(a), independently of LDL-C, reduces ASCVD events in individuals with established ASCVD,” the authors wrote. “It appears that large absolute reductions in Lp(a) may be needed to demonstrate a significant clinical benefit.”

Despite this, the authors argued that, in primary prevention, it was reasonable to use an Lp(a) level greater than 50 mg/dL (100 nmol/L) as a “risk-enhancing factor,” and that in high-risk or very-high-risk patients with elevated LDL-C, it could prompt use of more intensive therapies.

Five authors disclosed honoraria or advisory board positions with the pharmaceutical sector. No other conflicts of interest were declared.

SOURCE: Wilson D et al. J Clin Lipidol. 2019 May 17. doi: 10.1016/j.jacl.2019.04.010.

Health insurance rates among cancer survivors increased after ACA


The introduction of the Affordable Care Act was associated with an increase in health insurance coverage among cancer survivors, but cost remains a barrier to insurance, particularly among certain groups, research suggests.

Researchers conducted an analysis of National Health Interview Survey data from 17,806 survey participants who reported a cancer diagnosis. The findings are in JAMA Oncology.

Around one in ten of those surveyed did not have health insurance, but this rate was significantly higher before the introduction of the Affordable Care Act than in the 3-year period after its implementation in 2014 (10.6% vs. 6.2%, P less than .001).
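
As a quick back-of-the-envelope check (illustrative arithmetic only, not the authors’ analysis), those two figures correspond to a drop of about 4 percentage points in absolute terms and a relative reduction of roughly 40%:

```python
# Illustrative arithmetic based on the uninsured rates reported in the article.
pre_aca, post_aca = 0.106, 0.062             # uninsured rates, 2000-2013 vs. 2014-2017

absolute_drop = pre_aca - post_aca           # 0.044 -> 4.4 percentage points
relative_drop = absolute_drop / pre_aca      # ~0.415 -> ~41.5% relative reduction

print(f"Absolute drop: {absolute_drop * 100:.1f} percentage points")
print(f"Relative reduction: {relative_drop:.1%}")
```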

While cost was the most common reason for not having health insurance, the survey showed that the proportion of noninsured cancer survivors who cited cost as the reason for noninsurance decreased significantly after implementation of the Affordable Care Act (49.6% vs. 37.6%, P = .003).

Unemployment was the second-most common reason for noninsurance, but this also decreased in the 2014-2017 period compared with 2000-2013 (37.1% vs. 28.5%, P = .005).

Younger cancer survivors – aged below the mean age of 50.9 years – were 84% more likely to be uninsured, compared with those above the mean age of the study population, and those with a family income below the poverty threshold were nearly twice as likely not to be insured.

Participants of Hispanic ethnicity, noncitizens, and current smokers were significantly more likely to be uninsured.

Before the implementation of the Affordable Care Act, black patients were 29% more likely to be uninsured compared with nonblack patients. But after the ACA was introduced, this difference disappeared.

Nina N. Sanford, MD, of the University of Texas, Dallas, and coauthors wrote that, to their knowledge, this was the first study to look at reasons for noninsurance among cancer survivors, and highlighted that efforts to improve insurance coverage would require “diverse policy initiatives.”

“Despite these improvements [after the Affordable Care Act], our study identified several demographic subgroups who appear to continue to be at risk for not having insurance even after the ACA, which may contribute to worse cancer-specific outcomes, decreased quality of life, and greater mortality,” they wrote. “Policymakers should be aware of these disparities when proposing legislation to either augment or limit health care coverage.”

They expressed concern about the association between smoking and a lack of health insurance, noting that the Affordable Care Act allows insurers to impose a surcharge premium on smokers. However, they pointed out that this initiative has not decreased rates of smoking, and may instead have led to higher rates of noninsurance among cancer survivors who continue to smoke.

One author was supported by the American Society of Radiation Oncology and the Prostate Cancer Foundation, and one author declared funding from the pharmaceutical sector. No other conflicts of interest were declared.

SOURCE: Sanford N et al. JAMA Oncology 2019, May 15. doi: 10.1001/jamaoncol.2019.1973.


Anomalous RT dose linked to lower survival in uterine cancer


As many as one in eight patients with uterine cancer who undergo adjuvant radiation therapy may have been treated with doses that are inconsistent with standard practice, a new study suggests.

Writing in JCO Clinical Cancer Informatics, Corbin D. Jacobs, MD, of Duke University, Durham, N.C., and coauthors analyzed National Cancer Database data from 14,298 women with stage IIIC1-C2 uterine cancer who underwent adjuvant radiation therapy after hysterectomy. The analysis included information on radiation therapy site, modality, dose, fractions, timing, duration, and stage, as well as details about the facilities at which the treatment was given.

Overall, 16% of the women had at least one ‘anomalous’ entry in their records of radiation therapy. The most common anomalies were that the combined total radiation therapy dose was insufficient, or there was an insufficient number of external beam radiation therapy fractions, both of which may have represented an incomplete course of radiation therapy.

Other anomalies were excessive brachytherapy fractions, inconsistency in the staging, and less than 100 days of radiation therapy.
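
The anomaly categories described above function as record-level consistency checks on the registry data. Below is a minimal sketch of how a few of them might be coded; the field names and the dose and brachytherapy thresholds are hypothetical placeholders (only the 20-fraction figure is cited later in the study), and this is not the authors’ actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class RTRecord:
    """Hypothetical radiation therapy record; field names are illustrative only."""
    total_dose_gy: float      # combined external beam plus brachytherapy dose
    ebrt_fractions: int       # external beam fractions delivered
    brachy_fractions: int     # brachytherapy fractions delivered
    stage_consistent: bool    # whether staging fields agree across data items

def flag_anomalies(record: RTRecord,
                   min_total_dose_gy: float = 45.0,   # placeholder threshold
                   min_ebrt_fractions: int = 20,      # the study cites "fewer than 20 fractions"
                   max_brachy_fractions: int = 6):    # placeholder threshold
    """Return the anomaly categories a single record would trigger."""
    flags = []
    if record.total_dose_gy < min_total_dose_gy:
        flags.append("insufficient combined total dose")
    if record.ebrt_fractions < min_ebrt_fractions:
        flags.append("insufficient external beam fractions")
    if record.brachy_fractions > max_brachy_fractions:
        flags.append("excessive brachytherapy fractions")
    if not record.stage_consistent:
        flags.append("inconsistent staging")
    return flags

# Example: a record with a low total dose and too few external beam fractions.
print(flag_anomalies(RTRecord(total_dose_gy=30.0, ebrt_fractions=15,
                              brachy_fractions=3, stage_consistent=True)))
```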

The study showed that the 5-year overall survival rate in individuals who had at least one anomalous data entry was 51.3% compared with 58% among individuals without any anomalous entries (P less than .001). The difference in survival rates was entirely accounted for by insufficient, excessive, or unknown radiation therapy dose.

More than half of patients in the study had missing or unknown data for at least one entry, and this was associated with lower 5-year survival compared with patients with complete data entries.

The researchers also looked at facility-specific factors, such as the type of facility, its location, and its distance from the patient’s home, and how these impacted the frequency of anomalous data. They found that comprehensive community cancer programs had the lowest incidence of anomalous data (14.7%) compared with non–comprehensive community cancer programs, which had an incidence of 17.1%.

The incidence of anomalous data was highest in facilities in the South Atlantic, East South Central, and West South Central regions of the United States. The farther the reporting facility was from a patient’s home, the more likely the record was to contain anomalous data.

“Because an insufficient RT dose or fewer than 20 fractions accounted for such a large proportion of the anomalies, patients may potentially be more likely to have an incomplete RT course or complete RT in a hypofractionated manner when the facility is farther from their home,” the authors wrote.

One author was an employee of Bioventus, and three declared research funding from the pharmaceutical sector. No other conflicts of interest were declared.

SOURCE: Jacobs C et al. JCO Clin Cancer Inform. 2019, May 3. doi: 10.1200/CCI.18.00118.


Youth suicide: Rates rising more rapidly in girls

Suicide rates a sign of social media impact?

Youth suicide rates appear to be increasing faster in girls than in boys, narrowing the historical gap between the two, according to research published in JAMA Network Open.

From 1999 to 2014, suicide rates in the United States increased by 33%, but the incidence has consistently been higher among males than among females in all age groups.

Recent reports suggesting that suicide rates were increasing in girls prompted Donna A. Ruch, PhD, of the Research Institute at Nationwide Children’s Hospital in Columbus, Ohio, and her coauthors to undertake a cross-sectional study of all suicides among individuals aged 10-19 years in the United States between 1975 and 2016.

During that time there were a total of 85,051 suicide deaths in youth aged 10-19 years; approximately 80% of the deaths were in males, representing a nearly fourfold higher rate in males than females (incidence rate ratio [IRR], 3.82).
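
For readers less familiar with incidence rate ratios, the short calculation below shows how an approximately 80/20 split in deaths corresponds to an IRR near 4 when the male and female populations at risk are of similar size. The person-year figures are hypothetical and serve only to illustrate the arithmetic; the study’s reported IRR of 3.82 reflects its actual denominators.

```python
# Hypothetical, equal person-year denominators used purely to illustrate
# how an ~80%/20% split of deaths maps onto an incidence rate ratio near 4.
male_deaths, female_deaths = 68_000, 17_000            # ~80% vs. ~20% of 85,051 deaths
male_person_years = female_person_years = 1_000_000    # hypothetical denominators

irr = (male_deaths / male_person_years) / (female_deaths / female_person_years)
print(f"IRR = {irr:.2f}")   # 4.00 with these assumptions; the study reports 3.82
```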

From 1975 to 1993, researchers noted a 5.4% annual increase in suicide rates among girls aged 10-14 years and a 4.5% annual increase among boys in the same age group. Rates in both sexes then declined until 2007, at which point suicide rates among girls began increasing by 12.7% annually, compared with 7.1% among boys.
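
Because these figures are annual percentage changes, they compound year over year; a quick, purely illustrative calculation shows how far apart the two trajectories drift over the 2007-2016 period:

```python
# Cumulative change implied by a constant annual percentage change (APC),
# compounded over the nine year-to-year steps from 2007 to 2016.
years = 2016 - 2007
girls_cumulative = (1 + 0.127) ** years - 1   # ~1.93, i.e., the rate roughly triples
boys_cumulative = (1 + 0.071) ** years - 1    # ~0.85, i.e., a bit less than doubling

print(f"Girls: +{girls_cumulative:.0%}  Boys: +{boys_cumulative:.0%} over {years} years")
```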

Overall, the male-to-female incidence rate ratio among youth aged 10-14 years decreased from 3.14 in 1975-1991 to 1.80 in 2007-2016, a statistically significant difference.

“The narrowing gap between male and female rates of suicide was most pronounced among youth aged 10 to 14 years, underscoring the importance of early prevention efforts that take both sex and developmental level into consideration,” the authors wrote.

Ethnicity also played a role: the most consistent declining trend in the male-to-female incidence rate ratio was seen in non-Hispanic white youth, and the decline was also significant in non-Hispanic youth of other races. There was no significant change in the male-to-female incidence rate ratio among younger non-Hispanic black youth or Hispanic youth.

Among youth aged 15-19 years, the difference between male and female suicide rates decreased significantly in non-Hispanic youth of other races, and a significant downward trend also was seen in Hispanic youth.

The analysis also examined method of suicide. The male-to-female incidence rate ratio for firearm suicide increased significantly in youth aged 15-19 years, but it decreased for suicide by hanging or suffocation across all age groups.

“Future research to identify sex-specific risk factors for youth suicide and distinct mechanisms of suicide in male and female individuals within racial/ethnic groups could lead to improved suicide prevention strategies and interventions,” the authors wrote.

One author was supported by a grant from the National Institute of Mental Health, and declared unpaid board membership for the scientific advisory board of a mental health company. No other conflicts of interest were declared.

SOURCE: Ruch D et al. JAMA Network Open. 2019, May 17. doi:10.1001/jamanetworkopen.2019.3886.


Rates of suicide among girls aged 10-14 years tripled between 1999 and 2014, and this new study raises questions about what is driving the trend. Fingers have been pointed at the rise of social media use, particularly in this age group, as a clear and powerful social change over the same period. But social media use has risen among both sexes, so why is it disproportionately affecting girls?

Girls’ social media use may be more likely to result in interpersonal stress; girls are also known to use social media more often and are more likely to experience cyberbullying. Research further suggests that girls with depression encounter more negative comments from peers on social media than boys with depression do, so increasing social media use may be making young girls more vulnerable to suicide.

Joan Luby, MD, and Sarah Kertz, PhD, are from the department of psychiatry at Washington University in St. Louis. These comments are adapted from an editorial accompanying the article by Ruch et al. (JAMA Network Open. 2019, May 17. doi: 10.1001/jamanetworkopen.2019.3916). Dr. Luby reported grants from the National Institute of Mental Health. No other disclosures were reported.


Vitals

 

Key clinical point: Suicide rates are rising faster among girls than boys.

Major finding: Suicide rates in girls have increased annually by 12.7% since 2007, compared with 7.1% among boys.

Study details: Cross-sectional study of 85,051 suicide deaths in youth aged 10-19 years.

Disclosures: One author was supported by a grant from the National Institute of Mental Health, and declared unpaid board membership for the scientific advisory board of a mental health company. No other conflicts of interest were declared.

Source: Ruch D et al. JAMA Network Open 2019, May 17. doi: 10.1001/jamanetworkopen.2019.3886.


New recommendations on TB screening for health care workers


U.S. health care personnel no longer need to undergo routine tuberculosis testing in the absence of known exposure, according to new screening guidelines from the National Tuberculosis Controllers Association and CDC.


The revised guidelines on tuberculosis screening, testing, and treatment of U.S. health care personnel, published in Morbidity and Mortality Weekly Report, are the first update since 2005. The new recommendations reflect a reduction in concern about U.S. health care personnel’s risk of occupational exposure to latent and active tuberculosis infection.

Lynn E. Sosa, MD, from the Connecticut Department of Public Health and National Tuberculosis Controllers Association, and coauthors wrote that rates of tuberculosis infection in the United States have declined by 73% since 1991, from 10.4/100,000 population in 1991 to 2.8/100,000 in 2017. This has been matched by similar declines among health care workers, which the authors said raised questions about the cost-effectiveness of the previously recommended routine serial occupational testing.
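
The 73% figure follows directly from the two reported rates (a quick arithmetic check, not part of the MMWR analysis):

```python
rate_1991, rate_2017 = 10.4, 2.8                      # TB cases per 100,000 population
decline = (rate_1991 - rate_2017) / rate_1991
print(f"Relative decline since 1991: {decline:.0%}")  # -> 73%
```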

“In addition, a recent retrospective cohort study of approximately 40,000 health care personnel at a tertiary U.S. medical center in a low TB-incidence state found an extremely low rate of TST conversion (0.3%) during 1998-2014, with a limited proportion attributable to occupational exposure,” they wrote.

The new guidelines recommend health care personnel undergo baseline or preplacement tuberculosis testing with an interferon-gamma release assay (IGRA) or a tuberculin skin test (TST), as well as individual risk assessment and symptom evaluation.

The individual risk assessment considers whether the person has lived in a country with a high tuberculosis rate, whether they are immunosuppressed, or whether they have had close contact with someone with infectious tuberculosis.

This risk assessment can help decide how to interpret an initial positive test result, the authors said.

“For example, health care personnel with a positive test who are asymptomatic, unlikely to be infected with M. [Mycobacterium] tuberculosis, and at low risk for progression on the basis of their risk assessment should have a second test (either an IGRA or a TST) as recommended in the 2017 TB diagnostic guidelines of the American Thoracic Society, Infectious Diseases Society of America, and CDC,” they wrote. “In this example, the health care personnel should be considered infected with M. tuberculosis only if both the first and second tests are positive.”
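
The quoted two-test approach is, in effect, a small decision rule for baseline screening of asymptomatic, low-risk personnel. The sketch below is one way to express that logic; it is a simplified illustration of the passage above, not an implementation of the full guideline, and the function and parameter names are illustrative rather than drawn from the guideline itself.

```python
from typing import Optional

def baseline_tb_classification(first_test_positive: bool,
                               second_test_positive: Optional[bool],
                               symptomatic: bool,
                               low_risk: bool) -> str:
    """Simplified sketch of the baseline test interpretation described above.

    For asymptomatic, low-risk health care personnel, a positive first test
    (IGRA or TST) is followed by a second test, and the person is considered
    infected with M. tuberculosis only if both tests are positive. Symptomatic
    or higher-risk personnel need fuller clinical evaluation, not modeled here.
    """
    if not first_test_positive:
        return "negative baseline test"
    if symptomatic or not low_risk:
        return "positive test; refer for full clinical evaluation"
    if second_test_positive is None:
        return "repeat test (IGRA or TST) before classification"
    return ("considered infected with M. tuberculosis" if second_test_positive
            else "not considered infected")

# Example: asymptomatic, low-risk worker with a positive first test and a negative repeat.
print(baseline_tb_classification(True, False, symptomatic=False, low_risk=True))
```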

After that baseline testing, personnel do not need to undergo routine serial testing except in the case of known exposure or ongoing transmission. The guideline authors suggested serial screening might be considered for health care workers whose work puts them at greater risk – for example, pulmonologists or respiratory therapists – or for those working in settings in which transmission has happened in the past.

For personnel with latent tuberculosis infection, the guidelines recommend “encouragement of treatment” unless it is contraindicated, and annual symptom screening in those not undergoing treatment.

The guideline committee also advocated for annual tuberculosis education for all health care workers.

The new recommendations were based on a systematic review of 36 studies of tuberculosis screening and testing among health care personnel, 16 of which were performed in the United States, and all but two of which were conducted in a hospital setting.

The authors stressed that recommendations from the 2005 CDC guidelines – which do not pertain to health care personnel screening, testing, treatment, and education – remain unchanged.

One author declared personal fees from the National Tuberculosis Controllers Association during the conduct of the study. Two others reported unrelated grants and personal fees from private industry. No other conflicts of interest were disclosed.

SOURCE: Sosa L et al. MMWR. 2019;68:439-43.

 

 


Severe OSA increases cardiovascular risk after surgery

Wake-up call on OSA surgery risk

Unrecognized severe obstructive sleep apnea is a risk factor for cardiovascular complications after major noncardiac surgery, according to a study published in JAMA.

The researchers state that perioperative mismanagement of obstructive sleep apnea can lead to serious medical consequences. “General anesthetics, sedatives, and postoperative analgesics are potent respiratory depressants that relax the upper airway dilator muscles and impair ventilatory response to hypoxemia and hypercapnia. Each of these events exacerbates [obstructive sleep apnea] and may predispose patients to postoperative cardiovascular complications,” said researchers who conducted the Postoperative vascular complications in unrecognised Obstructive Sleep apnoea (POSA) study (NCT01494181).

They undertook a prospective observational cohort study involving 1,218 patients undergoing major noncardiac surgery, who were already considered at high risk of postoperative cardiovascular events – having, for example, a history of coronary artery disease, stroke, diabetes, or renal impairment. However, none had a prior diagnosis of obstructive sleep apnea.

Preoperative sleep monitoring revealed that two-thirds of the cohort had unrecognized and untreated obstructive sleep apnea, including 11.2% with severe obstructive sleep apnea.

At 30 days after surgery, patients with obstructive sleep apnea had a 49% higher risk of the primary outcome of myocardial injury, cardiac death, heart failure, thromboembolism, atrial fibrillation, or stroke, compared with those without obstructive sleep apnea.

However, this association was largely due to a significant 2.23-fold higher risk among patients with severe obstructive sleep apnea, while those with only moderate or mild sleep apnea did not show a significant increased risk of cardiovascular complications.

Patients in this study with severe obstructive sleep apnea had a 13-fold higher risk of cardiac death, 80% higher risk of myocardial injury, more than sixfold higher risk of heart failure, and nearly fourfold higher risk of atrial fibrillation.

Researchers also saw an association between obstructive sleep apnea and increased risk of infective outcomes, unplanned tracheal intubation, postoperative lung ventilation, and readmission to the ICU.

The majority of patients received nocturnal oximetry monitoring during their first 3 nights after surgery. This revealed that patients without obstructive sleep apnea had significant increases in oxygen desaturation index during their first night after surgery, while those with sleep apnea did not return to their baseline oxygen desaturation index until the third night after surgery.

“Despite a substantial decrease in ODI [oxygen desaturation index] with oxygen therapy in patients with OSA during the first 3 postoperative nights, supplemental oxygen did not modify the association between OSA and postoperative cardiovascular event,” wrote Matthew T.V. Chan, MD, of Chinese University of Hong Kong, Prince of Wales Hospital, and coauthors.

Given that the events were associated with longer durations of severe oxyhemoglobin desaturation, more aggressive interventions such as positive airway pressure or oral appliances may be required, they noted.

“However, high-level evidence demonstrating the effect of these measures on perioperative outcomes is lacking [and] further clinical trials are now required to test if additional monitoring or alternative interventions would reduce the risk,” they wrote.

The study was supported by the Health and Medical Research Fund (Hong Kong), National Healthcare Group–Khoo Teck Puat Hospital, University Health Network Foundation, University of Malaya, Malaysian Society of Anaesthesiologists, Auckland Medical Research Foundation, and ResMed. One author declared grants from private industry and a patent pending on an obstructive sleep apnea risk questionnaire used in the study.

SOURCE: Chan M et al. JAMA 2019;321[18]:1788-98. doi: 10.1001/jama.2019.4783.


 

This study is large, prospective, and rigorous and adds important new information to the puzzle of the impact of sleep apnea on postoperative risk, Dennis Auckley, MD, and Stavros Memtsoudis, MD, wrote in an editorial accompanying this study. The study focused on predetermined clinically significant and measurable events, used standardized and objective sleep apnea testing, and attempted to control for many of the confounders that might have influenced outcomes.

The results suggest that obstructive sleep apnea should be recognized as a major perioperative risk factor, and it should receive the same attention and optimization efforts as comorbidities such as diabetes.
 

Dr. Auckley is from the division of pulmonary, critical care and sleep medicine at MetroHealth Medical Center, Case Western Reserve University, Cleveland, and Dr. Memtsoudis is clinical professor of anesthesiology at Cornell University, New York. These comments are adapted from an editorial (JAMA 2019;321[18]:1775-6). Both declared board and executive positions with the Society of Anesthesia and Sleep Medicine. Dr. Auckley declared research funding from Medtronic, and Dr. Memtsoudis declared personal fees from Teikoku and Sandoz.

Publications
Topics
Sections
Body

 

This study is large, prospective, and rigorous and adds important new information to the puzzle of the impact of sleep apnea on postoperative risk, Dennis Auckley, MD, and Stavros Memtsoudis, MD, wrote in an editorial accompanying this study. The study focused on predetermined clinically significant and measurable events, used standardized and objective sleep apnea testing, and attempted to control for many of the confounders that might have influenced outcomes.

The results suggest that obstructive sleep apnea should be recognized as a major perioperative risk factor, and it should receive the same attention and optimization efforts as comorbidities such as diabetes.
 

Dr. Auckley is from the division of pulmonary, critical care and sleep medicine at MetroHealth Medical Center, Case Western Reserve University, Cleveland, and Dr. Memtsoudis is clinical professor of anesthesiology at Cornell University, New York. These comments are adapted from an editorial (JAMA 2019;231[18]:1775-6). Both declared board and executive positions with the Society of Anesthesia and Sleep Medicine. Dr. Auckley declared research funding from Medtronic, and Dr. Memtsoudis declared personal fees from Teikoku and Sandoz.

Body

 

This study is large, prospective, and rigorous and adds important new information to the puzzle of the impact of sleep apnea on postoperative risk, Dennis Auckley, MD, and Stavros Memtsoudis, MD, wrote in an editorial accompanying this study. The study focused on predetermined clinically significant and measurable events, used standardized and objective sleep apnea testing, and attempted to control for many of the confounders that might have influenced outcomes.

The results suggest that obstructive sleep apnea should be recognized as a major perioperative risk factor, and it should receive the same attention and optimization efforts as comorbidities such as diabetes.
 

Dr. Auckley is from the division of pulmonary, critical care and sleep medicine at MetroHealth Medical Center, Case Western Reserve University, Cleveland, and Dr. Memtsoudis is clinical professor of anesthesiology at Cornell University, New York. These comments are adapted from an editorial (JAMA 2019;231[18]:1775-6). Both declared board and executive positions with the Society of Anesthesia and Sleep Medicine. Dr. Auckley declared research funding from Medtronic, and Dr. Memtsoudis declared personal fees from Teikoku and Sandoz.

Title
Wake-up call on OSA surgery risk
Wake-up call on OSA surgery risk

 

Unrecognized severe obstructive sleep apnea is a risk factor for cardiovascular complications after major noncardiac surgery, according to a study published in JAMA.

The researchers state that perioperative mismanagement of obstructive sleep apnea can lead to serious medical consequences. “General anesthetics, sedatives, and postoperative analgesics are potent respiratory depressants that relax the upper airway dilator muscles and impair ventilatory response to hypoxemia and hypercapnia. Each of these events exacerbates [obstructive sleep apnea] and may predispose patients to postoperative cardiovascular complications,” said researchers who conducted the The Postoperative vascular complications in unrecognised Obstructive Sleep apnoea (POSA) study (NCT01494181).

They undertook a prospective observational cohort study involving 1,218 patients undergoing major noncardiac surgery, who were already considered at high risk of postoperative cardiovascular events – having, for example, a history of coronary artery disease, stroke, diabetes, or renal impairment. However, none had a prior diagnosis of obstructive sleep apnea.

Preoperative sleep monitoring revealed that two-thirds of the cohort had unrecognized and untreated obstructive sleep apnea, including 11.2% with severe obstructive sleep apnea.

At 30 days after surgery, patients with obstructive sleep apnea had a 49% higher risk of the primary outcome of myocardial injury, cardiac death, heart failure, thromboembolism, atrial fibrillation, or stroke, compared with those without obstructive sleep apnea.

However, this association was largely due to a significant 2.23-fold higher risk among patients with severe obstructive sleep apnea, while those with only moderate or mild sleep apnea did not show a significant increased risk of cardiovascular complications.

Patients in this study with severe obstructive sleep apnea had a 13-fold higher risk of cardiac death, 80% higher risk of myocardial injury, more than sixfold higher risk of heart failure, and nearly fourfold higher risk of atrial fibrillation.

Researchers also saw an association between obstructive sleep apnea and increased risk of infective outcomes, unplanned tracheal intubation, postoperative lung ventilation, and readmission to the ICU.

The majority of patients received nocturnal oximetry monitoring during their first 3 nights after surgery. This revealed that patients without obstructive sleep apnea had significant increases in oxygen desaturation index during their first night after surgery, while those with sleep apnea did not return to their baseline oxygen desaturation index until the third night after surgery.

“Despite a substantial decrease in ODI [oxygen desaturation index] with oxygen therapy in patients with OSA during the first 3 postoperative nights, supplemental oxygen did not modify the association between OSA and postoperative cardiovascular event,” wrote Matthew T.V. Chan, MD, of Chinese University of Hong Kong, Prince of Wales Hospital, and coauthors.

Given that the events were associated with longer durations of severe oxyhemoglobin desaturation, more aggressive interventions such as positive airway pressure or oral appliances may be required, they noted.

“However, high-level evidence demonstrating the effect of these measures on perioperative outcomes is lacking [and] further clinical trials are now required to test if additional monitoring or alternative interventions would reduce the risk,” they wrote.

The study was supported by the Health and Medical Research Fund (Hong Kong), National Healthcare Group–Khoo Teck Puat Hospital, University Health Network Foundation, University of Malaya, Malaysian Society of Anaesthesiologists, Auckland Medical Research Foundation, and ResMed. One author declared grants from private industry and a patent pending on an obstructive sleep apnea risk questionnaire used in the study.

SOURCE: Chan MTV et al. JAMA. 2019;321(18):1788-98. doi: 10.1001/jama.2019.4783.

 
