Higher teen pregnancy risk in girls with ADHD

Teenage girls with ADHD may be at greater risk of pregnancy than their unaffected peers, which suggests they may benefit from targeted interventions to prevent teen pregnancy.


A Swedish nationwide cohort study published in JAMA Network Open examined data from 384,103 nulliparous women and girls who gave birth between 2007 and 2014, of whom 6,410 (1.7%) had received treatment for ADHD.

While the overall rate of teenage births was 3%, the rate among women and girls with ADHD was 15.3%, which represents a greater than sixfold higher odds of giving birth below the age of 20 years (odds ratio, 6.23; 95% confidence interval, 5.80-6.68).
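
As a rough illustration of how the reported rates map onto an odds ratio (a crude calculation, not the authors' adjusted model, which controls for covariates and compares against the non-ADHD group rather than the overall rate):

\[
\mathrm{odds}_{\mathrm{ADHD}} = \frac{0.153}{1 - 0.153} \approx 0.181, \qquad
\mathrm{odds}_{\mathrm{overall}} = \frac{0.03}{1 - 0.03} \approx 0.031, \qquad
\mathrm{OR}_{\mathrm{crude}} \approx \frac{0.181}{0.031} \approx 5.8.
\]

That back-of-the-envelope figure sits in the same range as the adjusted odds ratio of 6.23 reported in the study.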

“Becoming a mother at such early age is associated with long-term adverse outcomes for both women and their children,” wrote Charlotte Skoglund, PhD, of the department of clinical neuroscience at the Karolinska Institute in Stockholm and coauthors. “Consequently, our findings argue for an improvement in the standard of care for women and girls with ADHD, including active efforts to prevent teenage pregnancies and address comorbid medical and psychiatric conditions.”

The study also found women and girls with ADHD were significantly more likely to be underweight (OR, 1.29; 95% CI, 1.12-1.49) or have a body mass index greater than 40 kg/m2 (OR, 2.01; 95% CI, 1.60-2.52) when compared with those without ADHD.

They were also six times more likely to smoke, nearly seven times more likely to continue smoking into their third trimester of pregnancy, and had a 20-fold higher odds of alcohol and substance use disorder. Among individuals who had been diagnosed with ADHD, 7.6% continued to use stimulant and nonstimulant ADHD medication during pregnancy, and 16.4% used antidepressants.

Psychiatric comorbidities were also significantly more common among individuals with ADHD in the year preceding pregnancy, compared with those without ADHD. The authors saw a 17-fold higher odds of receiving a diagnosis of bipolar disorder, nearly 8-fold higher odds of a diagnosis of schizophrenia or other psychotic disorder, and 22-fold higher odds of being diagnosed with emotionally unstable personality disorder among women and girls with ADHD versus those without.

The authors commented that antenatal care should focus on trying to reduce such obstetric risk factors in these women, but also pointed out that ADHD in women and girls was still underdiagnosed and undertreated.

Commenting on the association between ADHD and teenage pregnancy, the authors noted that women and girls with ADHD may be less likely to receive adequate contraceptive counseling and less likely to access, respond to, and act on counseling. They may also experience more adverse effects from hormonal contraceptives.

While Swedish youth clinics enable easier and low-cost access to counseling and contraception, the authors called for greater collaboration between psychiatric care clinics and specialized youth clinics to provide adequate care for women and girls with ADHD.

Three authors declared advisory board positions, grants, personal fees, and speakers’ fees from the pharmaceutical sector. No other conflicts of interest were declared.

SOURCE: Skoglund C et al. JAMA Netw Open. 2019 Oct 2. doi: 10.1001/jamanetworkopen.2019.12463


Zostavax proves safe, effective in patients with nonactive SLE

Results are reassuring, but some questions remain

A live-attenuated herpes zoster vaccine can be used in individuals with systemic lupus erythematosus (SLE) if they are not intensively immunosuppressed and their condition is dormant, research suggests.

A paper published in Annals of the Rheumatic Diseases reported the outcomes of a randomized, placebo-controlled trial of the Zostavax herpes zoster vaccine in 90 adults with clinically stable SLE. Participants had to have been on a stable dose of immunosuppressive agents for at least 6 months and have a history of chicken pox or herpes zoster infection.

Chi Chiu Mok, MD, of Tuen Mun Hospital in Hong Kong and coauthors wrote that herpes zoster reactivation has been reported to occur at rates of 6.4-91.4 cases per 1,000 patient-years in individuals with SLE, with consequences including postherpetic neuralgia and even death from disseminated infection. But because Zostavax is a live-attenuated vaccine, it has not been widely used in immunocompromised people.

After a single subcutaneous dose of either the vaccine or placebo, researchers saw a significant increase in anti–varicella zoster virus (VZV) IgG antibodies in vaccinated individuals over 6 weeks. The magnitude of the increase in anti-VZV IgG seen in vaccinated individuals was on par with that previously seen in vaccinated healthy controls, although the authors noted that the absolute increase in values was lower.

“While the reason is not apparent, one contributing factor is the high rate of previous exposure to VZV infection in most participants, which could have led to a higher baseline anti-IgG anti-VZV value that limited its rise after vaccination,” the authors wrote.

In contrast, IgG reactivity declined in those who received the placebo injection, and the difference between the two groups was statistically significant after adjustment for baseline antibody titers.



The study also looked at the cell-mediated immune response to the vaccine and found that the number of interferon-gamma–secreting CD4+ T-cell spots increased in the vaccinated patients but decreased in the placebo arm; by week 6, the count was significantly higher in the vaccine-treated group. The increase in the vaccine-treated patients was again similar to that previously seen in healthy controls.

However, prednisolone use at baseline may have attenuated the vaccine response. Vaccinated patients who were treated with prednisolone at baseline had a lower increase in T-cell spots and lower anti-VZV IgG reactivity after the vaccination than did those not taking prednisolone, although the difference between the two groups was not statistically significant. The study did not see any effect of age, sex, baseline lymphocyte count, disease activity scores, or other factors on response to the vaccine.

None of the patients who received the vaccine withdrew from the study because of serious adverse events. The most common adverse events reported were injection-site redness and pain, which were more common in the vaccine-treated group than in the placebo group. However, these symptoms were mild and resolved by themselves after a few days. Two patients in the vaccine group and one in the placebo group experienced mild or moderate SLE flares.

The authors commented that this was the first randomized, controlled trial examining the safety and immune response of a live-attenuated herpes zoster vaccine in individuals with SLE, and that it showed the vaccine to be safe and well tolerated in those with stable disease who were not on intensive immunosuppressive therapy.

“Despite the increased risk of HZ [herpes zoster] infection, SLE had the lowest HZ vaccination rates among age-eligible subjects, probably because of the concern of vaccine safety, the principle of contraindication to live-attenuated vaccines in immunocompromised hosts, as well as the current ambiguous guidelines for HZ vaccination in SLE,” they wrote.

But they also stressed that their results did not apply to patients with active disease or on more intensive immunosuppression and that longer-term data on the persistence of vaccine immunogenicity was still being collected.

The study was funded by the Hong Kong Research Fund Secretariat. No conflicts of interest were declared.

SOURCE: Mok CC et al. Ann Rheum Dis. 2019 Sep 17. doi: 10.1136/annrheumdis-2019-215925

Commentary by Joan T. Merrill, MD

Probably, like me, you have seen a bit of zoster in our patients with SLE, and rarely we get severe outbreaks in multiple dermatomes or in the eyes or other vulnerable areas in patients on immune suppression. So I think of Zostavax the way I think of shingles per se: The more immune compromised you are, the higher the risk of something bad happening … maybe. But we do know with Zostavax the risk is small.

The study by Chi Chiu Mok et al. selected stable patients on moderate immune suppression, so I think the paper is pretty reassuring about stable patients. And to the extent that this immunization can stave off a significant outbreak at a later time when maybe the person will be on stronger immune-compromising medications, or is just older with the compromise of weaker defenses, prevention would be good.

Shingrix is a lot more effective than Zostavax and does not have the same issue of potentially causing the thing it prevents. But the most likely reason it works so well is that it has an adjuvant. We are generally a lot more concerned about injecting adjuvants in autoimmune patients here in the United States than they are in Europe, where they have more experience with that, but this one is apparently a new adjuvant and has never been used in autoimmune patients, who were excluded from the trials of Shingrix. And a fair number of nonautoimmune patients in the Shingrix trials got autoimmune-like symptoms such as myalgias and fevers. I don’t think we can have full confidence until we figure out just how worried we ought to be about that. In other words, if Shingrix only causes mild/moderate transient flares, then our patients might rationally consider that a fair trade for lifelong protection.

I think in some patients this is an easier decision than others. If somebody is 50 years old and healthy, hasn’t had nephritis or anything bad before (or not in the last 10 years), and is on no immune suppressant or just using stable, modest doses of such therapies, you would probably recommend doing something to avoid getting zoster. And here you can explain the choice to the patient: Zostavax provides good protection but less than Shingrix, is unlikely to make the patient flare, has very low risk of live vaccine causing much trouble in a generally healthy person; Shingrix is more effective overall, has caused some autoimmune symptoms in healthy people, and has unclear risk for a flare in a patient with a diagnosis (but that can be monitored).

For the sicker patients, we just have to weigh the risk of a natural zoster outbreak against the risk of a flare and the risk of disseminated zoster from the Zostavax, which is a pretty small risk but it is there. It’s a discussion you need to have in advance with each patient. Maybe with some patients, it is best to wait for an optimal time for either choice, when there’s not too much disease and not too much immune-compromising medication.

An unsolved issue for herpes zoster vaccination is age. Greater knowledge about how best to vaccinate would go a long way toward bolstering confidence in using the vaccines in patients a bit younger than 50 years, given that zoster does occur in lupus patients at that age.

Joan Merrill, MD, is OMRF Professor of Medicine at the University of Oklahoma Health Sciences Center and a member of the Arthritis & Clinical Immunology Research Program at the Oklahoma Medical Research Foundation, both in Oklahoma City. She is a member of the editorial advisory board of Rheumatology News.


Secondary prevention of osteoporotic fractures lacking


Osteoporotic fractures are responsible for more hospitalizations of Americans than heart attacks, strokes, and breast cancer combined, despite many fractures being preventable, according to research commissioned by the National Osteoporosis Foundation.

The report by independent actuarial firm Milliman examined the economic and clinical burden of new osteoporotic fractures in 2015 in the Medicare fee-for-service population, with data from a large medical claims database.

More than 10 million adults aged 50 years and older in the United States are thought to have osteoporosis, and 43.9% of adults are affected by low bone mass.

This report found that about 1.4 million Medicare fee-for-service beneficiaries experienced more than 1.6 million osteoporotic fractures in that year, which, if extrapolated to include Medicare Advantage beneficiaries, would increase to a total of 2.3 million fractures in 2 million individuals.

The most common types of fractures were of the spine (23%) and hip (17%), although the authors noted that the spinal fracture figure did not account for potential underdiagnosis of vertebral fractures.

Women had a 79% higher rate of osteoporotic fractures than that of men, and one-third of people who experienced at least one osteoporotic fracture were aged 74-85 years.

Dane Hansen and colleagues from Milliman drew particular attention to the lack of secondary prevention in people who had experienced a first osteoporotic fracture. They estimated that 15% of those who had a new osteoporotic fracture experienced one or more subsequent fractures within 12 months, yet only 9% of women received a bone mineral density test within 6 months to evaluate them for osteoporosis.

Overall, 21% of individuals who had a new osteoporotic fracture underwent bone mineral density testing during the fracture episode.



The authors pointed out that their analysis wasn’t able to look at pharmaceutical treatment, and so did not present “a full picture of the overall rate of BMD [bone mineral density] testing and appropriate treatment after a fracture for surviving patients.”

Nearly one in five Medicare beneficiaries experienced at least one new pressure ulcer during the fracture episode, and beneficiaries with osteoporotic fracture were two times more likely than were other Medicare beneficiaries to experience pressure ulcers. “This is significant because research has found that pressure ulcers are clinically difficult and expensive to manage,” the authors wrote. They also saw that nearly 20% of Medicare beneficiaries who experienced an osteoporotic fracture died within 12 months, with the highest mortality (30%) seen in those with hip fracture.

Osteoporotic fractures presented a significant cost burden, with 45% of beneficiaries having at least one acute inpatient hospital stay within 30 days of having a new osteoporotic fracture. The hospitalization rate was as high as 92% for individuals with hip fracture, while 11% of those with wrist fractures were hospitalized within 7 days of the fracture.

The annual allowed medical costs in the 12 months after a new fracture were more than twice the costs of the 12-month period before the fracture in the same individual, and each new fracture was associated with an incremental annual medical cost greater than $21,800.

“An osteoporotic fracture is a sentinel event that should trigger appropriate clinical attention directed to reducing the risk of future subsequent fractures,” the authors said. “Therefore, the months following an osteoporotic fracture, in which the risk of a subsequent fracture is high, provide an important opportunity to identify and treat osteoporosis and to perform other interventions, such as patient education and care coordination, in order to reduce the individual’s risk of a subsequent fracture.”

The report estimated that preventing 5% of subsequent osteoporotic fractures could have saved the Medicare program $310 million just in the 2-3 years after a new fracture, while preventing 20% of subsequent fractures could have saved $1.23 billion. These figures included the cost of the additional bone mineral density testing, but did not account for the increased costs of treatment or fracture prevention.
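
As a rough order-of-magnitude check only (this is not the report's actual savings model, which nets out the cost of additional testing and follows costs over the 2-3 years after a fracture): about 15% of the roughly 1.4 million fee-for-service beneficiaries with a new fracture refractured within 12 months, and preventing 5% of those subsequent fractures at an assumed incremental annual cost of about $21,800 each gives

\[
1{,}400{,}000 \times 0.15 \approx 210{,}000, \qquad
210{,}000 \times 0.05 \times \$21{,}800 \approx \$229 \text{ million per year},
\]

which is consistent in order of magnitude with the report's $310 million estimate.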

“In future analysis, it will be important to net the total cost of the intervention and additional pharmaceutical treatment for osteoporosis against Medicare savings from avoided subsequent fractures to comprehensively measure the savings from secondary fracture prevention initiatives,” the report concluded.

SOURCE: Milliman Research Report, “Medicare Report of Osteoporotic Fractures,” August 2019.


Obesity, moderate drinking linked to psoriatic arthritis

Higher body mass index and moderate – but not heavy – drinking may increase the risk of individuals with psoriasis going on to develop psoriatic arthritis, a study has found.

Around one in five people with psoriasis will develop psoriatic arthritis (PsA), wrote Amelia Green of the University of Bath (England) and coauthors in the British Journal of Dermatology.

Previous studies have explored possible links between obesity, alcohol consumption, or smoking, and an increased risk of developing psoriatic arthritis. However, some of these studies found conflicting results or had limitations such as measuring only a single exposure.

In a cohort study, Ms. Green and her colleagues examined data from the U.K. Clinical Practice Research Datalink for 90,189 individuals with psoriasis, 1,409 of whom were subsequently also diagnosed with psoriatic arthritis.

The analysis showed a significant association between increasing body mass index (BMI) and increasing odds of developing psoriatic arthritis. Compared with individuals with a BMI below 25 kg/m2, those with a BMI of 25.0-29.9 had a 79% greater odds of psoriatic arthritis, those with a BMI of 30.0-34.9 had a 2.10-fold greater odds, and those with a BMI at or above 35 had a 2.68-fold greater odds of developing psoriatic arthritis (P for trend less than .001). Adjustment for potential confounders such as sex, age, duration and severity of psoriasis, diabetes, smoking, and alcohol use slightly attenuated the association, but it remained statistically significant.

Researchers also examined the cumulative effect of lower BMIs over time, and found that over a 10-year period, reductions in BMI were associated with reductions in the risk of developing PsA, compared with remaining at the same BMI over that time.

“Here we have shown for the first time that losing weight over time could reduce the risk of developing PsA in a population with documented psoriasis,” the authors wrote. “As the effect of obesity on the risk of developing PsA may in fact occur with some delay and change over time, our analysis took into account both updated BMI measurements over time and the possible nonlinear and cumulative effects of BMI, which have not previously been investigated.”

Commenting on the mechanisms underlying the association between obesity and the development of PsA, the authors noted that adipose tissue is a source of inflammatory mediators such as adipokines and proinflammatory cytokines, which could lead to the development of PsA. Increasing body weight also could cause microtraumas of the connective tissue between tendon and bone, which may act as an initiating pathogenic event for PsA.


Moderate drinkers – defined as those having 0.1-3.0 drinks per day – had 57% higher odds of developing PsA when compared with nondrinkers, but former drinkers or heavy drinkers did not have an increased risk.

The study also didn’t see any effect of either past or current smoking on the risk of PsA, although there was a nonsignificant interaction with obesity that hinted at increased odds.

“While we found no association between smoking status and the development of PsA in people with psoriasis, further analysis revealed that the effect of smoking on the risk of PsA was possibly mediated through the effect of BMI on PsA; in other words, the protective effect of smoking may be associated with lower BMI among smokers,” the authors wrote.

Patients who developed PsA were also more likely to be younger (mean age of 44.7 years vs. 48.5 years), have severe psoriasis, and have had the disease for a shorter duration.

The study was funded by the National Institute for Health Research, and the authors declared grants from the funder during the conduct of the study. No other conflicts of interest were declared.

SOURCE: Green A et al. Br J Dermatol. 2019 Jun 18. doi: 10.1111/bjd.18227

 

 

Publications
Topics
Sections

Higher body mass index and moderate – but not heavy – drinking may increase the risk of individuals with psoriasis going on to develop psoriatic arthritis, a study has found.

Around one in five people with psoriasis will develop psoriatic arthritis (PsA), wrote Amelia Green of the University of Bath (England) and coauthors in the British Journal of Dermatology.

Previous studies have explored possible links between obesity, alcohol consumption, or smoking, and an increased risk of developing psoriatic arthritis. However, some of these studies found conflicting results or had limitations such as measuring only a single exposure.

In a cohort study, the Ms. Green and her colleagues examined data from the U.K. Clinical Practice Research Datalink for 90,189 individuals with psoriasis, 1,409 of whom were subsequently also diagnosed with psoriatic arthritis.

The analysis showed a significant association between increasing body mass index (BMI) and increasing odds of developing psoriatic arthritis. Compared with individuals with a BMI below 25 kg/m2, those with a BMI of 25.0-29.9 had a 79% greater odds of psoriatic arthritis, those with a BMI of 30.0-34.9 had a 2.10-fold greater odds, and those with a BMI at or above 35 had a 2.68-fold greater odds of developing psoriatic arthritis (P for trend less than .001). Adjustment for potential confounders such as sex, age, duration and severity of psoriasis, diabetes, smoking, and alcohol use slightly attenuated the association, but it remained statistically significant.

Researchers also examined the cumulative effect of lower BMIs over time, and found that over a 10-year period, reductions in BMI were associated with reductions in the risk of developing PsA, compared with remaining at the same BMI over that time.

“Here we have shown for the first time that losing weight over time could reduce the risk of developing PsA in a population with documented psoriasis,” the authors wrote. “As the effect of obesity on the risk of developing PsA may in fact occur with some delay and change over time, our analysis took into account both updated BMI measurements over time and the possible nonlinear and cumulative effects of BMI, which have not previously been investigated.”

Commenting on the mechanisms underlying the association between obesity and the development of PsA, the authors noted that adipose tissue is a source of inflammatory mediators such as adipokines and proinflammatory cytokines, which could lead to the development of PsA. Increasing body weight also could cause microtraumas of the connective tissue between tendon and bone, which may act as an initiating pathogenic event for PsA.


Moderate drinkers – defined as 0.1–3.0 drinks per day ­– had 57% higher odds of developing PsA when compared with nondrinkers, but former drinkers or heavy drinkers did not have an increased risk.

The study also didn’t see any effect of either past or current smoking on the risk of PsA, although there was a nonsignificant interaction with obesity that hinted at increased odds.

“While we found no association between smoking status and the development of PsA in people with psoriasis, further analysis revealed that the effect of smoking on the risk of PsA was possibly mediated through the effect of BMI on PsA; in other words, the protective effect of smoking may be associated with lower BMI among smokers,” the authors wrote.

Patients who developed PsA were also more likely to be younger (mean age of 44.7 years vs. 48.5 years), have severe psoriasis, and have had the disease for a shorter duration.

The study was funded by the National Institute for Health Research, and the authors declared grants from the funder during the conduct of the study. No other conflicts of interest were declared.

SOURCE: Green A et al. Br J Dermatol. 2019 Jun 18. doi: 10.1111/bjd.18227

 

 


FROM THE BRITISH JOURNAL OF DERMATOLOGY


Meta-analysis finds platelet-rich plasma may improve hair growth

Article Type
Changed
Tue, 09/17/2019 - 10:25

 

Treatment with intradermal injections of autologous platelet-rich plasma (PRP) was associated with increased hair density in men and women with androgenetic alopecia, compared with placebo, according to a meta-analysis of seven studies.

In the report of the results, published in the Journal of Cosmetic Dermatology, Gezim Dervishi, from University Hospital Cologne (Germany), and coauthors noted that autologous PRP contains cytokines and growth factors – including platelet‐derived growth factor, vascular endothelial growth factor, and insulinlike growth factor–1 – that are believed to play a role in new hair growth. While the evidence to support the clinical efficacy of this approach is currently limited and PRP is not approved for hair-loss treatment, it is being used in clinical trials and off label for hair restoration in Europe and the United States.

The authors reviewed 13 randomized, controlled trials involving 343 people with pattern hair loss who were randomized to PRP, control treatment, or placebo in a simple-parallel or half-head design and were assessed 3-6 months after starting treatment. The studies were conducted in countries that included the United States, Brazil, Spain, and Italy.

The meta-analysis included seven studies that used change in hair density as the primary outcome. Six of these studies compared PRP with placebo and one compared PRP plus minoxidil or finasteride versus placebo plus minoxidil or finasteride. Five evaluated outcomes 6 months after the intervention, and two studies evaluated outcomes 3 months afterwards.



Of these seven studies, five reported statistically significant increases in hair density in favor of PRP over placebo, while the remaining two found no significant treatment effect. Across the seven studies, the pooled mean difference in hair density between treatment and placebo was 30.35 (95% confidence interval, 1.77-58.93; P less than .00001).
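
The pooled figure above is the kind of estimate produced by inverse-variance meta-analysis. A minimal fixed-effect sketch is shown below, using hypothetical per-study mean differences and standard errors; the published analysis may have used a random-effects model, so this is illustrative only.

```python
import math

# Hypothetical per-study mean differences in hair density and their
# standard errors, for illustration only -- not the trials' actual data.
study_md = [25.0, 42.0, 18.0, 55.0, 30.0, 12.0, 38.0]
study_se = [10.0, 15.0, 8.0, 20.0, 12.0, 9.0, 14.0]

# Inverse-variance (fixed-effect) pooling
weights = [1 / se ** 2 for se in study_se]
pooled_md = sum(w * md for w, md in zip(weights, study_md)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

ci_low = pooled_md - 1.96 * pooled_se
ci_high = pooled_md + 1.96 * pooled_se
print(f"Pooled mean difference = {pooled_md:.2f} "
      f"(95% CI, {ci_low:.2f} to {ci_high:.2f})")
```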

The 13 studies involved applications of platelet-rich plasma one to eight times over treatment periods of 1-5 months. Three studies compared PRP in conjunction with either minoxidil, finasteride, or polydeoxyribonucleotide, compared with those same therapies without PRP, and 10 studies compared PRP with either normal saline or distilled water. Five studies were judged to be at high risk of bias for at least one item, but none of these studies were included in the meta-analysis.

None of the studies identified any significant short-term treatment-related adverse effects, but the authors suggested that future studies look at potential long-term adverse effects of treatment.

“We recommend further [randomized, controlled trials] that should ensure adequate random sequence generation, adequate allocation concealment, blinding of performance and detection, and that should prevent incomplete data and selective reporting,” they wrote.

Funding and conflicts of interest statements were not available.

SOURCE: Dervishi G et al. J Cosmet Dermatol. 2019 Aug 12. doi: 10.1111/jocd.13113.


FROM THE JOURNAL OF COSMETIC DERMATOLOGY


Postdural puncture headache linked to increased risk of subdural hematoma

Article Type
Changed
Wed, 05/06/2020 - 12:33

Postdural puncture headache in women who have undergone neuraxial anesthesia in childbirth may be associated with a small but significant increase in the risk of being diagnosed with intracranial subdural hematoma, research findings suggest.

A cohort study, published online in JAMA Neurology, looked at the incidence of intracranial subdural hematoma within 2 months of delivery in 22,130,815 women, using data from the U.S. Agency for Healthcare Research and Quality’s National Readmission Database.

The overall rate of postdural puncture headaches was 309 per 100,000 deliveries, and the overall incidence of subdural hematoma was 1.5 per 100,000 deliveries. Among the women with postdural puncture headache, however, the unadjusted rate of subdural hematoma was 147 per 100,000. After adjusting for confounding factors, women who experienced postdural puncture headache had a nearly 200-fold higher risk of subdural hematoma (odds ratio, 199; P less than .001), representing an absolute risk increase of 130 per 100,000 deliveries.
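
As a rough, back-of-envelope check of how those summary rates fit together – not a reanalysis of the patient-level data, and the published odds ratio and absolute risk increase were adjusted for confounders, so a crude calculation will not reproduce them exactly – the reported figures can be combined as follows:

```python
# Back-of-envelope check using the reported summary rates (per 100,000
# deliveries). The published OR (199) and absolute risk increase
# (130 per 100,000) were adjusted for confounders, so these crude
# figures will differ somewhat.
pdph_per_100k = 309           # deliveries with postdural puncture headache
sdh_overall_per_100k = 1.5    # subdural hematoma, all deliveries
sdh_with_pdph_per_100k = 147  # subdural hematoma among deliveries with PDPH

# Subdural hematoma cases contributed by the PDPH group, per 100,000 deliveries
sdh_from_pdph = pdph_per_100k * sdh_with_pdph_per_100k / 100_000

# Approximate rate among deliveries without PDPH
sdh_without_pdph_per_100k = (
    (sdh_overall_per_100k - sdh_from_pdph) / (100_000 - pdph_per_100k) * 100_000
)

risk_difference = sdh_with_pdph_per_100k - sdh_without_pdph_per_100k
crude_risk_ratio = sdh_with_pdph_per_100k / sdh_without_pdph_per_100k

print(f"Rate without PDPH ~ {sdh_without_pdph_per_100k:.2f} per 100,000")
print(f"Crude risk difference ~ {risk_difference:.0f} per 100,000")
print(f"Crude risk ratio ~ {crude_risk_ratio:.0f}")
```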

“This was a small absolute increase because of the rarity of this outcome in this population,” wrote Dr. Albert R. Moore of the Royal Victoria Hospital at McGill University, Montreal, and coauthors. “However, this is an important and devastating outcome for a common exposure in young and usually healthy mothers.”

The authors noted that these findings confirmed other reports linking postdural puncture headache and intracranial subdural hematoma. The proposed mechanism connecting the two conditions was that decreased intracranial pressure from cerebrospinal fluid leakages leads to “sagging” of the brain and tension on the veins between the dura and arachnoid, which in turn could trigger a rupture and formation of a subdural hematoma.

Other risk factors for subdural hematoma included coagulopathy, arteriovenous malformation, and delayed blood patch. The investigators also found that obesity was associated with a lower risk of headache after postdural puncture, which might be the result of increased intracranial pressure providing resistance to the development of subdural hematoma.

There was a significant interaction between postdural puncture headache, severe preeclampsia, and chronic hypertension. In the absence of postdural puncture headache, severe preeclampsia and chronic hypertension were both independently associated with significant increases in the risk of subdural hematoma, Dr. Moore and associates noted.

In women who experienced postdural puncture headache, only chronic hypertension was significantly associated with subdural hematoma, they said.

The study was limited in being observational and at risk of misclassification. In addition, there was a risk of surveillance bias in that women with postdural puncture headaches might be more likely to receive brain imaging that would pick up minor subdural hematomas, the investigators said.

The study was supported by McGill University Health Center’s department of anesthesia. The authors reported no conflicts of interest.

SOURCE: Moore A et al. JAMA Neurol. 2019 Sep 16. doi: 10.1001/jamaneurol.2019.2995.



FROM JAMA NEUROLOGY

Vitals

Key clinical point: Postdural puncture headache after neuraxial anesthesia in childbirth may be associated with a small increase in subdural hematoma risk.

Major finding: Among women with postdural puncture headache, the subdural hematoma rate was 147 per 100,000 deliveries, a small but significant absolute increase over the overall rate of 1.5 per 100,000 deliveries.

Study details: A cohort study in 22,130,815 patients.

Disclosures: The study was supported by McGill University Health Center’s department of anesthesia. The authors reported no conflicts of interest.

Source: Moore A et al. JAMA Neurol. 2019 Sep 16. doi: 10.1001/jamaneurol.2019.2995.


Prior antibiotic use lowers checkpoint inhibitor response and survival

Article Type
Changed
Thu, 09/12/2019 - 14:57

 

Prior antibiotic use may be associated with a reduced treatment response to checkpoint inhibitors, and worse outcomes, in patients with cancer, according to investigators.

In a prospective cohort study, researchers followed 196 patients with cancer who were treated with immune checkpoint inhibitors in routine clinical practice.

A total of 22 patients had been treated with a course of broad-spectrum beta-lactam–based antibiotics lasting 7 days or less in the 30 days prior to starting immune checkpoint inhibitor therapy, and 68 patients were taking broad-spectrum beta-lactam–based antibiotics concurrently with their checkpoint inhibitor therapy.

The analysis revealed that prior antibiotic therapy was associated with nearly a 100% greater likelihood of poor response to checkpoint inhibitor therapy (P less than .001) and significantly worse overall survival (2 vs. 26 months). Patients who had been on prior antibiotic therapy were also more likely to stop checkpoint inhibitor therapy because their disease had progressed, and were more likely to die of progressive disease while on checkpoint inhibitors.

However, concurrent antibiotic use did not appear to affect either treatment response to checkpoint inhibitors or overall survival.

The most common indication for both prior and concurrent antibiotic use was respiratory tract infection. The researchers examined whether cancer type might contribute to the association; for example, chronic airway disease in lung cancer might mean a higher likelihood of antibiotic use but also a lower treatment response and worse survival.

They found that the association between prior antibiotic therapy and overall survival was consistent across the 119 patients with non–small cell lung cancer, the 38 patients with melanoma, and the 39 patients with other tumor types.

The association was also independent of the class of antibiotic used, the patient’s performance status, and their corticosteroid use.

“Broad-spectrum ATB [antibiotic] use can cause prolonged disruption of the gut ecosystem and impair the effectiveness of the cytotoxic T-cell response against cancer, strengthening the biologic plausibility underlying the adverse effect of ATB therapy on immunotherapy outcomes,” wrote Dr. David J. Pinato, from Imperial College London, and coauthors in JAMA Oncology.

Addressing the question of whether comorbidities might be the mediating factor, the authors pointed out that the use of antibiotics during checkpoint inhibitor therapy – which was a potential indicator of patients’ status worsening during treatment – was not associated with reduced response to treatment or lower overall survival.

“Although provision of cATB [concurrent antibiotic] therapy appears to be safe in the context of immunotherapy, clinicians should carefully weigh the pros and cons of prescribing broad-spectrum ATBs prior to ICI [immune checkpoint inhibitor] treatment,” they wrote.

The study was supported by the Imperial College National Institute for Health Research Biomedical Research Centre, the Imperial College Tissue Bank, the Imperial Cancer Research U.K. Centre, the National Institute for Health Research, and the Wellcome Trust Strategic Fund. Two authors reported receiving grant funding and personal fees from the pharmaceutical sector unrelated to the study.

SOURCE: Pinato D et al. JAMA Oncol. 2019 Sep 12. doi: 10.1001/jamaoncol.2019.2785.



FROM JAMA ONCOLOGY

Vitals

 

Key clinical point: Patients who take antibiotics prior to checkpoint inhibitor therapy may have a lower treatment response and shorter overall survival.

Major finding: Prior antibiotic use was associated with nearly a 100% greater likelihood of poor response to checkpoint inhibitor therapy.

Study details: A prospective cohort study involving 196 patients receiving checkpoint inhibitor therapy for cancer.

Disclosures: The study was supported by the Imperial College National Institute for Health Research Biomedical Research Centre, the Imperial College Tissue Bank, the Imperial Cancer Research U.K. Centre, the National Institute for Health Research, and the Wellcome Trust Strategic Fund. Two authors reported receiving grant funding and personal fees from the pharmaceutical sector unrelated to the study.

Source: Pinato D et al. JAMA Oncol. 2019 Sep 12. doi: 10.1001/jamaoncol.2019.2785.


No decrease in preterm birth with n-3 fatty acid supplements

Article Type
Changed
Wed, 09/11/2019 - 17:00

 

Long-chain n-3 fatty acid supplements do not appear to either decrease the risk of preterm delivery or increase the risk of late term delivery, according to data published in the New England Journal of Medicine.

A pregnant woman taking pills
Creatas Images

Maria Makrides, PhD, of the South Australian Health and Medical Research Institute, North Adelaide, and coauthors wrote there is evidence that n-3 long-chain polyunsaturated fatty acids play an essential role in labor initiation.

“Typical Western diets are relatively low in n-3 long-chain polyunsaturated fatty acids, which leads to a predominance of 2-series prostaglandin substrate in the fetoplacental unit and potentially confers a predisposition to preterm delivery,” they wrote, adding that epidemiologic studies have suggested associations between lower fish consumption in pregnancy and a higher rate of preterm delivery.

In a multicenter, double-blind trial, 5,517 women were randomized to either a daily fish oil supplement containing 900 mg of n-3 long-chain polyunsaturated fatty acids or vegetable oil capsules, from before 20 weeks’ gestation until 34 weeks’ gestation or delivery.

Among the 5,486 pregnancies included in the final analysis, there was no significant difference between the intervention and control groups in the primary outcome of early preterm delivery, which occurred in 2.2% of pregnancies in the n-3 fatty acid group and 2% in the control group (P = 0.5).
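
A crude, unadjusted two-proportion comparison – assuming an even split of the 5,486 analyzed pregnancies, which is an approximation – illustrates why a 2.2% versus 2% difference sits well within chance variation at this sample size. The trial's published analysis used adjusted relative risks, so the exact figures differ.

```python
import math

# Rough, unadjusted two-proportion comparison using the reported percentages
# and an assumed even split of the 5,486 analyzed pregnancies (an
# approximation); the published analysis used adjusted relative risks.
n_per_group = 5486 // 2
p_fish_oil = 0.022
p_control = 0.020

pooled = (p_fish_oil + p_control) / 2
se = math.sqrt(pooled * (1 - pooled) * (2 / n_per_group))
z = (p_fish_oil - p_control) / se

# Two-sided P value from the normal approximation
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"z = {z:.2f}, two-sided P ~ {p_value:.2f}")
```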

The study also saw no significant differences between the two groups in other outcomes such as the rates of preterm delivery, preterm spontaneous labor, postterm induction, or gestational age at delivery. Similarly, there were no apparent effects of supplementation on maternal and neonatal outcomes including low birth weight, admission to neonatal intensive care, gestational diabetes, postpartum hemorrhage, or preeclampsia.

The analysis did suggest a greater incidence of infants born very large for gestational age – with a birth weight above the 97th percentile – among women in the fatty acid supplement group, but this did not correspond to an increased rate of interventions such as cesarean section or postterm induction.

The authors commented that their finding of more very-large-for-gestational-age babies added to the debate about whether n-3 long-chain polyunsaturated fatty acid supplementation did have a direct impact on fetal growth, although they also noted that it could be a chance finding.

There were also no significant differences between the two groups in serious adverse events, including miscarriage.

The authors noted that the baseline level of n-3 long-chain polyunsaturated fatty acids in the women enrolled in the trial may have been higher than in previous studies.

The study was supported by the Australian National Health and Medical Research Council and the Thyne Reid Foundation, with in-kind support from Croda UK and Efamol/Wassen UK. Two authors declared advisory board fees from private industry, and one also declared a patent relating to fatty acids in research. No other conflicts of interest were declared.

SOURCE: Makrides M et al. N Engl J Med. 2019;381:1035-45.



FROM THE NEW ENGLAND JOURNAL OF MEDICINE

Vitals

 

Key clinical point: No decrease in preterm birth was seen with n-3 fatty acid supplementation during pregnancy, compared with controls.

Major finding: Early preterm delivery occurred in 2.2% of pregnancies in the n-3 fatty acid group and 2% of the control group (P = 0.5).

Study details: A multicenter, double-blind trial in 5,517 women.

Disclosures: The study was supported by the Australian National Health and Medical Research Council and the Thyne Reid Foundation, with in-kind support from Croda UK and Efamol/Wassen UK. Two authors declared advisory board fees from private industry, and one also declared a patent relating to fatty acids in research. No other conflicts of interest were declared.

Source: Makrides M et al. N Engl J Med. 2019;381:1035-45.
 


Chronic hypertension in pregnancy increased 13-fold since 1970

Article Type
Changed
Fri, 09/13/2019 - 10:17

 

The rate of chronic hypertension during pregnancy has increased significantly in the United States since 1970 and is more common in older women and in black women, according to a population-based, cross-sectional analysis.

A pregnant woman is being tested for high blood pressure by her doctor.
Jovanmandic/Getty Images

Researchers analyzed data from more than 151 million women with delivery-related hospitalizations in the United States between 1970 and 2010 and found that the rate of chronic hypertension in pregnancy increased steadily over time from 1970 to 1990, plateaued from 1990 to 2000, then increased again to 2010.

The analysis revealed an average annual increase of 6% – which was higher among white women than among black women – and an overall 13-fold increase from 1970 to 2010. These increases appeared to be independent of rates of obesity and smoking. The findings were published in Hypertension.
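
As a rough arithmetic check – not a calculation taken from the paper – a constant 6% annual increase would compound to roughly a 10-fold rise over the 40 years from 1970 to 2010, while a 13-fold rise implies an average annual increase closer to 6.6%. The gap reflects the fact that the increase was not constant, with a plateau between 1990 and 2000.

```python
# Rough arithmetic check of the reported trend figures (not from the paper).

# A constant 6% annual increase compounded over 1970-2010 (40 years):
growth_at_6_percent = 1.06 ** 40
print(f"1.06^40 ~ {growth_at_6_percent:.1f}-fold")  # about 10-fold

# The constant annual rate implied by an overall 13-fold increase over 40 years:
implied_annual_rate = 13 ** (1 / 40) - 1
print(f"Implied annual increase ~ {implied_annual_rate * 100:.1f}%")  # about 6.6%
```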

The rates of chronic hypertension also increased with maternal age, among both black and white women.

“The strong association between age and rates of chronic hypertension underscores the potential for both biological and social determinants of health to influence risk,” wrote Cande V. Ananth, PhD, from Rutgers University, New Brunswick, N.J., and coauthors. “The period effect in chronic hypertension in pregnancy is thus largely a product of the age effect and the increasing mean age at first birth in the U.S.”

The overall prevalence of chronic hypertension in pregnancy was 0.63%, but it was more than twofold higher in black women, compared with white women (1.24% vs. 0.53%). The authors noted that black women experienced disproportionately higher rates of ischemic placental disease, pregestational and gestational diabetes, preterm delivery, and perinatal mortality, which may be a consequence of higher rates of obesity, social disadvantage, and smoking, and less access to care.

“This disparity may also be related to the higher tendency of black women to develop vascular disease at an earlier age than white women, which may also explain why the age-associated increase in chronic hypertension among black women is relatively smaller than white women,” they wrote. “The persistent race disparity in chronic hypertension is also a cause for continued concern and underscores the role of complex population dynamics that shape risks.”

This was the largest study to evaluate changes in the prevalence of chronic hypertension in pregnancy over time and particularly how the prevalence is influenced by age, period, and birth cohort.

In regard to the 13-fold increase from 1970 to 2010, the researchers suggested that changing diagnostic criteria for hypertension, as well as earlier access to prenatal care, may have played a part. For example, the American College of Cardiology recently modified its guidelines to classify systolic blood pressure of 130-139 mm Hg or diastolic blood pressure of 80-89 mm Hg as stage 1 hypertension, which they noted would increase the prevalence rates of chronic hypertension during pregnancy.

The researchers reported having no outside funding and no conflicts of interest.

SOURCE: Ananth CV et al. Hypertension. 2019 Sept 9. doi: 10.1161/HYPERTENSIONAHA.119.12968.



FROM HYPERTENSION


Pediatric HSCT recipients still risking sunburn

Article Type
Changed
Tue, 09/10/2019 - 12:30

 

Young people who have received allogeneic hematopoietic stem cell transplants (HSCTs) are more likely to wear hats, sunscreen and other sun protection, but still intentionally tan and experience sunburn at the same rate as their peers, new research suggests.

young women sunbathing
Yuri Arcurs/Fotolia

In a survey-based, cross-sectional cohort study, researchers compared sun-protection behaviors and sun exposure in 85 young people aged 21 years or younger who had undergone HSCT and 85 age-, sex-, and skin type–matched controls. The findings were published in Pediatric Dermatology.

HSCT recipients have a higher risk of long-term complications such as skin cancer, for which sun exposure is a major modifiable environmental risk factor.

“Therefore, consistent sun avoidance and protection as well as regular dermatologic evaluations are important for HSCT recipients,” wrote Edward B. Li, PhD, from Harvard Medical School, Boston, and coauthors.

The survey found no significant difference between the transplant and control groups in the amount of intentional sun exposure, such as time spent outside on weekdays and weekends during the peak sun-intensity hours of 10 a.m. to 4 p.m. More than one in five transplant recipients (21.2%) reported spending at least 3 hours a day outside between 10 a.m. and 4 p.m. on weekdays, as did 36.5% of transplant recipients on weekends.

There were also no significant differences between the two groups in time spent tanning, either in the sun or in a tanning bed. Additionally, similar proportions of transplant recipients and controls reported one or more red or painful sunburns in the past year (25.9% vs. 27.1%).

However, transplant patients did practice better sun protection behaviors than did the control group, with 60% reporting that they always wore sunscreen, compared with 29.4% of controls. The transplant recipients were also significantly more likely to wear sunglasses and a hat and to stay in the shade or use an umbrella.

“While these data may reflect that HSCT patients are not practicing adequate sun avoidance, it may also suggest that these long‐term survivors are able to enjoy being outdoors as much as their peers and have a similar desire to have a tanned appearance,” the researchers wrote. “While a healthy and active lifestyle should be encouraged for all children, our results emphasize the need for pediatric HSCT survivors to be educated on their increased risk for UV‐related skin cancers, counseled on avoidance of intentional tanning, and advised on the importance of sun protection behaviors in an effort to improve long-term outcomes.”

The researchers noted that transplant recipients were significantly more likely than controls to have had a full-body skin examination by a health care professional (61.2% vs. 4.7%) and were more likely to have performed a skin self-check or been checked by a partner in the previous year.

The study was supported by the Society for Pediatric Dermatology, the Dermatology Foundation, the National Institutes of Health, and the Dr. Miriam and Sheldon G. Adelson Medical Research Foundation. One author declared a financial interest in a company developing a dermatological product. No other conflicts of interest were declared.

SOURCE: Li EB et al. Pediatr Dermatol. 2019 Aug 13. doi: 10.1111/pde.13984.
