Teen Girls' Self-Reports of Abstinence Unreliable

Major Finding: Adolescent women who reported being abstinent were just as likely as were those who reported being sexually active to acquire a new chlamydia and/or gonorrhea infection.

Data Source: An observational study of 701 black women aged 14-20 years followed up for 2 years.

Disclosures: Dr. Sales reported that she did not have any relevant conflicts of interest.

Adolescent women's self-reports of abstinence are unreliable, even when assessed with contemporary methods designed to reduce bias in reporting, new data show.

In a large study of black adolescent women participating in an HIV-prevention trial, those who reported not having had vaginal sex in the past 6 months were just as likely as those who reported having had it to acquire chlamydia, gonorrhea, or both during that time, according to results reported at the annual meeting of the Society for Adolescent Health and Medicine.

“Marked discordance was observed between adolescents' self-report of abstinence and laboratory-confirmed sexually transmitted diseases [STDs],” commented lead investigator Jessica McDermott Sales, Ph.D. “These findings suggest the need to include biological markers, such as STDs or YcPCRs [Y-chromosome polymerase chain reactions], as objective and quantifiable markers to complement self-reported sexual behavior in evaluating the efficacy of these large-scale STD and HIV prevention or abstinence promotion interventions,” she said.

And, as for clinical care, “given the discrepancies observed, adolescent providers may wish to consider screening adolescents for STDs regardless of self-reported sexual behaviors,” she said, pointing out that self-reported behavior has been the cornerstone of reproductive and sexual health research.

“However, self-report of sexual behaviors is prone to biases,” she said, such as recall bias, whereby recall of past events might be inaccurate, and social desirability bias, whereby respondents might give answers that they think will be more socially acceptable.

Researchers have addressed these issues by using shorter follow-up periods, providing cues to enhance reporting of events, and using audio computer-assisted self-interviews (ACASIs), which not only help overcome literacy barriers but also enhance perceived confidentiality.

Dr. Sales and her colleagues studied black girls and women aged 14-20 years who were recruited from health clinics in downtown Atlanta and enrolled in a parent randomized trial of an HIV prevention intervention. All had reported having unprotected vaginal sex in the past 6 months and were neither pregnant nor married.

At baseline and again at follow-up time points of 6, 12, 18, and 24 months, the women completed ACASIs that asked how many times they had had vaginal sex in the past 6 months and whether they had used a condom every time. Also, self-collected vaginal swabs were tested for STDs.

Any woman found to have an STD was given treatment. “Thus, the STDs presented at the follow-up time points were new STD infections,” noted Dr. Sales of the department of behavioral sciences and health education at Emory University, Atlanta.

At each time point, women were classified as abstinent if they reported not having had vaginal sex in the past 6 months, and, among those reporting sex, as consistent condom users if they reported that they had used a condom every time they had sex.

Results were based on 701 women with an average age of 18 years at baseline. Two-thirds attended school. Eighty percent were currently in a relationship, and the mean length of the relationship was 14.4 months.

The percentage of women who reported abstinence was low, ranging from 3% to 5% at each follow-up time point. But 5%-23% of this group, depending on the time point, tested positive for chlamydia, gonorrhea, or both.

In fact, no significant difference was found between the self-reported abstinent group and the self-reported sexually active group in the rate of these new infections at any of the time points.
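
The abstract did not name the statistical test behind that comparison. As a rough illustration of how such a between-group comparison of infection rates might be run, the sketch below applies a chi-square test to invented counts that merely echo the ranges reported above; none of these numbers are the study's data.

```python
# Hypothetical illustration only -- these counts are NOT the study's data.
# A chi-square test is one standard way to compare infection rates between
# two self-reported behavior groups at a single follow-up time point.
from scipy.stats import chi2_contingency

# Rows: self-reported abstinent, self-reported sexually active.
# Columns: positive for chlamydia/gonorrhea, negative.
table = [
    [4, 26],    # invented: ~30 "abstinent" women, about 13% positive
    [90, 581],  # invented: ~671 sexually active women, about 13% positive
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A p-value well above .05, as these invented counts would give, mirrors the
# reported lack of a significant difference between the two groups.
```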

The findings were essentially the same in an additional analysis comparing a combined group of women who reported abstinence or consistent condom use with women who reported inconsistent condom use.

The explanation for the discordance between self-reported abstinence and STD acquisition is unclear, according to Dr. Sales.

The study used ACASIs, calendars, and other reminders, which should have reduced some sources of reporting bias. Also, the HIV intervention being tested focused on condom use, not abstinence, and all women were sexually active at baseline, so it is unlikely that they felt compelled to misreport abstinence because of the trial. But perceived social acceptability is a possibility.

“Further research is needed to identify factors associated with or strategies to increase the accuracy of self-reported sexual behaviors,” she concluded.

Women Are Seldom Counseled About Weight Gain During Pregnancy

VANCOUVER, B.C. – When it comes to counseling women about weight gain during pregnancy, there is plenty of room for improvement, new data suggest.

In a survey of more than 300 pregnant women, less than a third reported being counseled on the topic, researchers reported at the annual meeting of the Society of Obstetricians and Gynaecologists of Canada. And even fewer, merely an eighth, were counseled correctly about how much weight to gain.

In likely related findings, three-fourths of women who were overweight or obese before conceiving planned to gain more weight than was recommended for them in guidelines.

"A lack of reported counseling has been associated in the literature with inappropriate weight gain, both excessive and inadequate," said lead investigator Dr. Sarah McDonald, an obstetrician-gynecologist at McMaster University in Hamilton, Ont. "So these findings were very concerning for us."

She noted that most women who were approached agreed to participate in the survey and were comfortable when it came to discussing weight. Therefore, "it appeared unlikely that the lack of reported counseling was due to patient-driven factors, apart from possibly forgetting."

Interestingly, a staggered companion survey of the providers had dramatically different findings, showing high reported rates of counseling. "It was like I was surveying people on a different planet," she commented. "We think we are doing very well," yet there is an obvious discrepancy that is as yet unexplained.

Citing the obesity epidemic, Dr. McDonald endorsed repeated counseling of women about weight, both before and during pregnancy.

"Obviously, an optimal BMI [body mass index] prepregnancy is ideal, but that’s not the situation where most of us come into contact with our patients – it’s when they are already pregnant. Then, I think talking about optimal gestational weight gain to not compound the problems of overweight and obesity is important," she said. "But given the size of the [obesity] epidemic, [the approach has] got to be multipronged."

In 2009, the U.S. Institute of Medicine released new recommendations regarding gestational weight gain, tailored to prepregnancy BMI, that have been adopted by Canada and other countries.
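
The article does not list those ranges. For context, the 2009 IOM recommendations tie total gestational weight gain to prepregnancy BMI category; the minimal lookup sketch below uses the published ranges in pounds, with the helper function and its name supplied purely for illustration.

```python
# For context only: the 2009 IOM total gestational weight gain ranges by
# prepregnancy BMI category (pounds). The lookup helper is illustrative.
IOM_2009_RANGES_LB = {
    "underweight (BMI < 18.5)":      (28, 40),
    "normal weight (BMI 18.5-24.9)": (25, 35),
    "overweight (BMI 25.0-29.9)":    (15, 25),
    "obese (BMI >= 30.0)":           (11, 20),
}

def recommended_gain_lb(prepregnancy_bmi: float) -> tuple[int, int]:
    """Return the (low, high) recommended total gain in pounds."""
    if prepregnancy_bmi < 18.5:
        return IOM_2009_RANGES_LB["underweight (BMI < 18.5)"]
    if prepregnancy_bmi < 25.0:
        return IOM_2009_RANGES_LB["normal weight (BMI 18.5-24.9)"]
    if prepregnancy_bmi < 30.0:
        return IOM_2009_RANGES_LB["overweight (BMI 25.0-29.9)"]
    return IOM_2009_RANGES_LB["obese (BMI >= 30.0)"]

# Example: the study population's mean prepregnancy BMI of 25.1 falls in the
# overweight category, where the recommended total gain is 15-25 lb.
print(recommended_gain_lb(25.1))  # (15, 25)
```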

"However, previous studies done in the era of the 1990 guidelines have shown that only about 30% to 40% of pregnant women gained the appropriate amount of weight during pregnancy," Dr. McDonald noted. "And we were curious what was going on in the era of the new guidelines."

The investigators surveyed 310 women (94% of those approached) who made at least one visit to representative Hamilton prenatal clinics, other than for pregnancy diagnosis, and currently had a live, singleton gestation. The women’s mean age was 30 years, and the median gestational age was 33.0 weeks. Fully 74% were white, and for 43%, the birth would be their first. They had a mean prepregnancy BMI of 25.1 kg/m².

"Interestingly enough, 84% of the women reported that they were either comfortable or very comfortable talking about weight-related issues with their care provider, despite the fact that the mean BMI [in this study] is already in the overweight category prepregnancy," Dr. McDonald observed.

Only 29% of the women reported that their provider counseled them to gain a specific amount or range of weight, and for just 12% overall, that amount or range was correct according to the new guidelines. Additionally, only about a quarter of women reported being told that there were risks associated with gaining too much weight or too little weight during pregnancy.

The median number of prenatal visits before the survey was 10 for the study population, she pointed out, and "so there were multiple opportunities for discussion about weight gain."

"We wondered, are clinicians just too busy to be talking about weight and weight-related matters, and nutrition, and preventive-type medicine?" said Dr. McDonald. Yet, nearly all of the women (97%) reported being counseled to take a vitamin.

When asked how much weight they planned to gain during pregnancy, only 12%-54% of women, depending on prepregnancy BMI category, cited an amount within the guideline-recommended range for them. In particular, in a finding that she described as "alarming," 75% of overweight and obese women were planning to gain more weight than was recommended for them.

The proportion of women counseled about weight gain differed by the type of provider who delivered the majority of a woman’s pregnancy care; it was 40% for midwives, 24% for obstetricians, 23% for general practitioners, and 28% for other providers. The proportion counseled correctly showed a similar pattern, but the differences were not significant.

The investigators are still analyzing data on whether women had been counseled about weight in a previous pregnancy, which might have led providers to assume they already had the information, according to Dr. McDonald. But "that is a dangerous assumption, it would appear, based on our results."

Dr. McDonald reported that she had no relevant financial disclosures.

Major Finding: Only 29% of women were counseled about gaining a specific amount or range of weight during pregnancy, and 12% were counseled correctly about how much to gain.

Data Source: A cross-sectional survey of 310 pregnant women with a live, singleton gestation, who visited prenatal clinics.

Disclosures: Dr. McDonald reported that she had no relevant financial disclosures.

Interpregnancy Interval Linked to Rate of Congenital Anomalies

VANCOUVER, B.C. – The risk of congenital anomalies for a given pregnancy varies according to the time elapsed since the last pregnancy, a retrospective population-based cohort study of more than 46,000 women has shown.

Study results, reported at the annual meeting of the Society of Obstetricians and Gynaecologists of Canada, showed that the rate of congenital anomalies was lowest when the interpregnancy interval was 12-17 months and increased with both shorter and longer intervals. The pattern was similar for folate-dependent and folate-independent anomalies individually.

"A J-shaped relationship exists between interpregnancy interval and congenital anomalies," said principal investigator Dr. Innie Chen, a resident in the department of obstetrics and gynecology at the University of Alberta, Edmonton. "The observation that long intervals were associated with congenital anomalies as well as the preservation of the association for folate-independent anomalies suggests that the mechanism of the observed effect is unlikely to be mediated by folate deficiency alone.

"To date and to our knowledge, this is the most comprehensive data available on this topic. The implications of this study are broad and touch on prenatal risk assessment, prenatal counseling, and future recommendations regarding birth spacing and nutritional supplementation," she said.

But she also cautioned that it could be problematic to apply the findings to individual women asking when the best time is to conceive again in order to minimize risk.

"This is an epidemiological study. Decisions for individuals depend on a lot of things, such as where they are in their life and their career situation," she explained. "But I think this data adds to the growing literature about the effect of interpregnancy interval and adverse perinatal outcomes, which we see again and again. Compared to 50 or 100 years ago, we have much better contraception, so I think it is within our control."

A variety of adverse perinatal outcomes – preterm birth, small for gestational age, low birth weight, and perinatal death – have shown a J-shaped association with interpregnancy interval.

"The most-often-cited postulated mechanism for the observed effect is a folate deficiency hypothesis, which is based on the observation that maternal serum levels are very low in the postpartum period," Dr. Chen said.

A previous retrospective cohort study found an association between both short and long interpregnancy intervals and major congenital malformations (Contraception 2009;80:512-8). But that study did not evaluate specific types of anomalies.

Dr. Chen and her colleagues began with data from the Alberta Perinatal Health Program Database, which collects information on all hospital and midwife births, and all terminations after 20 weeks’ gestation in the northern part of the province.

They identified women who had a singleton delivery between 1999 and 2007 (the post–folate food fortification era, so that results would be applicable today) and who did not have a miscarriage between their first and second births (so that the interpregnancy interval was more reliable).

They then linked that data with data from other provincial databases to obtain more comprehensive maternal information and ascertain anomalies.

The working data set consisted of 46,559 pregnant women. The interpregnancy interval was 6-59 months for 90% of them.

Most of the women were aged 20-34 years (83%) and para 2 (88%) at the time of the second delivery, and most of their infants had a gestational age of at least 37 weeks (93%) and a birth weight of at least 2,500 g (96%).

The rate of congenital anomalies did not vary significantly according to maternal age, maternal weight, smoking in pregnancy, or socioeconomic status, Dr. Chen reported.

For interpregnancy intervals of 59 months or less, there was a J-shaped association between the interval and the rate of congenital anomalies. The rate was lowest, at 1.9%, when the interval was 12-17 months.

It rose to a high of 2.5% when the interval was 0-5 months and 2.4% when the interval was 24-59 months. The corresponding odds ratios were 1.35 and 1.28, respectively.
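
The reported odds ratios presumably come from the investigators' analysis, but a crude odds ratio computed directly from the quoted anomaly rates lands in the same range, as the illustrative sketch below shows; it ignores any adjustment the investigators may have applied.

```python
# Illustration only: crude odds ratios from the quoted anomaly rates,
# ignoring any adjustment the investigators' analysis may have applied.
def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1.0 - p)

rate_reference = 0.019  # 12-17 month interval (lowest rate)
rate_short = 0.025      # 0-5 month interval
rate_long = 0.024       # 24-59 month interval

or_short = odds(rate_short) / odds(rate_reference)  # about 1.32
or_long = odds(rate_long) / odds(rate_reference)    # about 1.27

print(f"crude OR, 0-5 vs 12-17 months:   {or_short:.2f}")
print(f"crude OR, 24-59 vs 12-17 months: {or_long:.2f}")
# Both sit close to the reported odds ratios of 1.35 and 1.28.
```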

The pattern was similar for folate-dependent anomalies (neural tube defects, cleft lip and palate, cardiovascular defects, urinary tract anomalies, and limb defects) and for folate-independent anomalies individually.

In addition, an interval of 0-5 months was associated with increased odds of specific anomalies, such as neural tube defects and heart defects, but not significantly so.

"We believe these results to be valid as they are consistent with and corroborate existing studies in the literature," said Dr. Chen.

"Future directions for research include changing the databases to capture more information on all terminations and folate supplementation, combining the databases for more statistical power, and checking other postulated mechanisms for the observed effect," she said.

Dr. Chen reported that she had no relevant financial disclosures.

Major Finding: For interpregnancy intervals of 59 months or less, there was a J-shaped association between the interval and the rate of congenital anomalies. The rate was lowest, at 1.9%, when the interval was 12-17 months. It rose to a high of 2.5% when the interval was 0-5 months and 2.4% when the interval was 24-59 months. The corresponding odds ratios were 1.35 and 1.28, respectively.

Data Source: A retrospective population-based cohort study of 46,559 pregnant women.

Disclosures: Dr. Chen reported that she had no relevant financial disclosures.

Adjuvant XELOX Improves Outcomes in Gastric Cancer

CHICAGO – Patients with gastric cancer have better outcomes if they are given adjuvant capecitabine plus oxaliplatin after undergoing curative resection that includes an extended lymph-node dissection, according to results of the CLASSIC trial from China, South Korea, and Taiwan.

Investigators randomized 1,035 patients to either simple observation or XELOX (the combination of capecitabine plus oxaliplatin) after surgery. A full analysis was performed early because of interim efficacy findings in favor of the chemotherapy.

Trial results, which were reported at the annual meeting of the American Society of Clinical Oncology, showed that patients in the XELOX arm had a 44% reduction in the risk of recurrence. Their 3-year, disease-free survival rate (the primary end point) was 74%, compared with 60% in the observation-only group (hazard ratio, 0.56; P less than .0001). The benefit was similar across patients with stage II, IIIA, and IIIB disease.

Additionally, early data showed a 26% reduction in the 3-year risk of death, although this difference is not yet statistically significant.
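
The percentage reductions quoted here are simply one minus the hazard ratio. The minimal sketch below reproduces that arithmetic for the disease-free survival result (hazard ratio, 0.56) and for the overall survival trend reported further down (hazard ratio, 0.74).

```python
# One minus the hazard ratio gives the quoted relative risk reduction.
def relative_reduction(hazard_ratio: float) -> float:
    return 1.0 - hazard_ratio

print(f"disease-free survival: HR 0.56 -> {relative_reduction(0.56):.0%} lower risk")  # 44%
print(f"overall survival:      HR 0.74 -> {relative_reduction(0.74):.0%} lower risk")  # 26%
```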

"The CLASSIC [Capecitabine and Oxaliplatin Adjuvant Study in Stomach Cancer] trial met its primary end point," said Dr. Yung-Jue Bang, presenting findings on behalf of his coinvestigators. "CLASSIC demonstrates superior efficacy of adjuvant XELOX vs. observation alone following D2 [extended] lymph-node dissection. The data presented support the use of adjuvant XELOX for gastric cancer."

The positive findings for chemotherapy in this trial contrast with negative findings of similar trials that have been conducted in Western countries, Dr. Bang acknowledged. He speculated that small sample sizes (which limit statistical power) and the lesser extent of surgery in the latter trials explain the difference.

"We need some kind of good surgery to prove the effect of adjuvant chemotherapy," he elaborated on the latter point. "For patients who receive [inadequate] surgery, we may need radiotherapy to compensate" for any remaining locoregional disease."

Another recent adjuvant trial, ACTS-GC (Adjuvant Chemotherapy for Gastric Cancer With S-1), which was conducted in Japan, found that monotherapy with S-1 (an oral fluoropyrimidine that has not been approved in the United States) provided similar benefit (NEJM 2007;357:1810-20), an attendee pointed out. Should oncologists select S-1 or XELOX, and is a randomized, head-to-head comparison warranted?

"It is impossible to compare the results of this study with that of ACTS-GC," asserted Dr. Bang, an oncologist at Seoul (South Korea) National University. But he did note that in the ACTS-GC trial, the benefit for patients with stage III disease was uncertain.

"So at this time, my suggestion is we can consider doublet [XELOX] especially for stage III patients," he said. "I don’t want to do another study comparing S-1 and XELOX because we have to move forward."

Discussant Dr. Florian Lordick, an oncologist with the Klinikum Braunschweig (Germany), said that a key question is whether the CLASSIC trial results can be transferred to Western countries. "I would answer [that] there are some caveats," he commented.

Those caveats include the comparatively older age of patients with gastric cancer in Western countries, which might reduce tolerance for adjuvant chemotherapy; the greater prevalence of proximal cancers, for which neoadjuvant chemotherapy has shown benefit; and the less-frequent use of D2 resection.

"D2 resection was mandatory in the CLASSIC trial, and a median of 42 lymph nodes – I repeat, 42 lymph nodes – [was] examined. ... This is not uniformly the standard in many Western centers," he pointed out. For example, in the U.S. Intergroup 0116 and U.K. MAGIC trials in gastric cancer, only 10% and 41% of resections, respectively, were D2 resections.

"So one could ask the question, does the surgical approach determine the optimal adjuvant treatment strategy?" Dr. Lordick said. "We have seen compelling results for adjuvant chemotherapy following radical resection, D2 resection, which is the standard of care in Asia. For those centers that perform more subradical resection ... the addition of adjuvant radiation makes sense."

Patients were eligible for the CLASSIC trial if they had stage II, IIIA, or IIIB gastric cancer, had undergone a D2 dissection in the preceding 6 weeks with neither macroscopic nor microscopic evidence of residual disease, and had not received any chemotherapy or radiation therapy.

Sufficiency of extent of surgery was rigorously ensured through quality assurance meetings, the use of a standard operating procedure for surgical technique, and a requirement of photographic documentation, Dr. Bang noted.

The patients were assigned in nearly equal numbers to observation or eight cycles (6 months) of the XELOX regimen, consisting of capecitabine (Xeloda) 1,000 mg/m² b.i.d. on days 1-14 plus oxaliplatin (Eloxatin) 130 mg/m² on day 1 of 3-week cycles.
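
Because both drugs are dosed per square meter of body surface area, the absolute doses scale with patient size. The sketch below works through that arithmetic for a hypothetical body surface area of 1.7 m²; the BSA value is invented for illustration, and in practice doses are typically rounded to available tablet and vial strengths.

```python
# Worked example of the per-patient arithmetic for the XELOX doses above.
# The body surface area of 1.7 m^2 is a hypothetical value for illustration.
bsa_m2 = 1.7

capecitabine_per_dose_mg = 1000 * bsa_m2              # 1,700 mg, taken twice daily on days 1-14
capecitabine_daily_mg = 2 * capecitabine_per_dose_mg  # 3,400 mg/day
oxaliplatin_day1_mg = 130 * bsa_m2                    # 221 mg on day 1 of each 3-week cycle

print(f"capecitabine: {capecitabine_per_dose_mg:.0f} mg per dose, "
      f"{capecitabine_daily_mg:.0f} mg/day on days 1-14")
print(f"oxaliplatin:  {oxaliplatin_day1_mg:.0f} mg on day 1 of each 21-day cycle")
```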

"We chose 3-year disease-free survival as our primary end point because most relapses in gastric cancer occur within 2 or 3 years, and the survival after relapse is around 1 year," he commented. "In addition, there is evidence that 3-year disease-free survival is a surrogate end point for 5-year overall survival."

The patients had a median age of about 56 years, and 71% were male. The median time between surgery and randomization was 1.12 months.

At the trial’s preplanned interim analysis, the median follow-up was 34.4 months. The median number of cycles of therapy received was eight for both capecitabine and oxaliplatin. The median dose intensity was 85% and 98%, respectively.

"The safety of adjuvant XELOX in gastric cancer was consistent with the known safety profile of XELOX, with no new or unexpected findings," Dr. Bang reported. The rate of grade 3/4 adverse events was 54% in the XELOX arm and 6% in the observation arm. Two patients in the former arm died of treatment-related causes.

In intention-to-treat analyses, the 3-year disease-free survival benefit was apparent. "The two curves separated early and the difference was well maintained," he commented.

In subgroup analyses, hazard ratios consistently favored XELOX. Analyses according to histologic tumor type are still ongoing, but HER2 testing was not done in the study.

There was also a trend toward a better 3-year rate of overall survival with XELOX (HR, 0.74; P = .0775). "The overall survival curves started to separate at 24 months; however, at this time point, the data are not mature enough," he said. "Longer follow-up is needed to determine the effect of adjuvant XELOX on overall survival."

The trial was sponsored by Sanofi-Aventis and Roche. Dr. Bang reported that he is a consultant to and receives honoraria from Roche. Dr. Lordick reported that he is a consultant to Amgen and Ganymed; receives honoraria from Amgen, Fresenius, Merck Serono, Pfizer, and Roche; and receives research funding from Fresenius, GlaxoSmithKline, Merck Serono, and Sanofi-Aventis.

Shoulder Dystocia Protocol Reduces Injuries: The rate of obstetric brachial plexus injury fell by nearly three-fourths in this study.

Article Type
Changed
Tue, 08/28/2018 - 09:22
Display Headline
Shoulder Dystocia Protocol Reduces Injuries: The rate of obstetric brachial plexus injury fell by nearly three-fourths in this study.

Major Finding: The rate of obstetric brachial plexus injury in cases of shoulder dystocia fell from 40% before implementation of the Code D protocol to 14% afterward (P less than .01).

Data Source: A retrospective cohort study of 11,862 vaginal deliveries of singleton, live born infants.

Disclosures: Dr. Inglis did not report any relevant financial disclosures.

SAN FRANCISCO – A simple, standardized protocol for managing shoulder dystocia, called Code D, reduced the incidence of obstetric brachial plexus injury, according to a study reported at the annual meeting of the Society for Maternal-Fetal Medicine.

Investigators retrospectively assessed the impact of the protocol – which entails mobilization of experienced staff, a hands-off pause for assessment, and varied maneuvers – in a cohort of nearly 12,000 vaginal deliveries.

Study results showed that with use of the protocol, the rate of obstetric brachial plexus injury (Erb's palsy) among cases of shoulder dystocia fell by nearly three-fourths, from 40% before the protocol's implementation to 14% afterward.

“A standardized and simple protocol to manage shoulder dystocia appears to reduce the risk of Erb's palsy,” said lead investigator Dr. Steven R. Inglis.

“We were unable to tell which part of the protocol really was helping us,” he added, so further research is needed to determine the responsible components and maneuvers.

Rates of both shoulder dystocia and brachial plexus injury appear to be on the rise, in part because of increasing maternal obesity and diabetes, as well as increasing fetal macrosomia, according to Dr. Inglis, chairman of the department of ob.gyn. at the Jamaica (N.Y.) Hospital Medical Center.

These complications not only can be associated with long-term morbidity, but also account for a substantial share of obstetricians' liability payouts, according to Dr. Inglis.

Many strategies for managing shoulder dystocia have been introduced, but few of them have been studied to assess their impact on important neonatal outcomes, he said.

Dr. Inglis and his colleagues determined the rate of brachial plexus injury at Jamaica Hospital Medical Center before and after implementation of the Code D shoulder dystocia protocol. The protocol emphasized a stepwise team approach to management, conducted in a calm and relaxed environment.

Code D training was provided to all labor and delivery staff including attending and resident physicians, midwives, and nurses. “I don't think anybody else has really included nurses,” he commented. “I think they were a key part of it.”

Training included didactic presentations followed by hands-on practice with a manikin. “Everybody had to go through shoulder dystocia once or twice and get it done right according to our protocol,” Dr. Inglis explained.

When the staff diagnosed dystocia (tight or difficult shoulders, or the so-called turtle sign, requiring additional maneuvers to achieve delivery), they activated the Code D protocol, which summoned to the room the most experienced available obstetrician, as well as an anesthesiologist, a neonatologist, and a nurse.

Staff were taught, first, to assess – using a hands-off pause during which there was no maternal pushing, application of fundal pressure, or head traction – the orientation of the infant's back and shoulders, and to announce it to the delivery team.

This hands-off period lasted just a few seconds, according to Dr. Inglis. “You basically want to stop, take a deep breath, collect yourself, make sure you are following the protocol, and then go on.”

Staff then began one of several maneuvers performed in an order of their choice, including rotating the shoulders to the oblique position, changing maternal position, implementing the corkscrew maneuver, and delivering the posterior arm.

“Each should last no longer than 30 seconds, and you could go back to a maneuver if it didn't work the first time,” Dr. Inglis said. Suprapubic pressure also could be used.

To assess the impact of the Code D protocol, the investigators retrospectively reviewed medical records for mothers and their singleton, live born, nonbreech infants delivered vaginally between August 2003 and December 2009. Analyses were based on 6,269 deliveries in the pretraining period before September 2006, and 5,593 deliveries in the posttraining period.

Study results showed that the rate of shoulder dystocia did not differ significantly between periods: This complication occurred in 83 deliveries (1.32%) in the pretraining period and in 75 deliveries (1.34%) in the posttraining period. However, the percentage of cases of shoulder dystocia that resulted in brachial plexus injury was 40% in the pretraining period, compared with just 14% in the posttraining period.

Among the cases of shoulder dystocia, those in the pretraining period had a higher maternal body mass index (33.4 vs. 30.3 kg/m2).

But in a logistic regression analysis, use of the shoulder dystocia protocol was still associated with a reduced risk of obstetric brachial plexus injury.

The interval between delivery of the infant's head and body in cases of shoulder dystocia was longer in the posttraining period than in the pretraining period (2.0 minutes vs. 1.5 minutes).

“We wanted everyone to go slowly, so we were actually happy to see that the head-body interval went up,” commented Dr. Inglis. “That certainly didn't seem to worsen the risk of Erb's palsy.”

Study results also showed that staff were more likely to use the Rubin maneuver and posterior arm delivery in the posttraining vs. pretraining period, and were less likely to use the McRoberts maneuver.

New Heart Allocation Algorithm a Success

Article Type
Changed
Tue, 12/04/2018 - 09:32
Display Headline
New Heart Allocation Algorithm a Success

SAN DIEGO – A new allocation algorithm that is designed to improve regional sharing of donor hearts with sicker patients before they are allocated locally to less-sick patients appears to be having the intended effects, according to a national cohort study.

In the study of nearly 12,000 adult patients who were wait-listed for primary heart transplantation in 2004-2009 in the United States, those who were wait-listed after the new algorithm was implemented were 17% less likely to die on the waiting list or to become too sick for transplantation, Dr. Tajinder P. Singh reported at the meeting.

Moreover, this benefit was achieved without any increase in the rate of in-hospital mortality in transplant recipients, even though they were sicker on average.

“The risk of dying on the heart transplant [waiting list] or becoming too sick for transplant has declined since the change in allocation algorithm in 2006,” said Dr. Singh, a pediatric cardiologist at Children's Hospital Boston. And reassuringly, “the shift in hearts to sicker transplant candidates has not resulted in higher early posttransplant mortality.”

These findings suggest that the new algorithm has been effective “not only from a utilitarian view, which means most benefit for most people, but even from the fairness or justice perspective,” he commented, because the hearts are going to sicker people.

“The demand for donor hearts continues to exceed their supply,” he said, giving background to the study. “The United Network for Organ Sharing has periodically modified the allocation algorithm in the United States” to improve waiting list outcomes.

The last such modification, implemented in July 2006, expanded the sharing of these scarce organs across a geographic region, making them available first to the sickest patients (those with status 1A or 1B) in a region before allocating them locally to less-sick patients.

The investigators studied all patients aged 18 years or older who were placed on the waiting list for primary heart transplantation between July 1, 2004, and June 30, 2009, and who were listed for heart-only transplantation.

For comparison, the patients were split according to when they were listed into “era 1” (before the date of implementation of the new algorithm) and “era 2” (after that date). Study results were based on 11,864 patients in total; 38% were listed in era 1 and 62% were listed in era 2.

Patients in the two eras were similar with respect to most sociodemographic and medical factors, except that those in era 2 were more likely to be aged 60 years or older (32% vs. 28%), to receive mechanical support (14% vs. 13%), and to be sicker, as indicated by having a transplantation status of 1A (20% vs. 19%) or 1B (38% vs. 32%), for instance.

Overall, 13% of the patients studied either died or had a worsening of their condition that prevented transplantation while they were on the waiting list, the study's primary end point, Dr. Singh reported.

Before statistical adjustment, patients in era 2 were 14% less likely than those in era 1 to die or worsen while on the wait list (hazard ratio, 0.86). And this benefit was evident in both status 1A patients and status 1B patients individually.

After adjustment for numerous potential confounders, patients in era 2 were 17% less likely to die or worsen while on the wait list (HR, 0.83). This significant benefit was similar in most subgroups, except that by race, it was mainly limited to white patients.
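
For readers who want to see how the percentages in this report map onto the hazard ratios, the sketch below applies the usual 1 - HR shorthand; it is illustrative only and simply restates the figures already given.

```python
# Illustrative only: the "X% less likely" phrasing above is the usual
# shorthand for a hazard ratio below 1.0 (percent reduction = 1 - HR).
def percent_lower(hazard_ratio: float) -> int:
    return round((1.0 - hazard_ratio) * 100)

print(percent_lower(0.86))  # 14, the unadjusted estimate quoted above
print(percent_lower(0.83))  # 17, the adjusted estimate
```

Strictly speaking, a hazard ratio compares event rates over time rather than simple proportions, so the percentage is a reporting convention rather than an exact risk difference.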

Other risk-reducing factors included having an implantable cardioverter defibrillator (HR, 0.87) and having a continuous-flow left ventricular assist device (HR, 0.56).

Overall, 65% of the patients ultimately underwent transplantation. Compared with those in era 1, era 2 transplant recipients had a significantly shorter median wait time before receiving a heart (55 vs. 63 days) and were more likely to be status 1A at transplantation (48% vs. 37%).

Dr. Singh reported having no conflicts of interest related to the research.

HRT Post Oophorectomy Adds No Breast Cancer Risk

Article Type
Changed
Fri, 12/07/2018 - 14:07
Display Headline
HRT Post Oophorectomy Adds No Breast Cancer Risk

CHICAGO – Women who have a BRCA mutation and undergo prophylactic oophorectomy can use hormone replacement therapy to control menopausal symptoms – at least in the short term – without experiencing any increase in the risk of breast cancer, new data suggest.

In an observational cohort study of more than 1,200 BRCA carriers, roughly half of those who underwent risk-reducing salpingo-oophorectomy also used hormone replacement therapy (HRT). The average duration of follow-up was about 3-5 years.

    Dr. Lynn Hartmann

Study results, reported at the annual meeting of the American Society of Clinical Oncology, showed that oophorectomy reduced breast cancer risk as intended, and that HRT users after oophorectomy did not have an elevated risk of breast cancer, compared with nonusers.

"While further data are needed, short-term HRT can at least be considered for mutation carriers undergoing early oophorectomy for ovarian and breast cancer risk reduction," said Dr. Susan M. Domchek, who presented the findings on behalf of the PROSE (Prevention and Observation of Surgical End Points) Consortium.

"I hear a lot from my patients these days that their relatives do not want to come in for genetic testing because they have been told that they are required to have a bilateral mastectomy and oophorectomy, and are not permitted to take HRT," she commented. "If this is dissuading women from coming in, we have to have a real conversation that, although data are limited, this may be an option for patients."

The PROSE database was developed by 20 centers in the United States and Europe that identified and prospectively followed women with a deleterious BRCA1 or BRCA2 mutation. For the study, the investigators focused on those who at ascertainment had at least one ovary, no prior breast or ovarian cancer, no prior bilateral mastectomy, and at least 6 months of follow-up.

Results were based on 1,299 women; 61% had a BRCA1 mutation and 39% had a BRCA2 mutation. (Those with both mutations were excluded.) Overall, 25% underwent risk-reducing salpingo-oophorectomy, and of this group, 45% used HRT afterward.
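
To make the nested percentages easier to picture, the rough calculation below converts them into approximate patient counts; because the report gives rounded proportions rather than raw numbers, these are estimates of ours, not exact study counts.

```python
# Approximate subgroup sizes implied by the rounded percentages above.
# These are our estimates, not exact counts from the report.
total_carriers = 1299
rrso = round(0.25 * total_carriers)    # underwent risk-reducing salpingo-oophorectomy: ~325
hrt_after_rrso = round(0.45 * rrso)    # of those, used HRT afterward: ~146

print(rrso, hrt_after_rrso)
```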

The mean duration of follow-up was 5.1 years among women who did not have oophorectomy and never used HRT, 3.6 years among women who had oophorectomy and never used HRT, and 5.4 years among women who had oophorectomy and used HRT.

Breast cancer was diagnosed in 22% of the women overall, but in only 13% of the subgroup who underwent oophorectomy.

Study results showed that women who used HRT after oophorectomy did not have an increased risk of breast cancer (and in fact tended to have a decreased risk) whether they were compared with women who did not use HRT after oophorectomy (hazard ratio, 0.78) or with women who did not have oophorectomy and never used HRT (HR, 0.43).

The findings were similar when BRCA1 carriers and BRCA2 carriers were analyzed individually, noted Dr. Domchek of the Abramson Cancer Center at the University of Pennsylvania in Philadelphia.

Additionally, there was no increased breast cancer risk according to the type of HRT taken after surgery, whether combined estrogen-progestin (taken by women who had only their ovaries and fallopian tubes removed) or estrogen only (taken by women who had had a hysterectomy as well).

Finally, in analyses restricted to women who did not undergo oophorectomy, those who used HRT any time after natural menopause did not have an increased risk of breast cancer, compared with their counterparts who never used HRT, and again tended to have a reduced risk (HR, 0.52).

"It’s worth pointing out that the mean age at the start of follow-up is significantly different between these two groups," Dr. Domchek cautioned, at 49 years in the former and 34 years in the latter. "And this really may be a different group that becomes menopausal without any cancer diagnosis."

Dr. Domchek acknowledged that the study was not randomized, that the numbers were small in some subgroups, and that follow-up was limited. But "the perfect can be the enemy of the good at times," she cautioned.

"These women have estrogen floating around their bodies now and want their ovaries out, so they don’t die of ovarian cancer. So even if [HRT] maintains their risk at where it is before their ovaries are out, at least they don’t get ovarian cancer," she said. "I really feel that if we wait [until the data are] perfect, and women won’t have an oophorectomy because they are terrified about menopause, [then] that hasn’t done them any good, either."

Additionally, many women may be fine with short-term use of HRT, which gets around the issue of elevated breast cancer risk seen with longer-term use of combination HRT in the Women’s Health Initiative. "If longer-term use [is desired], then you can have a discussion with women about hysterectomy so that they can take estrogen only," which was not found to increase risk. "I think these are subtleties of the counseling process as well."

Moreover, participants in the Women’s Health Initiative had a median age of 63 years, which was much older than the mean age of 38 years for the BRCA carriers studied. The former "are women who had gone through their whole natural life with estrogen and then [had taken] more, so potentially, it’s not relevant to this population of patients."

Discussant Dr. Lynn Hartmann, an oncologist at the Mayo Clinic in Rochester, Minn., cautioned about the pitfall of unknown biases in observational studies. "I can tell you from participating in cohort studies myself that there are biases that one cannot even imagine that can seep into your study sets," she said.

In the PROSE study, the types of cancers resulting from a BRCA mutation in a family might have influenced which women underwent oophorectomy. And a woman’s breast history (for example, atypia) might have influenced whether her physician offered HRT after oophorectomy.

Dr. Hartmann commended the investigators for developing a large, multi-institutional registry; conducting a high-quality study; and addressing an important, relevant clinical question.

"But I think we do have to have some skepticism when treatment questions are tried to be answered from these types of [study] designs," she said. "I would at least ask the PROSE team ... to consider whether or not they could move into prospective clinical trials with their cohorts."

Dr. Domchek and Dr. Hartmann reported that they had no relevant conflicts of interest.

FROM THE ANNUAL MEETING OF THE AMERICAN SOCIETY OF CLINICAL ONCOLOGY

Major Finding: Women who used HRT after oophorectomy did not have an increased risk of breast cancer whether compared with women who did not use HRT after oophorectomy (HR, 0.78) or with women who did not have oophorectomy and never used HRT (HR, 0.43).

Data Source: A prospective observational cohort study of 1,299 BRCA carriers in the PROSE database.

Disclosures: Dr. Domchek and Dr. Hartmann reported they had no relevant conflicts of interest.

Busulfan-Melphalan Superior as Myeloablative Tx for High-Risk Neuroblastoma

Article Type
Changed
Fri, 01/04/2019 - 11:42
Display Headline
Busulfan-Melphalan Superior as Myeloablative Tx for High-Risk Neuroblastoma

CHICAGO – The combination of busulfan and melphalan is superior to the combination of carboplatin, etoposide, and melphalan when used as myeloablative therapy in children with high-risk neuroblastoma, new data show.

In a randomized trial conducted by the SIOPEN (International Society of Pediatric Oncology European Neuroblastoma) Group among 563 such patients, busulfan plus melphalan yielded higher 3-year rates of event-free survival (49% vs. 33%; P less than .001) and overall survival (60% vs. 48%; P = .003), investigators reported in the plenary session at the annual meeting of the American Society of Clinical Oncology.
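
Expressed in absolute terms, those survival figures work out as follows; the short sketch below is illustrative only and simply restates the percentages reported above.

```python
# Illustrative restatement of the 3-year outcome figures above in absolute terms.
efs_bumel, efs_cem = 0.49, 0.33
os_bumel, os_cem = 0.60, 0.48

print(f"Event-free survival difference: {100 * (efs_bumel - efs_cem):.0f} per 100 children")  # 16
print(f"Overall survival difference:    {100 * (os_bumel - os_cem):.0f} per 100 children")    # 12
```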

It had a better toxicity profile overall as well. The main adverse effect of busulfan plus melphalan (BUMEL), as expected, was veno-occlusive disease, but only 5% of patients experienced a grade 3 occlusive event.

"This is the first time that in pediatric oncology that we can clearly demonstrate that the choice of the myeloablative therapy really matters," said principal investigator Dr. Ruth Ladenstein of the St. Anna Children’s Hospital and Research Institute in Vienna.

"Summing up all the results, we feel that current practice should now be in favor of busulfan-melphalan in high-risk neuroblastoma," she said.

Discussant Dr. Julie R. Park called the SIOPEN trial "a great achievement in pediatric clinical research," noting, for example, its collaborative nature and completion despite the use of two toxic myeloablative regimens.

Yet, she cautioned, the event-free survival rate of 33% for the CEM (carboplatin, etoposide, and melphalan) regimen was much lower than that observed in the previous COG (Children’s Oncology Group) A3973 trial of this regimen (46%), possibly because of different treatment strategies and patient populations, and dose-reductions of CEM for renal toxicity in the new trial.

"The SIOPEN trial does demonstrate that the busulfan-melphalan regimen is superior to carboplatin, etoposide, and melphalan in the context of receiving rapid COJEC [cisplatin, vincristine, carboplatin, etoposide, and cyclophosphamide] induction and in a cohort of patients with good response to induction therapy," said Dr. Park, chair of COG’s Neuroblastoma Scientific Committee and a pediatric oncologist at the University of Washington in Seattle. "The COG data indicate that these results may not be applicable to those children who have received the N6 Memorial Sloan-Kettering induction" regimen.

"The toxicities of busulfan-melphalan, primarily veno-occlusive disease, will need to be taken into account as we consider whether further modifications of consolidation therapy can occur," she added.

Perhaps of greater importance, the majority of the SIOPEN trial’s initially eligible patients were unable to undergo randomization because of an inadequate response to induction therapy, as has been seen on other trials.

"Future high-risk neuroblastoma trials must address the need for improved induction response, as poor induction remains a major barrier to our cure of these children," Dr. Park asserted. "And continued improvement of postconsolidation [therapy] needs to be studied, as that has shown our maximal success in treating these children."

The trial was the first high-risk neuroblastoma trial (HR-NBL1) undertaken by SIOPEN. The design called for rapid COJEC induction followed by peripheral stem cell harvest, a first round of local control (attempted complete surgery of the primary tumor), randomized myeloablative therapy with stem cell rescue, a second round of local control (radiation therapy to the primary tumor), and finally maintenance therapy.

Patients could proceed to randomized myeloablative therapy only if they had an adequate response of metastases to the rapid induction regimen and had an adequate number of stem cells harvested.

They were randomized to BUMEL or CEM. The busulfan was given orally until 2006, after which an intravenous form became available.

A preplanned interim analysis showed efficacy in favor of BUMEL, Dr. Ladenstein reported. The trial was therefore stopped early, after a median observation period of 3.5 years. The 563 randomized patients (just 43% of those initially enrolled) had a median age of 3 years. In all, 83% had stage IV disease.

Presenting the main efficacy results, she said, "we find [them] quite extraordinary and above our expectations."

"Most interestingly, this really was related to a decreased relapse rate under the busulfan-melphalan regimen and was not related to [decreased] transplant-related mortality," she noted.

Stratified analyses suggested that BUMEL had the greatest benefit in patients who had residual disease after induction. "We believe that this is related to the potency of the drugs to work on the resting tumor cells," Dr. Ladenstein commented.

In a multivariate analysis that included age, disease stage, and treatment group, patients still had a significantly reduced risk of events if they were assigned to BUMEL instead of CEM (hazard ratio, 0.64; P less than .001).

CEM was associated with higher rates of grade 3/4 infectious, gastrointestinal, and renal adverse effects, and ototoxicity. BUMEL was associated with a higher rate of grade 3/4 veno-occlusive disease; patients in the trial did not receive prophylactic anticoagulation with defibrotide, she noted.

Dr. Ladenstein and Dr. Park said they had no disclosures. The intravenous form of busulfan was provided by Pierre Fabre Médicament Oncology.

FROM THE ANNUAL MEETING OF THE AMERICAN SOCIETY OF CLINICAL ONCOLOGY

Major Finding: Compared with their peers who were given CEM, patients who were given BUMEL had superior 3-year rates of event-free survival (49% vs. 33%) and overall survival (60% vs. 48%).

Data Source: A randomized trial among 563 patients with high-risk neuroblastoma (the HR-NBL1/SIOPEN trial).

Disclosures: Dr. Ladenstein and Dr. Park said they had no disclosures. The intravenous form of busulfan was provided by Pierre Fabre Médicament Oncology.

Certain Antibodies Raise Rejection Risk in Heart Transplant Recipients

Article Type
Changed
Wed, 01/02/2019 - 08:10
Display Headline
Certain Antibodies Raise Rejection Risk in Heart Transplant Recipients

SAN DIEGO – Heart transplant recipients who develop circulating antibodies to human tissues in the first year post transplantation are at heightened risk for poor outcomes and may therefore need closer monitoring, suggests a prospective observational study.

One in seven of the patients studied developed circulating antibodies that specifically targeted human leukocyte antigens on donor tissue, and one in three developed nonspecific antibodies, according to results reported at the annual meeting of the International Society for Heart and Lung Transplantation.

    Dr. Jignesh Patel

Relative to their counterparts who did not develop any antibodies, patients who developed either type were more likely to experience both antibody-mediated and cellular rejection. In addition, those developing the donor-specific type were more likely to experience cardiac allograft vasculopathy and to die.

"Patients with donor-specific antibodies or nonspecific antibodies may require more intensive monitoring and augmented immunosuppression to improve their long-term outcomes," commented lead investigator Dr. Jignesh Patel, co–medical director of the heart transplant program at the Cedars-Sinai Heart Institute in Los Angeles. "Further studies are needed to determine the optimum therapy for these patients."

He acknowledged that the issue is complicated, because some patients with donor-specific antibodies (DSA) never experienced rejection, yet others with nonspecific antibodies did. These outcomes suggest that the nature of the antibodies is key. As a result, it is tricky to manage patients who develop antibodies but don’t have any symptoms of rejection.

At his institution, Dr. Patel said, clinicians don’t step up the number of biopsies performed to monitor for rejection in heart transplant recipients who develop antibodies unless they become symptomatic. However, they are cautious about long-term management of immunosuppression. "We will think twice about weaning them off prednisone," he noted. "More likely, we are kind of tending to switch them to a proliferation signaling inhibitor earlier when we see donor-specific antibodies."

Dr. Patel and his coinvestigators studied 144 patients who underwent heart transplantation in 2003-2010 and had serial antibody monitoring by solid-phase assays at baseline (the time of transplantation) and at 1, 3, 6, 9, and 12 months, at minimum.

"More recently introduced methods using solid-phase matrices coated with HLA antigens have demonstrated the ability to detect and identify HLA antibodies with high sensitivity and accuracy," he said.

Because the study period preceded the guidelines that recommended antibody monitoring, these patients were being followed more closely than usual out of concern that they were at heightened risk for antibody development, he said.

On average, the patients had seven antibody measurements during their first year post transplantation.

Study results showed that in the first year after transplantation, 14% of patients developed DSA and 32% developed non–donor-specific antibodies (non-DSA), while the rest did not develop any.
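
The approximate patient counts behind these proportions can be sketched from the cohort size given earlier; the figures below are back-of-the-envelope estimates of ours, not counts reported by the investigators.

```python
# Approximate counts implied by the rounded percentages above (cohort of 144).
n = 144
dsa = round(0.14 * n)        # donor-specific antibodies: ~20 patients ("one in seven")
non_dsa = round(0.32 * n)    # nonspecific antibodies: ~46 patients ("one in three")
no_antibodies = n - dsa - non_dsa

print(dsa, non_dsa, no_antibodies)  # 20, 46, 78
```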

The mean age (approximately 53 years) was similar across groups. Relative to those who did not develop any antibodies, patients who developed non-DSA were more likely to be female (54% vs. 22%). Also, ischemic time was shorter for patients who developed DSA (183 minutes) or non-DSA (195 minutes) than for their counterparts who did not develop any antibodies (230 minutes).

The three groups of patients were generally similar with respect to immunosuppressive therapy at baseline, including receipt of calcineurin inhibitors and antiproliferative agents.

But the group developing DSA was significantly less likely than the group not developing antibodies to be weaned off prednisone (7% vs. 46%), and both the DSA and non-DSA groups were more likely than their counterparts with no antibodies to have received induction therapy (45% and 39% vs. 15%).

The 1-year rate of freedom from antibody-mediated rejection was poorer for patients who developed DSA (65%) or non-DSA (76%), compared with their peers who developed no antibodies (94%). The findings were similar with respect to rates of freedom from acute cellular rejection (80% and 87% vs. 99%, respectively).

The temporal patterns did differ somewhat according to type of rejection, according to Dr. Patel.

"With regard to cellular rejection, it appeared that a lot of events in the patients who developed donor-specific antibodies occurred toward the end of the first year, in comparison to the patients who developed antibody-mediated rejection, where most of the events tended to occur early" post transplant, he observed.

Relative to their counterparts who did not develop antibodies, the patients who developed DSA also had significantly poorer 3-year rates of survival (65% vs. 85%) and freedom from cardiac allograft vasculopathy, which was defined as the development of vascular stenosis exceeding 30% (70% vs. 88%).

Dr. Patel reported that he had no conflicts of interest related to the study.

SAN DIEGO – Heart transplant recipients who develop circulating antibodies to human tissues in the first year post transplantation are at heightened risk for poor outcomes and may therefore need closer monitoring, suggests a prospective observational study.

One in seven of the patients studied developed circulating antibodies that specifically targeted human leukocyte antigens on donor tissue, and one in three developed nonspecific antibodies, according to results reported at the annual meeting of the International Society for Heart and Lung Transplantation.

    Dr. Jignesh Patel

Relative to their counterparts who did not develop any antibodies, patients who developed either type were more likely to experience both antibody-mediated and cellular rejection. In addition, those developing the donor-specific type were more likely to experience cardiac allograft vasculopathy and to die.

"Patients with donor-specific antibodies or nonspecific antibodies may require more intensive monitoring and augmented immunosuppression to improve their long-term outcomes," commented lead investigator Dr. Jignesh Patel, co–medical director of the heart transplant program at the Cedars-Sinai Heart Institute in Los Angeles. "Further studies are needed to determine the optimum therapy for these patients."

He acknowledged that the issue is complicated: some patients with donor-specific antibodies (DSA) never experienced rejection, whereas others with only nonspecific antibodies did. These outcomes suggest that the nature of the antibodies is key, and they make it difficult to decide how to manage patients who develop antibodies but have no signs of rejection.

At his institution, Dr. Patel said, clinicians don’t step up the number of biopsies performed to monitor for rejection in heart transplant recipients who develop antibodies unless they become symptomatic. However, they are cautious about long-term management of immunosuppression. "We will think twice about weaning them off prednisone," he noted. "More likely, we are kind of tending to switch them to a proliferation signaling inhibitor earlier when we see donor-specific antibodies."

Dr. Patel and his coinvestigators studied 144 patients who underwent heart transplantation in 2003-2010 and had serial antibody monitoring by solid-phase assays at baseline (the time of transplantation) and at 1, 3, 6, 9, and 12 months, at minimum.

"More recently introduced methods using solid-phase matrices coated with HLA antigens have demonstrated the ability to detect and identify HLA antibodies with high sensitivity and accuracy," he said.

Because the study period preceded the guidelines recommending antibody monitoring, these patients were being followed more closely than usual out of concern that they were at heightened risk for antibody development, he said.

On average, the patients had seven antibody measurements during their first year post transplantation.

Study results showed that in the first year after transplantation, 14% of patients developed DSA and 32% developed non–donor-specific antibodies (non-DSA), while the rest did not develop any.
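As a rough back-of-the-envelope check only, the short Python sketch below converts those percentages into approximate group sizes for the 144-patient cohort; the counts are estimates reconstructed from the rounded percentages, since exact group sizes were not reported.

    # Illustrative arithmetic only: group sizes estimated from the reported
    # cohort size (n = 144) and rounded percentages (14% DSA, 32% non-DSA);
    # the presentation did not report exact counts.
    n_patients = 144
    dsa_share = 0.14        # developed donor-specific antibodies
    non_dsa_share = 0.32    # developed non-donor-specific antibodies

    dsa_n = round(n_patients * dsa_share)          # ~20 patients
    non_dsa_n = round(n_patients * non_dsa_share)  # ~46 patients
    none_n = n_patients - dsa_n - non_dsa_n        # ~78 patients

    print(f"DSA ~{dsa_n}, non-DSA ~{non_dsa_n}, no antibodies ~{none_n}")
    # Consistent with the "one in seven" (DSA) and "one in three" (non-DSA)
    # fractions quoted earlier in the article.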

The mean age (approximately 53 years) was similar across groups. Relative to those who did not develop any antibodies, patients who developed non-DSA were more likely to be female (54% vs. 22%). Also, ischemic time was shorter for patients who developed DSA (183 minutes) or non-DSA (195 minutes) than for their counterparts who did not develop any antibodies (230 minutes).

The three groups of patients were generally similar with respect to immunosuppressive therapy at baseline, including receipt of calcineurin inhibitors and antiproliferative agents.

But the group developing DSA was significantly less likely than the group not developing antibodies to be weaned off prednisone (7% vs. 46%), and both the DSA and non-DSA groups were more likely than their counterparts with no antibodies to have received induction therapy (45% and 39% vs. 15%).

The 1-year rate of freedom from antibody-mediated rejection was poorer for patients who developed DSA (65%) or non-DSA (76%), compared with their peers who developed no antibodies (94%). The findings were similar with respect to rates of freedom from acute cellular rejection (80% and 87% vs. 99%, respectively).

The temporal patterns did differ somewhat according to type of rejection, according to Dr. Patel.

"With regard to cellular rejection, it appeared that a lot of events in the patients who developed donor-specific antibodies occurred toward the end of the first year, in comparison to the patients who developed antibody-mediated rejection, where most of the events tended to occur early" post transplant, he observed.

Relative to their counterparts who did not develop antibodies, the patients who developed DSA also had significantly poorer 3-year rates of survival (65% vs. 85%) and freedom from cardiac allograft vasculopathy, which was defined as the development of vascular stenosis exceeding 30% (70% vs. 88%).

Dr. Patel reported that he had no conflicts of interest related to the study.

Display Headline
Certain Antibodies Raise Rejection Risk in Heart Transplant Recipients
Article Source

FROM THE ANNUAL MEETING OF THE INTERNATIONAL SOCIETY FOR HEART AND LUNG TRANSPLANTATION

Vitals

Major Finding: Patients who developed donor-specific antibodies or non–donor-specific antibodies in the first year were more likely to experience rejection. The former were also more likely to experience cardiac allograft vasculopathy and to die.

Data Source: A prospective observational study of 144 heart transplant recipients who had serial antibody monitoring.

Disclosures: Dr. Patel reported that he had no relevant conflicts of interest.