Glaring gap in CV event reporting in pivotal cancer trials

Article Type
Changed
Thu, 12/15/2022 - 17:38

Clinical trials supporting Food and Drug Administration approval of contemporary cancer therapies frequently failed to capture major adverse cardiovascular events (MACE) and, when they did, reported rates 2.6-fold lower than those in a similarly aged noncancer population, new research shows.

Overall, 51.3% of trials did not report MACE, with that number reaching 57.6% in trials enrolling patients with baseline cardiovascular disease (CVD).

Nearly 40% of trials did not report any CVD events in follow-up, the authors reported online Feb. 10, 2020, in the Journal of the American College of Cardiology (2020;75:620-8).

“Even in drug classes where there were established or emerging associations with cardiotoxic events, often there were no reported heart events or cardiovascular events across years of follow-up in trials that examined hundreds or even thousands of patients. That was actually pretty surprising,” senior author Daniel Addison, MD, codirector of the cardio-oncology program at the Ohio State University Medical Center, Columbus, said in an interview.

The study was prompted by a series of events that crescendoed when his team was called to the ICU to determine whether a novel targeted agent played a role in the heart decline of a patient with acute myeloid leukemia. “I had a resident ask me a very important question: ‘How do we really know for sure that the trial actually reflects the true risk of heart events?’ to which I told him, ‘it’s difficult to know,’ ” he said.

“I think many of us rely heavily on what we see in the trials, particularly when they make it to the top journals, and quite frankly, we generally take it at face value,” Dr. Addison observed.
 

Lower rate of reported events

The investigators reviewed CV events reported in 97,365 patients (median age, 61 years; 46% female) enrolled in 189 phase 2 and 3 trials supporting FDA approval of 123 anticancer drugs from 1998 to 2018. Biologic, targeted, or immune-based therapies accounted for 72.5% of drug approvals.

Over 148,138 person-years of follow-up (median trial duration, 30 months), there were 1,148 MACE (375 heart failure events, 253 MIs, 180 strokes, 65 cases of atrial fibrillation, 29 coronary revascularizations, and 246 CVD deaths). MACE rates were higher in the intervention groups than in the control groups (792 vs. 356; P less than .01). Among the 64 trials that excluded patients with baseline CVD, there were 269 MACE.

To put this finding in context, the researchers examined the reported incidence of MACE among some 6,000 similarly aged participants in the Multi-Ethnic Study of Atherosclerosis (MESA). The overall weighted-average incidence rate was 1,408 per 100,000 person-years among MESA participants, compared with 542 events per 100,000 person-years among oncology trial participants (716 per 100,000 in the intervention arm). This represents a reported-to-expected ratio of 0.38 – a 2.6-fold lower rate of reported events (P less than .001) – and a risk difference of 866 events per 100,000 person-years.

Further, MACE reporting was lower by a factor of 1.7 among all cancer trial participants irrespective of baseline CVD status (reported-to-expected ratio, 0.56; risk difference, 613; P less than .001).
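For readers who want to trace the comparison, the reported-to-expected arithmetic can be reproduced from the rates quoted in the study (a minimal sketch; the rates, per 100,000 person-years, are taken directly from the article):

```python
# Rates per 100,000 person-years, as quoted in the study (illustrative only).
expected_rate = 1408   # weighted-average MACE incidence among MESA participants
reported_rate = 542    # reported MACE incidence among oncology trial participants

reported_to_expected = reported_rate / expected_rate  # ~0.38
fold_lower = expected_rate / reported_rate            # ~2.6-fold lower
risk_difference = expected_rate - reported_rate       # 866 per 100,000 person-years

print(round(reported_to_expected, 2), round(fold_lower, 1), risk_difference)
```

Note that these are reporting rates, not true event rates; the gap between them is the study's measure of underreporting.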

There was no significant difference in MACE reporting between independent and industry-sponsored trials, the authors reported.

No malicious intent

“There are likely some that might lean toward not wanting to attribute blame to a new drug when the drug is in a study, but I really think that the leading factor is lack of awareness,” Dr. Addison said. “I’ve talked with several cancer collaborators around the country who run large clinical trials, and I think often, when an event may be brought to someone’s attention, there is a tendency to just write it off as kind of a generic expected event due to age, or just something that’s not really pertinent to the study. So they don’t really focus on it as much.”

“Closer collaboration between cardiologists and cancer physicians is needed to better determine true cardiac risks among patients treated with these drugs.”

Breast cancer oncologist Marc E. Lippman, MD, of Georgetown University Medical Center and Georgetown Lombardi Comprehensive Cancer Center, Washington, D.C., isn’t convinced a lack of awareness is the culprit.

“I don’t agree with that at all,” he said in an interview. “I think there are very, very clear rules and guidelines these days for adverse-event reporting. I think that’s not a very likely explanation – that it’s not on the radar.”

Part of the problem may be that some of the toxicities, particularly cardiovascular, may not emerge for years, he said. Participant screening for the trials also likely removed patients with high cardiovascular risk. “It’s very understandable to me – I’m not saying it’s good particularly – but I think it’s very understandable that, if you’re trying to develop a drug, the last thing you’d want to have is a lot of toxicity that you might have avoided by just being restrictive in who you let into the study,” Dr. Lippman said.

The underreported CVD events may also reflect the rapidly changing profile of cardiovascular toxicities associated with novel anticancer therapies.

“Providers, both cancer and noncancer, generally put cardiotoxicity in the box of anthracyclines and radiation, but particularly over the last decade, we’ve begun to understand it’s well beyond any one class of drugs,” Dr. Addison said.

“I agree completely,” Dr. Lippman said. For example, “the checkpoint inhibitors are so unbelievably different in terms of their toxicities that many people simply didn’t even know what they were getting into at first.”
 

One size does not fit all

Javid Moslehi, MD, director of the cardio-oncology program at Vanderbilt University, Nashville, Tenn., said echocardiography – recommended to detect changes in left ventricular function in patients exposed to anthracyclines or targeted agents like trastuzumab (Herceptin) – isn’t enough to address today’s cancer therapy–related CVD events.

“Initial drugs like anthracyclines or Herceptin in cardio-oncology were associated with systolic cardiac dysfunction, whereas the majority of issues we see in the cardio-oncology clinics today are vascular, metabolic, arrhythmogenic, and inflammatory,” he said in an interview. “Echocardiography misses the big and increasingly complex picture.”

His group, for example, has been studying myocarditis associated with immunotherapies, but none of the clinical trials require screening or surveillance for myocarditis with a cardiac biomarker like troponin.

The group also recently identified 303 deaths in patients exposed to ibrutinib, a drug that revolutionized the treatment of several B-cell malignancies but is associated with higher rates of atrial fibrillation as well as increased bleeding risk. “So there’s a little bit of a double whammy there, given that we often treat atrial fibrillation with anticoagulation and where we can cause complications in patients,” Dr. Moslehi noted.

Although there needs to be closer collaboration between cardiologists and oncologists on individual trials, cardiologists also have to realize that oncology care has become very personalized, he suggested.

“What’s probably relevant for the breast cancer patient may not be relevant for the prostate cancer patient and their respective treatments,” Dr. Moslehi said. “So if we were to say, ‘every person should get an echo,’ that may be less relevant to the prostate cancer patient where treatments can cause vascular and metabolic perturbations or to the patient treated with immunotherapy who may have myocarditis, where many of the echos can be normal. There’s no one-size-fits-all for these things.”

Wearable technologies like smartwatches could play a role in improving the reporting of CVD events with novel therapies, but much more research is needed to validate these tools, Dr. Addison said. “But as we continue on into the 21st century, this is going to expand and may potentially help us,” he added.

In the interim, better standardization of the cardiovascular events reported in oncology trials is needed, particularly within the Common Terminology Criteria for Adverse Events (CTCAE), said Dr. Moslehi, who also serves as chair of the American Heart Association’s subcommittee on cardio-oncology.

“Cardiovascular definitions are not exactly uniform and are not consistent with what we in cardiology consider to be important or relevant,” he said. “So I think there needs to be better standardization of these definitions, specifically within the CTCAE, which is what the oncologists use to identify adverse events.”

In a linked editorial (J Am Coll Cardiol. 2020;75:629-31), Dr. Lippman and cardiologist Nanette Bishopric, MD, of the MedStar Heart and Vascular Institute in Washington, D.C., suggested it may also be time to organize a consortium that can carry out “rigorous multicenter clinical investigations to evaluate the cardiotoxicity of emerging cancer treatments,” similar to the Thrombosis in Myocardial Infarction Study Group.

“The success of this consortium in pioneering and targeting multiple generations of drugs for the treatment of MI, involving tens of thousands of patients and thousands of collaborations across multiple national borders, is a model for how to move forward in providing the new hope of cancer cure without the trade-off of years lost to heart disease,” the editorialists concluded.

The study was supported in part by National Institutes of Health grants, including a K12-CA133250 grant to Dr. Addison. Dr. Bishopric reported being on the scientific board of C&C Biopharma. Dr. Lippman reported being on the board of directors of and holding stock in Seattle Genetics. Dr. Moslehi reported having served on advisory boards for Pfizer, Novartis, Bristol-Myers Squibb, Deciphera, Audentes Pharmaceuticals, Nektar, Takeda, Ipsen, MyoKardia, AstraZeneca, GlaxoSmithKline, Intrexon, and Regeneron.

This article first appeared on Medscape.com.


U.S. heroin use: Good news, bad news?

Article Type
Changed
Mon, 03/22/2021 - 14:08

U.S. rates of heroin use, heroin use disorder, and heroin injections all increased overall among adults during a recent 17-year period, but rates have plateaued, new research shows.
 

Although on the face of it this may seem like good news, investigators at the Substance Abuse and Mental Health Services Administration (SAMHSA) note that the plateau in heroin use may simply reflect a switch to fentanyl.

“The recent leveling off of heroin use might reflect shifts from heroin to illicit fentanyl-related compounds,” wrote the investigators, led by Beth Han, MD, PhD, MPH.

The study was published online Feb. 11 as a research letter in JAMA (2020;323[6]:568-71).

National data

For the study, researchers collected data from a nationally representative group of adults aged 18 years or older who participated in the 2002-2018 National Survey on Drug Use and Health (NSDUH).

The analysis included 800,500 respondents during the study period. The mean age of respondents was 34.5 years, and 53.2% were women.

Results showed that the reported past-year prevalence of heroin use increased from 0.17% in 2002 to 0.32% in 2018 (average annual percentage change [AAPC], 5.6; 95% confidence interval [CI], 1.0-10.5; P = .02). During 2002-2016, the APC was 7.6 (95% CI, 6.3-9.0; P less than .001) but then plateaued during 2016-2018 (APC, –7.1; 95% CI, –36.9 to 36.7; P = .69).

The prevalence of heroin use disorder increased from 0.10% in 2002 to 0.21% in 2018 (AAPC, 6.0; 95% CI, 3.2-8.8; P less than .001). The rate remained stable during 2002-2008, increased during 2008-2015, then plateaued during 2015-2018.

The prevalence of heroin injection increased from 0.09% in 2002 to 0.17% in 2018 (AAPC, 6.9; 95% CI, 5.7-8.0; P less than .001), although there was a dip from the previous year. This rate increased during the study period among men, women, those aged 35-49 years, non-Hispanic whites, and those residing in the Northeast or West regions.
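The AAPC values above come from joinpoint regression across all survey years. As a rough sanity check, the compound annual growth implied by the 2002 and 2018 endpoints alone can be computed; this endpoint method is not the authors' approach, and it lands somewhat lower than the joinpoint AAPCs, consistent with growth concentrated mid-period followed by a plateau:

```python
# Back-of-the-envelope endpoint-based average annual percent change.
# Illustrative only: the study's AAPCs come from joinpoint regression
# on all survey years, not from the two endpoints.

def endpoint_growth(p_start: float, p_end: float, years: int) -> float:
    """Annual percent change implied by the two endpoint prevalences alone."""
    return ((p_end / p_start) ** (1 / years) - 1) * 100

# 2002 -> 2018 spans 16 one-year intervals
print(f"heroin use:   {endpoint_growth(0.17, 0.32, 16):.1f}%/yr (reported AAPC 5.6)")
print(f"use disorder: {endpoint_growth(0.10, 0.21, 16):.1f}%/yr (reported AAPC 6.0)")
print(f"injection:    {endpoint_growth(0.09, 0.17, 16):.1f}%/yr (reported AAPC 6.9)")
```

That the endpoint estimates sit below the reported AAPCs is expected when most of the increase occurred before the 2015-2016 plateau.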

For individuals up to age 25 years and those living in the Midwest, the heroin injection rate plateaued after earlier increases, although it rose overall across the study period.

In 2018, the rate of past-year heroin injection was highest in those in the Northeast, those up to age 49 years, men, and non-Hispanic whites.

More infectious disease testing

Prevalence of heroin injection did not increase among adults who used heroin or who had heroin use disorder. This, the researchers note, “suggests that increases in heroin injection are related to overall increases in heroin use rather than increases in the propensity to inject.”

Future research should examine differences in heroin injection trends across subgroups, the authors wrote.

The researchers advocate for expanding HIV and hepatitis testing and treatment, the provision of sterile syringes, and use of Food and Drug Administration–approved medications for opioid use disorders, particularly among populations at greatest risk – adults in the Northeast, those aged 18-49 years, men, and non-Hispanic whites.

“In parallel, interventions to prevent opioid misuse and opioid use disorder are needed to avert further increases in injection drug use,” they noted.

A limitation of the study was that the NSDUH excludes jail and prison populations and homeless people not in living shelters. In addition, the NSDUH is subject to recall bias.

The study was jointly sponsored by SAMHSA and the National Institute on Drug Abuse of the National Institutes of Health. One author reports owning stock in General Electric Co, 3M Co, and Pfizer Inc.

A version of this article first appeared on Medscape.com.


CRC task force updates colonoscopy follow-up guidance

Article Type
Changed
Wed, 05/26/2021 - 13:45

The U.S. Multi-Society Task Force on Colorectal Cancer (CRC) recently updated recommendations for patient follow-up after colonoscopy and polypectomy.

The new guidance was based on advancements in both research and technology since the last recommendations were published in 2012, reported lead author Samir Gupta, MD, AGAF, of the University of California, San Diego, and colleagues.

“[Since 2012,] a number of articles have been published on risk of CRC based on colonoscopy findings and patient characteristics, as well as the potential impact of screening and surveillance colonoscopy on outcomes, such as incident CRC and polyps,” the investigators wrote in Gastroenterology. “Further, recent studies increasingly reflect the modern era of colonoscopy with more awareness of the importance of quality factors (e.g., adequate bowel preparation, cecal intubation, adequate adenoma detection, and complete polyp resection), and utilization of state of the art technologies (e.g., high-definition colonoscopes).”

The task force, which comprised the American College of Gastroenterology, the American Gastroenterological Association, and the American Society of Gastrointestinal Endoscopy, identified key topics using PICO (patient, intervention, comparison, and outcome) questions before conducting a comprehensive literature review that included 136 articles. Based on these findings, two task force members generated recommendations that were further refined through consensus discussion. The recommendations were copublished in the March issues of the American Journal of Gastroenterology, Gastroenterology, and Gastrointestinal Endoscopy.

According to Dr. Gupta and colleagues, some of the new recommendations, particularly those that advise less stringent follow-up, may encounter resistance from various stakeholders.

“Patients, primary care physicians, and colonoscopists may have concerns about lengthening a previously recommended interval, and will need to engage in shared decision making regarding whether to lengthen the follow-up interval based upon the guidance here or utilize the recommendation made at the time of the prior colonoscopy,” the task force wrote.

The most prominent recommendations of this kind concern patients who undergo removal of tubular adenomas less than 10 mm in size. For patients who have 1-2 of these adenomas removed, the task force now recommends follow-up after 7-10 years, instead of the previously recommended interval of 5-10 years.

“[This decision was] based on the growing body of evidence to support low risk for metachronous advanced neoplasia,” the task force wrote. “In this population, the risk for metachronous advanced neoplasia is similar to that for individuals with no adenoma. Importantly, the observed risk for fatal CRC among individuals with 1-10 adenomas less than 10 mm is lower than average for the general population.”

Along similar lines, patients who undergo removal of 3-4 small adenomas now have a recommended 3-5 year follow-up window, instead of the previously strict recommendation for follow-up at 3 years.

But not all of the new guidance is less stringent. While the task force previously recommended a follow-up period of less than 3 years after removal of more than 10 adenomas, they now recommend follow-up at 1 year. This change was made to simplify guidance, the investigators wrote, noting that the evidence base in this area “has not been markedly strengthened” since 2012.

Compared with the old guidance, the updated publication offers more detailed recommendations for follow-up after removal of serrated polyps. On this topic, 10 clinical scenarios are presented, with follow-up ranging from 6 months after piecemeal resection of a sessile serrated polyp greater than 20 mm to 10 years after removal of 20 or fewer hyperplastic polyps less than 10 mm that were located in the rectum or sigmoid colon. Notably, these two recommendations are strong and based on moderate evidence, whereas the remaining recommendations for serrated polyps are weak and based on very-low-quality evidence.
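For orientation, the interval changes quoted in this article can be collected into a small lookup table. This is an illustrative sketch using only the scenarios mentioned above, not the task force's complete 10-scenario guidance, and the labels are shorthand rather than the guideline's wording:

```python
# Surveillance intervals (years until recommended follow-up colonoscopy)
# as quoted in this article. Sketch for orientation only; the full
# guidance contains many more scenarios and qualifiers.

FOLLOW_UP_YEARS = {
    "1-2 tubular adenomas <10 mm": (7, 10),   # was 5-10 years in 2012
    "3-4 small adenomas": (3, 5),             # was a strict 3 years
    ">10 adenomas": (1, 1),                   # was <3 years
    "piecemeal resection, sessile serrated polyp >20 mm": (0.5, 0.5),
    "<=20 hyperplastic polyps <10 mm, rectum/sigmoid": (10, 10),
}

def follow_up_range(finding: str) -> tuple:
    """Return (earliest, latest) recommended follow-up in years."""
    return FOLLOW_UP_YEARS[finding]

lo, hi = follow_up_range("1-2 tubular adenomas <10 mm")
print(f"1-2 small tubular adenomas: repeat colonoscopy in {lo}-{hi} years")
```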

Because of such knowledge gaps, the investigators emphasized the need for more data. The publication includes extensive discussion of pressing research topics and appropriate methods of investigation.

“Our review highlights several opportunities for research to clarify risk stratification and management of patients post-polypectomy,” the task force wrote. “In order to optimize risk-reduction strategies, the mechanisms driving metachronous advanced neoplasia after baseline polypectomy and their relative frequency need to be better understood through studies that include large numbers of patients with interval cancers and/or advanced neoplasia after baseline polypectomy. Mechanisms may include new/incident growth, incomplete baseline resection, and missed neoplasia; each of these potential causes may require different interventions for improvement.”

The task force also suggested that some basic questions beyond risk stratification remain unanswered, such as the impact of surveillance on CRC incidence and mortality.

“Such evidence is needed given the increasing proportion of patients who are having adenomas detected as part of increased participation in CRC screening,” the task force wrote.

Other suggested topics of investigation include age-related analyses that incorporate procedural risk, cost-effectiveness studies, and comparisons of nonendoscopic methods of surveillance, such as fecal immunochemical testing.

The study was funded by the National Institutes of Health and the Department of Veterans Affairs. The investigators reported relationships with Covidien, Ironwood, Medtronic, and others.

SOURCE: Gupta S et al. Gastroenterology. 2020 Feb 7. doi: 10.1053/j.gastro.2019.10.026.


C. auris Infection: Rare, But Raising Concerns About Pan-Resistance

Article Type
Changed
Wed, 02/12/2020 - 10:46
CDC researchers say the infection is “globally emerging” and cases with resistance to all 3 classes of commonly prescribed antifungal drugs have been reported in multiple countries.

Candida auris (C. auris) infection was first detected in New York in July 2016. As of June 2019, 801 patients in New York had been identified as having C. auris, and of those, 3 had pan-resistant infection.

CDC researchers say C. auris is “a globally emerging yeast.” Cases with resistance to all 3 classes of commonly prescribed antifungal drugs have been reported in multiple countries.

In New York, of the first 277 available clinical isolates, 276 were resistant to fluconazole and 170 were resistant to amphotericin B. None were resistant to echinocandins. Subsequent testing found 99.7% of 331 isolates from infected patients with susceptibilities were resistant to fluconazole, 63% were resistant to amphotericin B, and 4% were resistant to echinocandins. Three of the subsequent isolates were pan-resistant.
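Because the article reports the later susceptibility results only as percentages of the 331 tested isolates, the approximate isolate counts can be back-calculated; these are rough reconstructions, not counts taken from the source:

```python
# Back-calculate approximate resistant-isolate counts from the
# reported percentages among 331 tested isolates. Rounded estimates
# only; the source reports percentages, not raw counts.

N = 331
resistance_pct = {
    "fluconazole": 99.7,
    "amphotericin B": 63.0,
    "echinocandins": 4.0,
}

counts = {drug: round(N * pct / 100) for drug, pct in resistance_pct.items()}
for drug, n in counts.items():
    print(f"{drug}: ~{n} of {N} isolates resistant")
```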

The first 2 of those 3 patients were > 50 years old and residents of long-term care facilities. Each had multiple medical conditions, including ventilator dependence and colonization with multidrug-resistant bacteria. Neither patient was known to have received antifungal medications before the diagnosis of C. auris infection, but both were treated with prolonged courses of echinocandins after the diagnosis. Cultures taken after echinocandin therapy showed resistance to fluconazole, amphotericin B, and echinocandins. Both patients died, but the role of C. auris in their deaths is unclear.

The researchers found no epidemiologic links between the 2 patients. They were residents at different health care facilities, and neither had any known domestic or foreign travel. No pan-resistant isolates were identified among contacts or on environmental surfaces from their rooms or common equipment at the 3 facilities where they had been patients. Although C. auris was isolated from other patients, none was pan-resistant.

A retrospective review of all New York C. auris isolates turned up a third pan-resistant patient. The patient also was aged >50 years, had multiple comorbidities, and had a prolonged hospital and long-term care stay. However, the patient received care at a third, distinct facility. This third patient, who died of underlying medical conditions, was also not known to have traveled recently and had no known contact with the other 2 patients.

Isolates from all 3 patients were initially sensitive to echinocandins. Resistance was detected after treatment, indicating it emerged during treatment with the drugs. The researchers found no evidence of transmission.

Approximately 3 years after the beginning of the New York outbreak, the pan-resistant isolates still appear to be rare, the researchers say, but “their emergence is concerning.” They urge close monitoring for patients on antifungal treatment for C. auris, along with follow-up cultures and repeat susceptibility testing, especially in patients previously treated with echinocandins.

Publications
Topics
Sections
CDC researchers say the infection is “globally emerging” and cases with resistance to all 3 classes of commonly prescribed antifungal drugs have been reported in multiple countries.
CDC researchers say the infection is “globally emerging” and cases with resistance to all 3 classes of commonly prescribed antifungal drugs have been reported in multiple countries.

Candida auris (C. auris) infection was first detected in New York, in July 2016. As of June 2019, 801 patients have been identified in New York as having C auris—and of those, 3 had pan-resistant infection.

CDC researchers say C auris is “a globally emerging yeast.” Cases with resistance to all 3 classes of commonly prescribed antifungal drugs have been reported in multiple countries.

In New York, of the first 277 available clinical isolates, 276 were resistant to fluconazole and 170 were resistant to amphotericin B. None were resistant to echinocandins. Subsequent testing found 99.7% of 331 isolates from infected patients with susceptibilities were resistant to fluconazole, 63% were resistant to amphotericin B, and 4% were resistant to echinocandins. Three of the subsequent isolates were pan-resistant.

The first 2 of those 3 patients were older than 50 years and were residents of long-term care facilities. Each had multiple medical conditions, including ventilator dependence and colonization with multidrug-resistant bacteria. Neither patient was known to have received antifungal medications before the diagnosis of C. auris infection, but both were treated with prolonged courses of echinocandins after the diagnosis. Cultures taken after echinocandin therapy showed resistance to fluconazole, amphotericin B, and echinocandins. Both patients died, but the role of C. auris in their deaths is unclear.

The researchers found no epidemiologic links between the 2 patients: they were residents at different health care facilities, and neither had any known domestic or foreign travel. No pan-resistant isolates were identified among contacts or on environmental surfaces from their rooms or common equipment at the 3 facilities where they had been patients. Although C. auris was isolated from other patients, none was pan-resistant.



Study Warns of the Risk of Carbon Monoxide Poisoning in the Military

Article Type
Changed
Wed, 02/12/2020 - 10:40
Preventable CO exposures present a “unique and potentially lethal” risk for active duty service members and their beneficiaries.

Carbon monoxide (CO)—colorless, odorless, tasteless, and highly toxic—is one of the most common causes of unintentional poisoning deaths in the US. Researchers who described their analysis of CO-related incidents in the military for the Medical Surveillance Monthly Report say military activities, materials, and settings pose “unique and potentially lethal sources of significant CO exposure.”

They reported on episodes of CO poisoning among members of the US Armed Forces between 2009 and 2019 and expanded on reports that dated back to 2001. Their analysis included reserve members and nonservice member beneficiaries.

Over the 10 years, there were 1,288 confirmed/probable cases of CO poisoning among active component service members, 366 among reserve component service members, and 4,754 among nonservice member beneficiaries. The highest numbers of confirmed/probable CO poisoning cases among active-duty members were reported at Fort Carson, Colorado (60), and NMC San Diego, California (52).

Of the confirmed/probable cases among active-duty members, 613 were classified as unintentional, 538 as undetermined intent, and 136 as self-harm; 1 was due to assault. Most of the cases were related to work in repair/engineering occupations. Although the majority of sources were “other or unspecified,” motor vehicle exhaust accounted for 17% of the confirmed cases and all of the probable cases. Similarly, in the reserve component and among nonservice member beneficiaries, vehicle exhaust was the second-most common source.

The researchers found that CO poisoning-related injuries/diagnoses in the military often involved a single exposure that affected multiple personnel. For example, 21 soldiers showed symptoms during a multi-day exercise at the Yukon Training Center.

Excessive CO exposure is “entirely preventable,” the researchers say. Primary medical care providers—including unit medics and emergency medical technicians—should be knowledgeable about and sensitive to the “diverse and nonspecific” early clinical manifestations of CO intoxication, such as dizziness, headache, malaise, fatigue, disorientation, nausea, and vomiting. High CO exposure can cause more pronounced and severe symptoms, including syncope, seizures, acute stroke-like syndromes, and coma.

It’s important to remember, the researchers add, that increased oxygen demand from muscular activity exacerbates the symptoms of CO exposure, but individuals at rest may experience no other symptoms before losing consciousness.

An editorial comment notes that the full impact of morbidity and mortality from CO poisoning is difficult to estimate. For one thing, because the symptoms can be so nonspecific, clinicians may not consider CO poisoning when patients present for care.

This study differs from previous ones in that it uses code data from both the Ninth and Tenth Revisions of the International Classification of Diseases. Such data, the editorial comment says, can be used at national and Military Health System–wide levels with relatively few resources, providing useful information on trends and risk factors that can be used in designing interventions.



Functional outcomes of SLAH may be superior to those of open resection

Article Type
Changed
Fri, 02/28/2020 - 10:06

Among patients with medial temporal lobe epilepsy, functional outcomes of stereotactic laser amygdalohippocampotomy (SLAH) are superior to those of open resection, according to data presented at the annual meeting of the American Epilepsy Society. In addition, improvements in functional status are strongly associated with improvements in global cognitive performance.


Previous data have indicated that SLAH results in superior cognitive outcomes, compared with selective and standard open resection, in the treatment of medial temporal lobe epilepsy. The rates of seizure freedom following these procedures are equivalent. Daniel Drane, PhD, associate professor of neurology at Emory University in Atlanta, and colleagues hypothesized that the preservation of cognitive skills following SLAH would be apparent in real-world settings. To test this hypothesis, they investigated changes in functional status following SLAH.
 

Functional status correlated with neurocognitive change

Dr. Drane and colleagues compared functional outcomes in 53 patients who underwent SLAH at the Emory University Epilepsy Center and 20 patients who underwent open resection at the same center. The investigators created a hierarchical classification of functional status using the following criteria (from best to worst): employed and independent with all activities of daily living (ADLs), unemployed and independent with all ADLs, unemployed and independent with lower ADLs only (i.e., independent with self-care, but not with management of finances, medications, etc.), and unemployed and unable to manage any ADLs without assistance. Dr. Drane and colleagues rated all patients on these criteria at baseline and at 1 year after surgery. They classified patients as improving, declining, or remaining stable in functional status. Finally, the investigators used Fisher’s exact test to compare the proportional ratings of change between surgical procedures.

At baseline, the proportions of patients in each functional group were similar between patients who later underwent SLAH and those who later underwent open resection. Significantly more patients who underwent SLAH, however, had functional improvement, compared with patients who underwent open resection (13.2% vs. 0%). Furthermore, fewer patients who underwent SLAH had functional decline, compared with those who underwent open resection (3.7% vs. 35%).
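The between-group comparisons above used Fisher’s exact test. As a rough illustration only (not the authors’ analysis), the decline comparison, roughly 2 of 53 SLAH patients vs. 7 of 20 open-resection patients when counts are back-computed from the reported 3.7% and 35%, can be checked with a minimal pure-Python sketch of the two-sided test:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed table.
    """
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)

    def p_table(x):
        # probability that the top-left cell equals x, with margins fixed
        return comb(row1, x) * comb(n - row1, col1 - x) / denom

    p_obs = p_table(a)
    lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
    # small tolerance guards against floating-point ties
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))

# Functional decline: ~2 of 53 SLAH patients vs. 7 of 20 open-resection
# patients (counts back-computed from the reported 3.7% and 35%)
p = fisher_exact_two_sided(2, 51, 7, 13)
print(f"two-sided p = {p:.4f}")
```

In practice one would reach for scipy.stats.fisher_exact, which applies the same minimum-likelihood rule for the two-sided p-value; the sketch here avoids the dependency.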

Dr. Drane and colleagues found a strong correlation between functional status and global ratings of neurocognitive change, but no correlation between functional status and seizure-freedom status. Patients who underwent SLAH were less likely to have a decline in employment status than were patients who underwent open resection (4.2% vs. 45.4%).
 

Weighing surgical options for a given patient

“This study provides a real-world metric of meaningful change following surgery, which is, critically, independent of seizure freedom outcome,” said Dr. Drane. “If a patient becomes seizure free but declines in functional status, presumably due to compromised cognitive function, this outcome is likely not going to lead to a better quality of life. Overall, our data suggest that functional status is driven more by cognitive outcome than by seizure freedom, and that it is an equally important metric for determining whether or not surgery has been successful. We would hope that the epilepsy surgical team would try to balance the desire to achieve seizure freedom against the potential risks and benefits of surgery on cognitive performance and functional status.”

 

 

Neurologists must consider various factors when deciding whether open resection or SLAH is the better option for a given patient. “Our prior work has shown that SLAH will not cause naming or object recognition deficits, while such deficits will result in a substantial proportion of patients undergoing open resection procedures,” said Dr. Drane. “Declarative memory can seemingly be hurt by either procedure, although it would appear that rates of decline are substantially less following SLAH. As functional status appears to be related to cognitive outcome, SLAH would always be the better choice from the standpoint of risk analysis, particularly since one can almost always go back and complete an open resection at a later date.

“Seizure freedom rates appear to be slightly higher with open resection than with SLAH,” Dr. Drane continued. “This [result] would be the one factor that would represent the one reason to opt for an open resection rather than SLAH. Factors that might push one in this direction could be risk for SUDEP (i.e., someone at very high risk may want to just be done with the seizures) and impaired baseline cognitive functioning (i.e., someone with severely impaired cognitive functioning might be viewed as having less to lose). In the latter case, however, we would caution that low-functioning individuals can sometimes lose their remaining functional abilities even if we cannot do a very good job of measuring cognitive change in such cases due to their poor baseline performance.”

The hemisphere to undergo operation also may influence the choice of procedure. “Some epileptologists will suggest that the choice of using SLAH is more important for patients having surgery involving their language-dominant cerebral hemisphere,” said Dr. Drane. “While postsurgical deficits in these patients are clearly easier to identify, I would argue that a case can be made for starting with SLAH in the nondominant temporal lobe cases as well. Many of the functions that can be potentially harmed by surgical procedures involving the nondominant (typically right) hemisphere have more subtle effects, but their cumulative impact can still be harmful.”

The study was partially supported by funding from the National Institutes of Health and Medtronic. The investigators did not report any conflicts of interest.

SOURCE: Drane DL et al. AES 2019. Abstract 1.34.

Issue
Neurology Reviews- 28(3)

Among patients with medial temporal lobe epilepsy, functional outcomes of stereotactic laser amygdalohippocampotomy (SLAH) are superior to those of open resection, according to data presented at the annual meeting of the American Epilepsy Society. In addition, improvements in functional status are strongly associated with improvements in global cognitive performance.

Dr. Daniel Drane of Emory University, Atlanta
Dr. Daniel Drane

Previous data have indicated that SLAH results in superior cognitive outcomes, compared with selective and standard open resection, in the treatment of medial temporal lobe epilepsy. The rates of seizure freedom following these procedures are equivalent. Daniel Drane, PhD, associate professor of neurology at Emory University in Atlanta, and colleagues hypothesized that the preservation of cognitive skills following SLAH would be apparent in real-world settings. To test this hypothesis, they investigated changes in functional status following SLAH.
 

Functional status correlated with neurocognitive change

Dr. Drane and colleagues compared functional outcomes in 53 patients who underwent SLAH at the Emory University Epilepsy Center and 20 patients who underwent open resection at the same center. The investigators created a hierarchical classification of functional status using the following criteria (from best to worst): employed and independent with all activities of daily living (ADLs), unemployed and independent with all ADLs, unemployed and independent with lower ADLs only (i.e., independent with self-care, but not with management of finances, medications, etc.), and unemployed and unable to manage any ADLs without assistance. Dr. Drane and colleagues rated all patients on these criteria at baseline and at 1 year after surgery. They classified patients as improving, declining, or remaining stable in functional status. Finally, the investigators used Fisher’s exact test to compare the proportional ratings of change between surgical procedures.

At baseline, the proportions of patients in each functional group were similar between patients who later underwent SLAH and those who later underwent open resection. Significantly more patients who underwent SLAH, however, had functional improvement, compared with patients who underwent open resection (13.2% vs. 0%). Furthermore, fewer patients who underwent SLAH had functional decline, compared with those who underwent open resection (3.7% vs. 35%).

Dr. Drane and colleagues found a strong correlation between functional status and global ratings of neurocognitive change, but no correlation between functional status and seizure-freedom status. Patients who underwent SLAH were less likely to have a decline in employment status than were patients who underwent open resection were (4.2% vs. 45.4%).
 

Weighing surgical options for a given patient

“This study provides a real-world metric of meaningful change following surgery, which is, critically, independent of seizure freedom outcome,” said Dr. Drane. “If a patient becomes seizure free but declines in functional status, presumably due to compromised cognitive function, this outcome is likely not going to lead to a better quality of life. Overall, our data suggest that functional status is driven more by cognitive outcome than by seizure freedom, and that it is an equally important metric for determining whether or not surgery has been successful. We would hope that the epilepsy surgical team would try to balance the desire to achieve seizure freedom against the potential risks and benefits of surgery on cognitive performance and functional status.”

 

 

Neurologists must consider various factors when deciding whether open resection or SLAH is the better option for a given patient. “Our prior work has shown that SLAH will not cause naming or object recognition deficits, while such deficits will result in a substantial proportion of patients undergoing open resection procedures,” said Dr. Drane. “Declarative memory can seemingly be hurt by either procedure, although it would appear that rates of decline are substantially less following SLAH. As functional status appears to be related to cognitive outcome, SLAH would always be the better choice from the standpoint of risk analysis, particularly since one can almost always go back an complete an open resection at a later date.

“Seizure freedom rates appear to be slightly higher with open resection than with SLAH,” Dr. Drane continued. “This [result] would be the one factor that would represent the one reason to opt for an open resection rather than SLAH. Factors that might push one in this direction could be risk for SUDEP (i.e., someone at very high risk may want to just be done with the seizures) and impaired baseline cognitive functioning (i.e., someone with severely impaired cognitive functioning might be viewed as having less to lose). In the latter case, however, we would caution that low-functioning individuals can sometimes lose their remaining functional abilities even if we cannot do a very good job of measuring cognitive change in such cases due to their poor baseline performance.”

The hemisphere to undergo operation also may influence the choice of procedure. “Some epileptologists will suggest that the choice of using SLAH is more important for patients having surgery involving their language-dominant cerebral hemisphere,” said Dr. Drane. “While postsurgical deficits in these patients are clearly more easy to identify, I would argue that a case can be made for starting with SLAH in the nondominant temporal lobe cases as well. Many of the functions that can be potentially harmed by surgical procedures involving the nondominant (typically right) hemisphere have more subtle effects, but their cumulative impact can yet be harmful.”

The study was partially supported by funding from the National Institutes of Health and Medtronic. The investigators did not report any conflicts of interest.

SOURCE: Drane DL et al. AES 2019. Abstract 1.34.

Among patients with medial temporal lobe epilepsy, functional outcomes of stereotactic laser amygdalohippocampotomy (SLAH) are superior to those of open resection, according to data presented at the annual meeting of the American Epilepsy Society. In addition, improvements in functional status are strongly associated with improvements in global cognitive performance.

Dr. Daniel Drane of Emory University, Atlanta
Dr. Daniel Drane

Previous data have indicated that SLAH results in superior cognitive outcomes, compared with selective and standard open resection, in the treatment of medial temporal lobe epilepsy. The rates of seizure freedom following these procedures are equivalent. Daniel Drane, PhD, associate professor of neurology at Emory University in Atlanta, and colleagues hypothesized that the preservation of cognitive skills following SLAH would be apparent in real-world settings. To test this hypothesis, they investigated changes in functional status following SLAH.
 

Functional status correlated with neurocognitive change

Dr. Drane and colleagues compared functional outcomes in 53 patients who underwent SLAH at the Emory University Epilepsy Center and 20 patients who underwent open resection at the same center. The investigators created a hierarchical classification of functional status using the following criteria (from best to worst): employed and independent with all activities of daily living (ADLs), unemployed and independent with all ADLs, unemployed and independent with lower ADLs only (i.e., independent with self-care, but not with management of finances, medications, etc.), and unemployed and unable to manage any ADLs without assistance. Dr. Drane and colleagues rated all patients on these criteria at baseline and at 1 year after surgery. They classified patients as improving, declining, or remaining stable in functional status. Finally, the investigators used Fisher’s exact test to compare the proportional ratings of change between surgical procedures.

At baseline, the proportions of patients in each functional group were similar between patients who later underwent SLAH and those who later underwent open resection. Significantly more patients who underwent SLAH, however, had functional improvement, compared with patients who underwent open resection (13.2% vs. 0%). Furthermore, fewer patients who underwent SLAH had functional decline, compared with those who underwent open resection (3.7% vs. 35%).

Dr. Drane and colleagues found a strong correlation between functional status and global ratings of neurocognitive change, but no correlation between functional status and seizure-freedom status. Patients who underwent SLAH were less likely to have a decline in employment status than were patients who underwent open resection were (4.2% vs. 45.4%).
 

Weighing surgical options for a given patient

“This study provides a real-world metric of meaningful change following surgery, which is, critically, independent of seizure freedom outcome,” said Dr. Drane. “If a patient becomes seizure free but declines in functional status, presumably due to compromised cognitive function, this outcome is likely not going to lead to a better quality of life. Overall, our data suggest that functional status is driven more by cognitive outcome than by seizure freedom, and that it is an equally important metric for determining whether or not surgery has been successful. We would hope that the epilepsy surgical team would try to balance the desire to achieve seizure freedom against the potential risks and benefits of surgery on cognitive performance and functional status.”

 

 

Neurologists must consider various factors when deciding whether open resection or SLAH is the better option for a given patient. “Our prior work has shown that SLAH will not cause naming or object recognition deficits, while such deficits will result in a substantial proportion of patients undergoing open resection procedures,” said Dr. Drane. “Declarative memory can seemingly be hurt by either procedure, although it would appear that rates of decline are substantially less following SLAH. As functional status appears to be related to cognitive outcome, SLAH would always be the better choice from the standpoint of risk analysis, particularly since one can almost always go back an complete an open resection at a later date.

“Seizure freedom rates appear to be slightly higher with open resection than with SLAH,” Dr. Drane continued. “This [result] would be the one factor that would represent the one reason to opt for an open resection rather than SLAH. Factors that might push one in this direction could be risk for SUDEP (i.e., someone at very high risk may want to just be done with the seizures) and impaired baseline cognitive functioning (i.e., someone with severely impaired cognitive functioning might be viewed as having less to lose). In the latter case, however, we would caution that low-functioning individuals can sometimes lose their remaining functional abilities even if we cannot do a very good job of measuring cognitive change in such cases due to their poor baseline performance.”

The hemisphere to undergo operation also may influence the choice of procedure. “Some epileptologists will suggest that the choice of using SLAH is more important for patients having surgery involving their language-dominant cerebral hemisphere,” said Dr. Drane. “While postsurgical deficits in these patients are clearly more easy to identify, I would argue that a case can be made for starting with SLAH in the nondominant temporal lobe cases as well. Many of the functions that can be potentially harmed by surgical procedures involving the nondominant (typically right) hemisphere have more subtle effects, but their cumulative impact can yet be harmful.”

The study was partially supported by funding from the National Institutes of Health and Medtronic. The investigators did not report any conflicts of interest.

SOURCE: Drane DL et al. AES 2019. Abstract 1.34.

Issue
Neurology Reviews- 28(3)

REPORTING FROM AES 2019

Publish date: February 12, 2020

Epidermolysis bullosa classification criteria refined and ready


Revised classification criteria for epidermolysis bullosa (EB) demonstrate how far researchers and clinicians have come in understanding this debilitating group of genetic skin diseases, but also how far there is still to go towards improving the management of those affected.

Dr. Cristina Has of the University of Freiburg, Germany (Sara Freeman/MDedge News)

Previous criteria issued in 2014 represented “important progress” and “built on the achievements of several generations of physicians and researchers who described the phenotypes, the level of skin cleavage, developed and characterized antibodies, and discovered EB-associated genes,” Cristina Has, MD, said at the EB World Congress, organized by the Dystrophic Epidermolysis Bullosa Association (DEBRA).

Dr. Has, a senior dermatologist and professor of experimental dermatology at the University of Freiburg (Germany), observed that the prior criteria had “introduced genetic and molecular data in a so-called onion-skin classification of EB, and removed most of the eponyms,” an approach maintained in the latest update.

“What is new, and probably the most important change, is making the distinction between classical EB and other disorders with skin fragility,” she said, noting that the revised classification criteria for EB included minor changes to the nomenclature of EB. Six new EB subtypes and genes have also been added, and there are new sections on genotype/phenotype correlations, disease-modifying factors, and the natural history of EB. Furthermore, the supporting information includes a concise description of the clinical and genetic features of all EB types and subtypes.

The updated criteria are the result of an expert meeting held in April 2019 and have been accepted for publication. The expert panel that developed the criteria think that the revised classification criteria will be “useful and, we hope, inspiring and motivating for the young generation of dermatologists, pediatricians, and for the researchers who work in this field,” Dr. Has said.

“The term EB has been used in the last years for many new disorders, and this is the reason why we thought we have to somehow control this, and to make the distinction between classical epidermolysis bullosa due to defects at the dermal junction and other disorders with skin fragility where the anomalies occur within other layers of the epidermis or in the dermis,” Dr. Has explained.

There are still four main types of classical EB: EB simplex (EBS), dystrophic EB (DEB), junctional EB, and Kindler EB, but there are now 34 subtypes, slightly fewer than before. The updated criteria distinguish between the types and subtypes according to the level of skin cleavage, the inheritance pattern, the mutated gene, and the targeted protein, Dr. Has said.

As for peeling disorders, these have been classified as erosive or hyperkeratotic, or as affecting the connective tissue with skin blistering. Like classical EB, these disorders are associated with fragility of the skin and mucosa and share some pathogenetic mechanisms. Moreover, as “the suffering of the patient is similar,” Dr. Has said, “we’d like to consider them under the umbrella of EB.” Most of the disorders she listed were inherited via an autosomal recessive mechanism, with intraepidermal disorders inherited via an autosomal dominant mechanism. New genes are being identified all the time, she added, so these groupings will no doubt be subject to future revisions.

Minor changes to nomenclature were made to avoid confusion among clinicians and those living with the condition. As such, Kindler EB replaces Kindler syndrome, names of some subtypes were simplified, and a new “self-improving” type of DEB was introduced to replace the term “transient dermolysis of the newborn.” Altogether, there are now 11 subtypes of DEB. A distinction was also made between syndromic and nonsyndromic EB. “We all know that EB can be a systemic disorder with secondary manifestations within different organs,” Dr. Has told conference attendees. Anemia and failure to thrive can be associated, but it still remains a nonsyndromic disorder, she said. By contrast, “syndromic EB is due to genetic defects, which are also expressed in other organs than the skin or mucosal membranes, and lead to primary extracutaneous manifestations, such as cardiomyopathy, nephropathy, and so on.”

There are fewer subtypes of EBS and “we think they are better defined,” Dr. Has stated. “EB simplex is the most heterogeneous EB type, clinically and genetically, and includes several syndromic disorders,” and the new classification criteria should be useful in helping categorize individuals with EBS and thus help target their management.

One of the six new subtypes of EB included in the revised classification criteria is “syndromic EBS with cardiomyopathy,” caused by mutations in the KLHL24 gene. The gene was discovered in 2016, and more than 40 cases have been identified so far, 50% of which have been sporadic de novo mutations.

Other new EB subtypes are:

  • “EBS with localized nephropathy” caused by a mutation in the CD151 gene.
  • An autosomal recessive EBS linked to the KRT5 gene.
  • A new phenotype that manifests with oral mucosal blisters linked to the DSG3 gene. (Although only a single case has been reported to date, it was felt worthy of inclusion.)
  • Another linked to DSG3 that leads to skin fragility and hypertrichosis.
  • A new dystrophic EB subtype linked to mutations in the PLOD3 gene.

In an interview, Dr. Has reiterated the importance of keeping classification criteria updated in line with current research findings. She emphasized that there were many types of EB and how important it was to refine how these were classified based on the underlying genetics.

“We brought much more genetic data into the paper, because we are in the era of personalized medicine,” she said. “There are specific therapies for mutations and for different subtypes and that’s why we think that, step by step, we have to bring in more and more data into the classification.”

There are many people with EBS, she observed, and while these individuals may not have such a dramatic clinical presentation as those with recessive DEB, for example, the effect of the condition on their daily lives is no less. “These people are active, they have jobs, they have to work, and they have pain, they have blister,” Dr. Has said.

While the criteria are intended only for the classification of EB, they might also help in practice. Dr. Has gave an anecdotal example of a woman who had been misdiagnosed as having a type of DEB with a high risk of squamous cell carcinoma (SCC) but in fact had a different form of EB with no risk of developing SCC. “That’s why criteria are important,” she said.

Dr. Has had no conflicts of interest to disclose.
 


REPORTING FROM EB 2020

Work the program for NP/PAs, and the program will work


A ‘knowledge gap’ in best practices exists

Hospital medicine has been the fastest growing medical specialty since the term “hospitalist” was coined by Bob Wachter, MD, in the famous 1996 New England Journal of Medicine article (doi: 10.1056/NEJM199608153350713). Growth and change within the specialty also reflect the shifting target of hospitals and hospital systems as they move, effectively and safely, from fee-for-service to payer models that reward value and improvement in the health of a population – both inside and outside hospital walls.

Tracy Cardin

In a short time, nurse practitioners and physician assistants have become a growing population in the hospital medicine workforce. The 2018 State of Hospital Medicine Report notes a 42% increase in 4 years, and about 75% of hospital medicine groups across the country currently incorporate NP/PAs within a hospital medicine practice. This evolution has occurred in the setting of a looming and well-documented physician shortage; a variety of cost pressures on hospitals that reflect the need for an efficient, cost-effective care delivery model; a growing NP/PA workforce (the Department of Labor projects increases of 35% and 36%, respectively, by 2036); and data that indicate similar outcomes with NP/PA-driven care – for example, in HCAHPS (Hospital Consumer Assessment of Healthcare Providers and Systems) scores, readmissions, and morbidity and mortality.

This evolution, however, reveals a true knowledge gap in best practices for integrating these providers. The gap is compounded by wide variability in the preparation of NPs, who may enter hospitalist practice from a variety of clinical exposures and training – for example, adult-gerontology acute care, adult, or even, in some states, family NP programs. For PAs, it is reflected in the variety of clinical rotations and pregraduate clinical exposure.

This variability is compounded, too, by the lack of standardization of hospital medicine practices, both with site size and patient acuity, a variety of challenges that drive the need for integration of NP/PA providers, and by-laws that define advanced practice clinical models and function.

In that perspective, it is important to define what constitutes a leading and successful advanced practice provider (APP) integration program. I would suggest:

  • A structured and formalized transition-to-practice program for all new graduates and those new to hospital medicine. This program should consist of clinical volume progression, formalized didactic education congruent with the Society of Hospital Medicine Core Competencies, and a process for evaluating knowledge and decision making throughout the program and upon completion.
  • Development of physician competencies related to APP integration. Physicians are not prepared in their medical school training or residency to understand the differences and similarities of NP/PA providers. These competencies should be required and can best be developed through steady leadership, formalized instruction and accountability for professional teamwork.
  • Allowance for NP/PA providers to work at the top of their skills and license. This means utilizing NP/PAs as providers who care for patients – not as scribes or clerical workers. The evolution of the acuity of patients provided for may evolve with the skill set and experience of NP/PAs, but it will evolve – especially if steps 1 and 2 are in place.
  • Productivity expectations that approach physician-level volume. In 2016 State of Hospital Medicine Report data, yearly billable encounters for NP/PAs were within 10% of those of physicians. I think 15% is a reasonable goal.
  • Implementation and support of APP administrative leadership structure at the system/site level. This can be as simple as having APPs on the same leadership committees as physician team members, being involved in hiring and training newer physicians and NP/PAs or as broad as having all NP/PAs report to an APP leader. Having an intentional leadership structure that demonstrates and reflects inclusivity and belonging is crucial.

Consistent application of these frameworks will provide a strong infrastructure for successful NP/PA practice.

Ms. Cardin is currently the vice president of advanced practice providers at Sound Physicians and serves on SHM’s board of directors as its secretary. This article appeared initially at the Hospital Leader, the official blog of SHM.





In hysterectomy, consider wider risks of ovary removal

Article Type
Changed
Wed, 02/12/2020 - 09:54

While it’s fading in popularity, ovary removal in hysterectomy is still far from uncommon. A gynecologic surgeon urged colleagues to give deeper consideration to whether the ovaries can stay in place.

“Gynecologists should truly familiarize themselves with the data on cardiovascular, endocrine, bone, and sexual health implications of removing the ovaries when there isn’t a medical indication to do so,” Amanda Nickles Fader, MD, director of the Kelly gynecologic oncology service and the director of the center for rare gynecologic cancers at Johns Hopkins Hospital, Baltimore, said in an interview following her presentation at the Pelvic Anatomy and Gynecologic Surgery Symposium.

“Until I started giving this talk, I thought I knew this data. However, once I took a deeper dive into the studies of how hormonally active the postmenopausal ovaries are, as well as the population-based studies demonstrating worse all-cause mortality outcomes in low-risk women who have their ovaries surgically removed prior to their 60s, I was stunned at how compelling this data is,” she said.

The conventional wisdom about ovary removal in hysterectomy has changed dramatically over the decades. As Dr. Nickles Fader explained in the interview, “in the ’80s and early ’90s, the mantra was ‘just take everything out’ at hysterectomy surgery – tubes and ovaries should be removed – without understanding the implications. Then in the late ’90s and early 2000s, it was a more selective strategy of ‘wait until menopause to remove the ovaries.’ ”

Now, “more contemporary data suggests that the ovaries appear to be hormonally active to some degree well into the seventh decade of life, and even women in their early 60s who have their ovaries removed without a medical indication may be harmed.”

Still, ovary removal occurs in about 50%-60% of the 450,000-500,000 hysterectomies performed each year in the United States, Dr. Nickles Fader said at the meeting, which was jointly provided by Global Academy for Medical Education and the University of Cincinnati. Global Academy and this news organization are owned by the same company.

These findings seem to suggest that messages about the potential benefits of ovary preservation are not getting through to surgeons and patients.

Indeed, a 2017 study of 57,776 benign premenopausal hysterectomies with ovary removal in California from 2005 to 2011 found that 38% had no documented sign of an appropriate diagnosis signaling a need for oophorectomy. These included “ovarian cyst, breast cancer susceptibility gene carrier status, and other diagnoses,” the study authors wrote (Menopause. 2017 Aug;24[8]:947-53).

Dr. Nickles Fader emphasized that ovary removal is appropriate in cases of gynecologic malignancy, while patients at high genetic risk of ovarian cancer may consider salpingo-oophorectomy or salpingectomy.

What about other situations? She offered these pearls in the presentation:

  • Don’t remove ovaries before age 60 “without a good reason” because the procedure may lower lifespan and increase cardiovascular risk.
  • Ovary removal is linked to cognitive decline, Parkinson’s disease, depression and anxiety, glaucoma, sexual dysfunction, and bone fractures.
  • Ovary preservation, in contrast, is linked to improvement of menopausal symptoms, sleep quality, urogenital atrophy, skin conditions, and metabolism.
  • Fallopian tubes may be the true trouble area. “The prevailing theory amongst scientists and clinicians is that ‘ovarian cancer’ is in most cases a misnomer, and most of these malignancies start in the fallopian tube,” Dr. Nickles Fader said in the interview.

“It’s a better time than ever to be thoughtful about removing a woman’s ovaries in someone who is at low risk for ovarian cancer. The new, universal guideline is that instead of removing ovaries in most women undergoing hysterectomy, it’s quite important to consider removing just the fallopian tubes to best optimize cancer risk reduction and general health outcomes.”

Dr. Nickles Fader disclosed consulting work for Ethicon and Merck.



Article Source

EXPERT ANALYSIS FROM PAGS 2019


Model reveals genes associated with prognosis in ER+, HER2– breast cancer

Article Type
Changed
Wed, 01/04/2023 - 16:43

A machine learning–assisted prognostication model identified genes in the tumor microenvironment that are strongly associated with worse prognosis in patients with stage III, estrogen receptor-positive, HER2-negative breast cancer, according to new research.

Dr. Yara Abdou, a hematology-oncology fellow at Roswell Park Comprehensive Cancer Center, Buffalo, N.Y. (photo: Sharon Worcester/MDedge News)

Yara Abdou, MD, of Roswell Park Comprehensive Cancer Center in Buffalo, N.Y., and colleagues presented this work in a poster at the ASCO-SITC Clinical Immuno-Oncology Symposium.

The model used 50 cycles of machine learning to cluster 98 patients from The Cancer Genome Atlas Program into high- and low-risk groups based on mRNA expression of 26 gene groups.

The gene groups consisted of 191 genes enriched in cellular and noncellular elements of the tumor microenvironment. Mutational burden and clinical outcomes data for the patients also were considered, Dr. Abdou explained in an interview.

Patients were assigned to groups by K-means clustering, Kaplan-Meier curves were created for each group, survival differences between the two groups were assessed, and correlations among the various gene groups were analyzed.
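
The study's code is not published, so as a rough illustration only, the clustering step might look like the following minimal pure-Python K-means (Lloyd's algorithm) sketch. The patient vectors, the number of gene groups, and the mapping of 50 "cycles" to 50 iterations are all invented for the example; the actual analysis used 26 gene-group expression values per patient.

```python
import math
import random

def kmeans(points, k=2, n_iter=50, seed=0):
    """Cluster expression vectors with plain K-means (Lloyd's algorithm)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(n_iter):
        # Assign each point to its nearest centroid (Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            idx = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[idx].append(p)
        # Recompute each centroid as the mean of its assigned points.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = tuple(
                    sum(dim) / len(members) for dim in zip(*members)
                )
    labels = [min(range(k), key=lambda i: math.dist(p, centroids[i]))
              for p in points]
    return labels, centroids

# Six hypothetical patients, each summarized by two gene-group
# expression values (real data would have 26 per patient).
patients = [(0.1, 0.2), (0.2, 0.1), (0.15, 0.25),   # low-expression profiles
            (0.9, 1.0), (1.1, 0.8), (0.95, 0.9)]    # high-expression profiles
labels, _ = kmeans(patients, k=2, n_iter=50)
```

With well-separated profiles like these, the two resulting labels correspond to the "high-risk" and "low-risk" groupings described above.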

Five identified genes were associated with poor prognosis: LOXL2, PHEX, ACTA2, MEGF9, and TNFSF4. Fifteen genes were associated with good prognosis: CD8A, CD8B, FCRL3, GZMK, CD3E, CCL5, TP53, ICAM3, CD247, IFNG, IFNGR1, ICAM4, SHH, HLA-DOB, and CXCR3.

The Kaplan-Meier curves showed a significant difference in survival between the two groups (hazard ratio, 2.878; P = .05), supporting the validity of the risk score modeling, Dr. Abdou said.
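
For readers unfamiliar with how such curves are built, here is a minimal sketch of the Kaplan-Meier product-limit estimator for one group. The follow-up times and censoring flags are invented; the study's actual survival data are not public.

```python
def kaplan_meier(times, events):
    """Return (time, S(t)) points; events[i] is 1 for death, 0 for censoring."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # Deaths at this exact time (censored subjects contribute 0).
        deaths = sum(e for tt, e in data if tt == t)
        n_t = at_risk
        # Advance past every subject with this time (deaths and censorings).
        while i < len(data) and data[i][0] == t:
            at_risk -= 1
            i += 1
        if deaths:
            # Product-limit update: S(t) = S(t-) * (1 - d_t / n_t).
            surv *= 1 - deaths / n_t
            curve.append((t, surv))
    return curve

# Five hypothetical subjects: events at times 2, 4, 4; censored at 3 and 6.
curve = kaplan_meier([2, 3, 4, 4, 6], [1, 0, 1, 1, 0])
```

Each group's curve steps down only at event times, and comparing the two groups' curves (e.g., with a log-rank test) yields the survival difference reported above.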

Immune profiling showed that expression of genes associated with desmoplastic reaction, neutrophils, and immunosuppressive cytokines were higher in the high-risk group, whereas expression of genes related to immune system activation were higher in the low-risk group (P less than .05).

Stroma in the tumor microenvironment is known to affect prognosis and response to therapy in patients with breast cancer, but few mathematical models exist to determine prognosis based on mRNA expression in the tumor microenvironment, Dr. Abdou said, explaining the rationale for the study.

The findings suggest that when genomic profile information is available for a given patient in the clinic, this machine learning–assisted risk scoring approach could have prognostic value, she said, noting that the model also will be assessed in patients with other types of breast cancer.

Dr. Abdou reported having no disclosures.

SOURCE: Abdou Y et al. ASCO-SITC. Poster A3.



Article Source

REPORTING FROM THE CLINICAL IMMUNO-ONCOLOGY SYMPOSIUM
