Big data ready to revolutionize infectious diseases research

An all-encompassing definition of “big data” in health care remains elusive, but most researchers and public health experts believe big data may be poised to revolutionize the field of infectious diseases research.

Big data could provide the means to finally achieve effective and timely surveillance systems, which form a “pillar of infectious disease control,” according to Shweta Bansal, PhD, of the department of biology, Georgetown University, Washington, and her associates (J Infect Dis. 2016 Nov;214[S4]:S375-9. doi: 10.1093/infdis/jiw400).

For infectious diseases researchers, surveillance systems are critical because they track disease incidence and mortality rates as pathogens spread across the globe. “An ideal surveillance system is representative of the population, flexible, economic, and resilient, with timely reporting and validation of its outputs,” Lone Simonsen, PhD, of the department of public health, University of Copenhagen, Denmark, and her associates wrote in an accompanying article in the same supplement (J Infect Dis. 2016 Nov;214[S4]:S380-5. doi: 10.1093/infdis/jiw376).

“Further, full situational awareness requires availability of multiple surveillance data streams that capture mild and severe clinical outcomes (death certificates, hospital admissions, and emergency department and outpatient visits), as well as laboratory-based information (confirmed cases, genetic sequences, and serologic findings),” Dr. Simonsen added.

But unlike marketing or meteorology, two fields that have “perfected the art of real-time acquisition and analysis of highly resolved digital data,” the field of infectious diseases research has suffered from slow and incomplete surveillance of emerging and reemerging pathogens and pandemics, Dr. Bansal said.

What has changed in recent years is that physicians and researchers now have better access to patient information. Today, electronic health records and nontraditional patient data sources such as social media and remote sensing technology provide multiple surveillance data streams, and millions of people around the world can participate as the Internet, cell phones, and computers pervade even low-income countries.

Several private and federal public health agencies have already launched successful initiatives “to use electronic data and patient records in a more timely fashion to track important events,” Dr. Simonsen said. For example, the Food and Drug Administration’s Sentinel Initiative aims to augment traditional surveillance (which relies on passive case reporting by physicians) with private sector electronic health data to identify severe adverse drug events.

The Centers for Disease Control and Prevention’s BioSense platform collects electronic health records to achieve “real-time awareness and tracking of pandemic influenza or any other novel health threat.” Google tracks influenza epidemics by analyzing Internet search query data. In Germany, researchers use medical claims data to track vaccination rates. In Canada, public health analysts compile multiple sources of disease outbreak information into online computational systems and then use this information to identify and track novel outbreaks and drug resistance.

The authors of these two papers warn that while big data is promising, it must be “balanced by caution.” Privacy concerns, barriers in access to e-health systems, and ill-fitting big data models must be addressed, and continued validation against traditional surveillance systems is imperative.

The authors of both papers reported no relevant conflicts of interest.

FROM THE JOURNAL OF INFECTIOUS DISEASES

REBOA may be a safe alternative to RTACC in the acute care setting

Resuscitative endovascular balloon occlusion of the aorta (REBOA) could be an acceptable alternative to thoracotomy in traumatic arrest patients who are hemorrhaging below the diaphragm, according to the results of a small pilot study presented by William Teeter, MD, at the annual clinical congress of the American College of Surgeons.

Furthermore, virtual simulation training sufficiently prepares surgeons to safely use the REBOA technique in the acute care setting, a separate study found. Importantly, this training has the potential to allow REBOA to become a widespread tool for surgeons regardless of their endovascular surgical experience.

REBOA is an emerging and less invasive method of aortic occlusion during traumatic arrest. “Recent evidence published in the Journal of Trauma suggests that REBOA has similar outcomes to resuscitative thoracotomy with aortic cross-clamping or RTACC,” said Dr. Teeter, who is currently an emergency medicine resident at the University of North Carolina, Chapel Hill, but conducted this research during a fellowship at the University of Maryland Medical Center’s R Adams Cowley Shock Trauma Center in Baltimore.

Dr. Teeter presented the preliminary results of a pilot study involving 19 patients who received RTACC between 2008 and 2013 and 17 patients who received REBOA between 2013 and 2015. All study participants were trauma patients who arrived at the R Adams Cowley Shock Trauma Center in arrest or arrested shortly after arrival.

Age, gender, Glasgow Coma Scale score, and Injury Severity Score were the same or similar between the two groups, Dr. Teeter reported. Mean systolic blood pressure at admission was 14 mmHg for the REBOA group and 28 mmHg for the RTACC group; however, the majority of patients (82% of REBOA patients and 73% of RTACC patients) arrived with a blood pressure of 0.

Importantly, patients in the RTACC group who had penetrating chest injury were excluded from this analysis, Dr. Teeter noted, adding that there was a slightly higher incidence of blunt trauma within the REBOA group, likely due to “a change in practice at the trauma center during this time.”

All resuscitations were captured with real-time videography. Continuous vitals were also collected and analyzed.

While more RTACC patients survived to the operating room (53% for REBOA vs. 68% for RTACC), more patients in the REBOA group experienced return of spontaneous circulation (53% for REBOA vs. 37% for RTACC). However, neither of these differences was statistically significant.

Following occlusion of the aorta, blood pressure, taken from continuous vital signs and averaged over a 15-minute period, was 80 mmHg for the REBOA group and 46 mmHg for the RTACC group. Again, this difference was not statistically significant but trended toward favoring REBOA.

Overall, patient survival was dismal. Only one patient who received REBOA survived.

Following Dr. Teeter’s presentation, the study’s assigned discussant, Nicole A. Stassen, MD, of the University of Rochester Medical Center, N.Y., noted that, while post-occlusion blood pressure was higher for the REBOA group, it seemed not to matter because the majority of patients did not survive. Dr. Stassen also asked if these preliminary results were sufficient to inform or change clinical practice.

In response, Dr. Teeter explained that the pilot study was conducted at a time when the literature was unclear about how patients would respond to open versus endovascular occlusion, and these data helped guide further research and resuscitation efforts.

“At our center there has been a marked change in practice regarding which patients receive resuscitative thoracotomy and which get REBOA,” he added and concluded that “these and previous data suggest that the time performing thoracotomy for resuscitation purposes may be better spent performing CPR with REBOA.”

At the very least, this pilot study demonstrated that “REBOA may be an acceptable alternative to RTACC.” Further analysis of larger study populations will be published soon and will show that REBOA may be preferred over RTACC, according to Dr. Teeter.

In a subsequent presentation, David Hampton, MD, a surgical critical care fellow at the University of Maryland Medical Center’s R Adams Cowley Shock Trauma Center, confirmed that many recent studies have demonstrated that REBOA is a comparable alternative to emergency thoracotomy. In fact, REBOA is commonly used in Japan, the United Kingdom, and northern Europe; in the United States, however, it is currently used only at a few Level 1 trauma centers and in the military, according to Dr. Hampton.

A major hindrance to more widespread REBOA use in the United States is the lack of endovascular training for surgeons during residency, which has resulted in a limited number of surgeons who can perform the REBOA technique and a limited number who can teach the procedure to others, said Dr. Hampton.

In lieu of experience, formalized 1- or 2-day endovascular simulation courses, such as the BEST (Basic Endovascular Skills for Trauma) course, were created to prepare surgeons to use techniques such as REBOA. Prior validation studies, including those conducted by researchers at the University of Maryland, demonstrated that surgeons who participated in these courses improved their surgical technique and increased their surgical knowledge base, Dr. Hampton reported.

To further elucidate the benefits of these training courses on the successful use of REBOA in the acute care setting, Dr. Hampton and his associates selected nine acute care surgeons with varying endovascular surgical experience to complete the 1-day BEST course and then compared surgeons’ performances of the REBOA technique after successful course completion.

During the study, a total of 28 REBOA procedures were performed, 17 by the surgeons with no endovascular experience, and the remaining 11 by surgeons with endovascular surgical experience.

Overall, there was no difference in wire placements, sheath insertion, position or localization of balloons, or balloon inflation. In addition, there was no difference in mortality among patients, and there were no known REBOA complications during this study.

In conclusion, endovascular experience during residency is not a prerequisite for safe REBOA placement, Dr. Hampton commented.

Taken together, these two research studies are really helping to break ground on REBOA use in the acute care setting, commented an audience member.

The Department of Defense funded Dr. Teeter’s study. Dr. Teeter and Dr. Hampton both reported having no disclosures.

 

 

AT THE ACS CLINICAL CONGRESS

Vitals

 

Key clinical point: REBOA and RTACC had similar outcomes in a small pilot study.

Major finding: More RTACC patients survived to the operating room (53% for REBOA vs. 68% for RTACC), but more REBOA patients experienced return of spontaneous circulation (53% for REBOA vs. 37% for RTACC).

Data source: Pilot study involving 36 trauma patients who received either RTACC or REBOA.

Disclosures: The Department of Defense funded Dr. Teeter’s study. Dr. Teeter and Dr. Hampton both reported having no disclosures.

How to Tweet: a guide for physicians

Social media, and Twitter in particular, is reshaping the practice of medicine by bringing physicians, scientists, and patients together on a common platform. With the pressures for providers to remain current with new clinical developments within the framework of health reform and to navigate the shift from volume- to value-based, patient-centered care, immediate access to a dynamic information-exchange medium such as Twitter can have an impact on both the quality and efficiency of care.


JCSO 2016;14(10):440-443

Ustekinumab leads to high 6-week clinical response and 44-week remission rates

Intravenous induction with the monoclonal antibody ustekinumab produced significantly higher 6-week clinical response rates in patients with moderately to severely active Crohn’s disease, compared with placebo.

 

In addition, among those who had achieved clinical response to induction therapy, subcutaneous administration of ustekinumab led to high 44-week remission rates.

These results are based on a phase III development program that included two 8-week induction trials, UNITI-1 and UNITI-2, and one 44-week maintenance trial, IM-UNITI. Together, the trials represented 52 weeks of continuous therapy, explained lead investigators Brian Feagan, MD, of the Robarts Research Institute, Western University, London, Ont., and William Sandborn, MD, of the University of California, San Diego, and their associates (N Engl J Med. 2016 Nov 16. doi: 10.1056/NEJMoa1602773).

All three trials were global, multisite, double-blind, placebo-controlled studies involving adults who had Crohn’s Disease Activity Index (CDAI) scores ranging from 220 to 450 out of 600, with higher scores indicating more severe disease.

“At week 0, patients in both induction trials were randomly assigned, in a 1:1:1 ratio, to receive a single intravenous infusion of 130 mg of ustekinumab, a weight-range–based dose that approximated 6 mg of ustekinumab per kilogram of body weight, or placebo,” Dr. Feagan, Dr. Sandborn, and their associates wrote.

A total of 741 patients participated in UNITI-1, while 628 patients participated in the UNITI-2 trial. Baseline and disease characteristics were similar among all groups, according to the researchers.

In UNITI-1, the percentage of patients who achieved the study’s primary endpoint of 6-week clinical response (defined as a 100-point decrease from baseline CDAI score or total CDAI score less than 150) was significantly higher in the groups that received ustekinumab at a dose of either 130 mg or 6 mg/kg (34.3% and 33.7%, respectively) than in the placebo group (21.5%).
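
As a concrete reading of that response criterion, the small hypothetical helper below encodes the same rule (a decrease of at least 100 points from the baseline CDAI score, or a total score under 150). It is an illustration only, not code from the trial, and the example scores are invented.

```python
def six_week_clinical_response(baseline_cdai: float, week6_cdai: float) -> bool:
    """UNITI-style 6-week clinical response: a decrease of at least 100 points
    from the baseline CDAI score, or a total CDAI score below 150.
    Illustrative helper only; not trial code."""
    return (baseline_cdai - week6_cdai) >= 100 or week6_cdai < 150

# Example (invented scores): a drop from 320 to 230 is only 90 points and the
# total stays above 150, so it does not count; a drop to 145 does.
print(six_week_clinical_response(320, 230))  # False
print(six_week_clinical_response(320, 145))  # True
```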

The absolute difference between the 130-mg ustekinumab group and the placebo group was 12.8 percentage points (95% confidence interval, 5.0-20.7; P = .002), and the absolute difference between the 6-mg/kg ustekinumab dose and placebo was 12.3 percentage points (95% CI, 4.5-20.1; P = .003), the investigators reported.

Similarly, in UNITI-2, the percentages of patients who achieved a 6-week clinical response also were significantly higher in the groups that received ustekinumab at a dose of either 130 mg or 6 mg/kg (51.7% and 55.5%, respectively), compared with the placebo group (28.7%).

The absolute difference between the 130-mg ustekinumab dose and placebo was 23.0 percentage points (95% CI, 13.8-32.1; P less than .001). Between the 6-mg/kg ustekinumab dose and placebo, the absolute difference was 26.8 percentage points (95% CI, 17.7-35.9; P less than .001).

“In the induction trials, both doses of ustekinumab were associated with greater reductions in and normalization of serum [C-reactive protein] levels than was placebo. The differences between ustekinumab and placebo were nominally significant and were observed as early as week 3 and persisted through week 8. Similar effects were observed for fecal calprotectin levels at week 6,” the investigators summarized.

In the maintenance trial IM-UNITI, patients from UNITI-1 and UNITI-2 (n = 1,281) who had a response to ustekinumab induction therapy at week 8 were randomly assigned, in a 1:1:1 ratio, to receive subcutaneous injections of 90 mg of ustekinumab every 8 weeks, 90 mg of ustekinumab every 12 weeks, or placebo through week 40.

In general, participants who received maintenance therapy with ustekinumab had significantly higher 44-week remission rates, compared with those who received placebo (53.1% for 8-week group, 48.8% for 12-week group, 35.9% for placebo).

“The rate of remission at week 44 was significantly higher among patients who entered maintenance in remission and who received treatment every 8 weeks – but not those who received treatment every 12 weeks – than among those who received placebo,” Dr. Feagan, Dr. Sandborn and their associates added.

Janssen Research and Development supported this study. Dr. Feagan, Dr. Sandborn, and 24 other investigators reported receiving financial compensation from various pharmaceutical companies, including Janssen. Nine of the investigators are employees of Janssen.

FROM THE NEW ENGLAND JOURNAL OF MEDICINE

Vitals

 

Key clinical point: Intravenous induction followed by subcutaneous administration of ustekinumab resulted in high clinical response and remission rates for patients with moderately to severely active Crohn’s disease.

Major finding: 53.1% of patients who received ustekinumab every 8 weeks achieved 44-week remission, compared with 35.9% for placebo.

Data source: Three multisite, double-blind, placebo-controlled studies.

Disclosures: Janssen Research and Development supported this study. Dr. Feagan, Dr. Sandborn, and 24 other investigators reported receiving financial compensation from various pharmaceutical companies, including Janssen. Nine of the investigators are employees of Janssen.

Simulation model favors hernia surgery over watchful waiting

Surgical repair of ventral hernias at the time of diagnosis is a more cost-effective approach than watchful waiting, according to the results of a state-transition microsimulation model presented by Lindsey Wolf, MD, at the annual clinical congress of the American College of Surgeons.

The benefit of surgical intervention, compared with observation or watchful waiting, for reducible ventral hernias is not well described, reported Dr. Wolf, a general surgery resident at Brigham and Women’s Hospital, Boston.

Decision analyses comparing lifetime outcomes after surgical repair of hernias versus watchful waiting are scarce. Dr. Wolf and her associates attempted to fill this gap by creating a Markov model that estimated outcomes and cost effectiveness for 100,000 simulated patients with any type of reducible ventral hernia who underwent either watchful waiting, open repair at diagnosis, or laparoscopic repair at diagnosis.

In the model, cost was represented in U.S. dollars and benefit was measured in quality-adjusted life years (QALYs). Both measures accumulated for individual patients over time and then were averaged and reported for the entire cohort of simulated patients, Dr. Wolf explained. Incremental cost effectiveness, expressed as a ratio of cost per QALY gained for each strategy, “provides context by allowing us to compare each strategy to the next best strategy,” she said.

Willingness to pay, a threshold set by the government that represents the maximum amount a payer is willing to spend for additional quality, was set at $50,000 per QALY, which is a “commonly accepted willingness-to-pay threshold,” she said.
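
As a rough illustration of how an incremental cost-effectiveness ratio is computed and judged against a willingness-to-pay threshold, the sketch below uses invented lifetime costs and QALYs; only the $50,000-per-QALY threshold comes from the presentation, and none of these numbers are inputs or outputs of Dr. Wolf’s model.

```python
# Hypothetical illustration of an incremental cost-effectiveness ratio (ICER).
# The costs and QALYs below are invented for demonstration only.

WTP = 50_000  # willingness-to-pay threshold, in dollars per QALY

strategies = {
    # strategy: (mean lifetime cost in dollars, mean QALYs), hypothetical values
    "watchful_waiting": (10_000, 20.00),
    "open_repair": (14_000, 20.20),
    "laparoscopic_repair": (16_000, 20.40),
}

def icer(strategy_a: str, strategy_b: str) -> float:
    """ICER of strategy_a relative to strategy_b: extra cost per extra QALY gained."""
    cost_a, qaly_a = strategies[strategy_a]
    cost_b, qaly_b = strategies[strategy_b]
    return (cost_a - cost_b) / (qaly_a - qaly_b)

ratio = icer("laparoscopic_repair", "watchful_waiting")
print(f"ICER: ${ratio:,.0f} per QALY gained")
print("cost effective at this threshold" if ratio <= WTP else "not cost effective")
```

With these made-up numbers the ratio works out to $15,000 per additional QALY, well under the $50,000 threshold, mirroring the kind of comparison reported later in the story.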

The model’s primary outcomes were lifetime costs, QALYs from time of diagnosis, and incremental cost-effectiveness ratios.

“We built a state-transition microsimulation model which represents the different health states a patient can occupy at any point in time,” Dr. Wolf reported. “Using a yearly cycle, a cohort of patients were simulated through the model one at a time.”

All patients entered the model in an asymptomatic state. For each year there was a probability for a patient to transition from the current state to another state in the model.

Patients who underwent surgical repair at diagnosis transitioned to the no-hernia state in the first year after surgery. Those in the watchful-waiting group stayed in the asymptomatic state, and each year there was a probability of becoming symptomatic. Of those who became symptomatic, there was a small probability of presenting with an incarcerated hernia and requiring emergent rather than elective surgery. Patients were subject to perioperative mortality rates as well as a background yearly risk of death.
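
For readers who want to see the mechanics, here is a minimal sketch of the kind of yearly-cycle state-transition microsimulation described above, written for the watchful-waiting arm. All of the transition probabilities, costs, and utilities are placeholders invented for illustration, not the inputs used in Dr. Wolf’s model.

```python
import random

# Minimal yearly-cycle state-transition (Markov) microsimulation of a
# watchful-waiting strategy. All probabilities, costs, and utilities below
# are illustrative placeholders, not the study's inputs.

P_SYMPTOMATIC = 0.05       # yearly chance an asymptomatic hernia becomes symptomatic
P_INCARCERATED = 0.03      # chance a symptomatic hernia presents incarcerated (emergent repair)
P_DEATH_ELECTIVE = 0.005   # perioperative mortality, elective repair
P_DEATH_EMERGENT = 0.05    # perioperative mortality, emergent repair
P_DEATH_BACKGROUND = 0.02  # background yearly risk of death
UTILITY = {"asymptomatic": 0.90, "symptomatic": 0.75, "no_hernia": 0.95}
COST = {"elective_repair": 12_000, "emergent_repair": 30_000}

def simulate_one_patient(max_years: int = 60) -> tuple:
    """Simulate one patient year by year; return (lifetime cost, lifetime QALYs)."""
    state, cost, qalys = "asymptomatic", 0.0, 0.0
    for _ in range(max_years):
        if random.random() < P_DEATH_BACKGROUND:
            break                      # death from background risk
        qalys += UTILITY[state]        # accrue one year of quality-adjusted life
        if state == "asymptomatic" and random.random() < P_SYMPTOMATIC:
            state = "symptomatic"
        if state == "symptomatic":     # symptomatic patients undergo repair
            emergent = random.random() < P_INCARCERATED
            cost += COST["emergent_repair"] if emergent else COST["elective_repair"]
            if random.random() < (P_DEATH_EMERGENT if emergent else P_DEATH_ELECTIVE):
                break                  # perioperative death
            state = "no_hernia"        # successful repair
    return cost, qalys

# Simulate the cohort one patient at a time, then average, as in the study design.
cohort = [simulate_one_patient() for _ in range(100_000)]
mean_cost = sum(c for c, _ in cohort) / len(cohort)
mean_qalys = sum(q for _, q in cohort) / len(cohort)
print(f"mean lifetime cost ${mean_cost:,.0f}, mean QALYs {mean_qalys:.2f}")
```

The repair-at-diagnosis strategies would be simulated the same way, with patients moving to the no-hernia state in the first year; hernia recurrence could be added as a transition back into the hernia states.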

Cohort characteristics, hospital and other costs, perioperative mortality, and quality of life were derived from best available published studies and the Nationwide Inpatient Sample, the largest all-payer inpatient care database in the United States.

Overall, laparoscopic surgery at diagnosis was the optimal hernia repair strategy, reported Dr. Wolf.

Although laparoscopic surgery was the most expensive strategy, it was also the most effective; compared with watchful waiting, the least expensive and least effective strategy, its incremental cost-effectiveness ratio was about $14,800 per QALY.

Open repair at diagnosis fell between the watchful-waiting and laparoscopic-repair strategies in terms of cost and effectiveness.

To understand the conditions under which the optimal strategy changed, the researchers performed sensitivity analyses using the net monetary benefit metric, which represents both costs and benefits in a single unit at a given willingness-to-pay threshold.
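
The net monetary benefit simply re-expresses each strategy on one dollar scale (willingness to pay times QALYs, minus cost), so strategies can be ranked directly. The brief sketch below reuses the same invented numbers as the earlier example, not study values.

```python
# Net monetary benefit: NMB = WTP * QALYs - cost. At a given willingness-to-pay
# threshold, the strategy with the highest NMB is preferred. Hypothetical values.

WTP = 50_000  # dollars per QALY

strategies = {  # strategy: (mean lifetime cost, mean QALYs), hypothetical values
    "watchful_waiting": (10_000, 20.00),
    "open_repair": (14_000, 20.20),
    "laparoscopic_repair": (16_000, 20.40),
}

nmb = {name: WTP * qalys - cost for name, (cost, qalys) in strategies.items()}
best = max(nmb, key=nmb.get)
print(f"preferred strategy at ${WTP:,}/QALY: {best}")
```

Recomputing this ranking while varying one input at a time, such as perioperative mortality, is in essence the one-way sensitivity analysis described next.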

“For a cohort of high-risk patients, once the perioperative risk of death exceeds 3.4%, watchful waiting becomes the preferred strategy,” Dr. Wolf said. Watchful waiting also was the preferred strategy when the yearly risk of recurrence exceeded 24%.

A sensitivity analysis comparing quality of life for elective open and laparoscopic repair revealed that, when quality-of-life measures were similar between the two surgical repair groups, the open repair became the preferred strategy.

Finally, the researchers performed a probabilistic sensitivity analysis by simulating the cohort of 100,000 patients 100 times; each run produced similar results, an indication that the findings are robust.

“In conclusion, we found that, for a typical cohort of patients with ventral hernia, laparoscopic repair at diagnosis is very cost effective. As long-term outcomes for open and laparoscopic repair were very similar in the model, the decision between laparoscopic and open surgery depends on surgeon experience and preference for one method over another,” said Dr. Wolf.

This study was funded by the Resident Research Scholarship awarded by the American College of Surgeons. Dr. Wolf reported having no disclosures.


Key clinical point: Surgical repair of ventral hernias at the time of diagnosis was more cost effective than watchful waiting.

Major finding: The incremental cost-effectiveness ratio for laparoscopic surgery, compared with watchful waiting, was about $14,800 per QALY.

Data source: A state-transition microsimulation model of 100,000 people.

Disclosures: This study was funded by the Resident Research Scholarship awarded by the American College of Surgeons. Dr. Wolf reported having no disclosures.

FDA approves tenofovir alafenamide for patients with chronic hepatitis B and liver disease

The Food and Drug Administration has approved tenofovir alafenamide (marketed as Vemlidy by Gilead Sciences) for the treatment of adults with chronic hepatitis B virus infection with compensated liver disease.

Tenofovir alafenamide is a novel, targeted prodrug of tenofovir that has demonstrated antiviral efficacy similar to tenofovir disoproxil fumarate (Viread) at significantly lower doses.

Approval for this drug was based on two international phase III clinical trials that together enrolled 1,298 treatment-naive and treatment-experienced adult patients with chronic hepatitis B virus infection, 425 of whom were HBeAg negative and 873 of whom were HBeAg positive. In both studies, participants were randomly assigned to receive either tenofovir alafenamide or tenofovir disoproxil fumarate, and tenofovir alafenamide met the primary endpoint of noninferiority to tenofovir disoproxil fumarate, according to a written statement from Gilead Sciences.

Compared with tenofovir disoproxil fumarate, tenofovir alafenamide has “greater plasma stability and more efficiently delivers tenofovir to hepatocytes,” which allows it to be administered in a daily dose of 25 mg, whereas tenofovir disoproxil fumarate requires a 300-mg dose to be as effective.

In addition, patients treated with tenofovir alafenamide demonstrated “improvements in certain bone and renal laboratory parameters.”

Overall, tenofovir alafenamide was well tolerated. Only 1% of patients discontinued treatment because of adverse events, and the most common adverse events were headache, abdominal pain, fatigue, cough, nausea, and back pain. Vemlidy has a boxed warning in its product label regarding the risks of lactic acidosis/severe hepatomegaly with steatosis and severe acute exacerbation of hepatitis B with discontinuation.

“Vemlidy is the first medication approved to treat this disease in nearly a decade,” said John Milligan, president and chief executive officer of Gilead Sciences. “We are excited to offer a new, effective option to help advance long-term care for patients.”


High protein intake moderately associated with improved breast cancer survival

Higher protein intake, particularly protein from animal sources, is associated with a modestly lower risk of breast cancer recurrence and death, regardless of insulin receptor status.

Using information gathered through biennial questionnaires from 6,348 women who were diagnosed with stage I to III breast cancer between 1976 and 2004, investigators found a significant inverse association between total protein intake and distant breast cancer recurrence (P = .02). This association was driven specifically by protein from animal sources (P = .003) rather than vegetable sources.

Furthermore, 5-year recurrence-free survival for women in the highest quintile of protein consumption was 94.0%, compared with 92.1% for women in the lowest quintile; the corresponding 10-year rates were 87.4% and 83.3%, respectively, reported Michelle Holmes, MD, of Brigham and Women’s Hospital, Boston, and her associates (J Clin Oncol. 2016 Nov 7. doi: 10.1200/JCO.2016.68.3292).
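
As a quick arithmetic check of how modest those differences are, the survival figures quoted above translate into absolute gaps of roughly 2 and 4 percentage points; the short Python snippet below simply restates the numbers from this paragraph.

# Recurrence-free survival (percent), highest vs. lowest quintile of protein intake
rfs5_high, rfs5_low = 94.0, 92.1
rfs10_high, rfs10_low = 87.4, 83.3

print(f"5-year absolute difference: {rfs5_high - rfs5_low:.1f} percentage points")     # 1.9
print(f"10-year absolute difference: {rfs10_high - rfs10_low:.1f} percentage points")  # 4.1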

Pathology records were reviewed and histology samples were analyzed for insulin receptor and estrogen receptor expression. Associations between breast cancer recurrence and protein intake, amino acids, or protein-containing food groups did not differ by tumor receptor status or body mass index at time of cancer diagnosis. Given that the association between protein intake and recurrence was not confined to tumors expressing insulin receptors, “It is difficult to invoke the insulin pathway as a mechanism to explain these findings,” the investigators wrote.

Given the only “modest survival advantage” of higher protein intake among women with breast cancer, and given the “challenges involved in randomized trials of diet, this association is unlikely to ever be definitively tested in a randomized trial,” Dr. Holmes and her associates wrote.

“However, the modest survival advantage with higher protein intake has been found in several studies, and we feel it is important that patients with breast cancer and their clinicians know this. At the least, it may provide reassurance that consuming protein-containing foods is not likely to increase the risk of breast cancer recurrence,” the researchers concluded.

This study was sponsored by grants from the National Institutes of Health. Dr. Holmes and one other investigator reported receiving financial compensation from Bayer HealthCare Pharmaceuticals.


Key clinical point: Higher intake of protein, particularly protein from animal sources, is associated with lower risk of breast cancer death and tumor recurrence.

Major finding: The 5-year recurrence-free survival for women in the highest quintile of protein consumption was 94.0%, while those in the lowest quintile of protein consumption had 5-year recurrence-free survival of 92.1%.

Data source: Biennial questionnaires for 6,348 women diagnosed with stage I to III breast cancer between 1976 and 2004.

Disclosures: This study was sponsored by grants from the National Institutes of Health. Dr. Holmes and one coinvestigator reported receiving financial compensation from Bayer HealthCare Pharmaceuticals.

Tobacco-related cancer incidence, mortality drop

Tobacco-related cancer incidence and mortality rates dropped from 2004 to 2013, the Centers for Disease Control and Prevention reported in the Morbidity and Mortality Weekly Report published on Nov. 11, 2016.

Overall, tobacco-related invasive cancer incidence decreased from 206 cases per 100,000 during 2004-2008 to 193 cases per 100,000 during 2009-2013.

Tobacco-related cancer mortality also declined from 108 deaths per 100,000 during 2004-2008 to 100 per 100,000 during 2009-2013.
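
Taken together, those figures correspond to relative declines of roughly 6% in incidence and 7% in mortality between the two periods; the small Python snippet below simply recomputes those percentages from the rates quoted above.

# Tobacco-related cancer rates per 100,000, 2004-2008 vs. 2009-2013
incidence_old, incidence_new = 206, 193
mortality_old, mortality_new = 108, 100

print(f"Relative decline in incidence: {100 * (incidence_old - incidence_new) / incidence_old:.1f}%")  # about 6.3%
print(f"Relative decline in mortality: {100 * (mortality_old - mortality_new) / mortality_old:.1f}%")  # about 7.4%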

These data mark a continuation of the downward trend observed since the 1990s, lead author S. Jane Henley, MSPH, and her associates at the CDC wrote in the report (MMWR. 2016 Nov 11;65[44]:1212-8).

Despite this continued decline in tobacco-related cancer incidence and mortality rates, the tobacco-related cancer burden remains high, and disparities in the rates and decline of tobacco-related cancer persist.

“Tobacco use remains the leading preventable cause of disease and death in the United States,” reported Henley and her associates. For each year between 2009 and 2013, an estimated 660,000 Americans were diagnosed with tobacco-related cancer, and an estimated 343,000 people died from those cancers, according to the investigators’ analysis of data collected by the United States Cancer Statistics working group, which compiles data from multiple nationwide sources including the National Program of Cancer Registries and the National Cancer Institute’s Surveillance, Epidemiology, and End Results program.

Tobacco-related cancer incidence and deaths were higher among men than women and higher among blacks than any other ethnic group. However, the cancer incidence and mortality rates also declined the fastest among men and blacks, compared with women and other ethnic groups, respectively.

Cancer incidence and death rates were also highest and decreased most slowly in counties with the lowest educational attainment or highest poverty. Conversely, cancer incidence and mortality were lowest and decreased most quickly in metropolitan areas with populations greater than 1 million people.

Given that an estimated 40% of cancers diagnosed in the country and 3 in 10 cancer deaths are attributable to cigarette smoking and the use of smokeless tobacco, it is imperative that the CDC implement programs to help the almost 6 million smokers quit, CDC director Tom Frieden, MD, said in an associated telebriefing. Most people who smoke want to quit, and the health care system should do all it can to help them, Dr. Frieden said. At the same time, he echoed a claim from Henley’s paper, which said many tobacco-related cancers could be prevented by reducing tobacco use through implementation of evidence-based tobacco prevention and control interventions, such as increasing tobacco product prices, enforcing smoke-free laws, and promoting anti-tobacco mass media campaigns. These programs should be tailored to local geographic areas and demographics given the continued inconsistent progress and persistent disparities in tobacco-related cancer incidence and mortality, Dr. Frieden added.
 


 

Key clinical point: Tobacco-related cancer incidence and mortality rates continued to decline from 2004 to 2013.

Major finding: Tobacco-related cancer mortality dropped from 108 deaths per 100,000 during 2004-2008 to 100 per 100,000 during 2009-2013.

Data source: Retrospective analysis of United States Cancer Statistics data for 2004 to 2013.

Disclosures: This study was sponsored by the Centers for Disease Control and Prevention. The authors’ disclosures were not reported.

New criteria estimate systemic sclerosis to be more prevalent in primary biliary cholangitis patients

The use of outdated criteria for the identification of systemic sclerosis in primary biliary cholangitis likely led to an underestimation of the comorbidity’s prevalence.

Furthermore, more recent criteria estimate the prevalence of systemic sclerosis in primary biliary cholangitis to be around 23%.

[[{"fid":"172520","view_mode":"medstat_image_flush_right","fields":{"format":"medstat_image_flush_right","field_file_image_alt_text[und][0][value]":"Clinical appearance of acrosclerotic piece-meal necrosis of the first digit in a patient with systemic sclerosis.","field_file_image_credit[und][0][value]":"BMC Dermatology 2004, 4:11. doi:10.1186/1471-5945-4-11 ","field_file_image_caption[und][0][value]":""},"type":"media","attributes":{"class":"media-element file-medstat_image_flush_right"}}]]In 1980, the American College of Rheumatology defined “highly specific but not sensitive” criteria for the identification of primary biliary cholangitis patients who also had systemic sclerosis, reported the study’s lead investigator Dr. Boyang Zheng of the University of Montreal Hospital Center and associates (J Rheumatol. 2016 Oct 28. doi: 10.3899/jrheum.160243).

In 2001, LeRoy and Medsger proposed and validated a new set of criteria that centrally required the observation of Raynaud phenomenon and “allowed for greater sensitivity and diagnosis of earlier disease by incorporating advances in nailfold capillary microscopy and [systemic sclerosis]–specific antibodies,” the investigators wrote.

Most recently, the ACR and the European League Against Rheumatism jointly developed new “weighted-point criteria endorsed for use in systemic sclerosis inclusion studies.” These new criteria, which were published in 2013, require “the addition of at least a clinical or radiological feature to be positive.”

The purpose of this study, the first of its kind, according to investigators, was to compare the prevalence estimates of systemic sclerosis in primary biliary cholangitis patients as predicted by each of the three criteria sets.

A total of 100 patients who had previously been diagnosed with primary biliary cholangitis but not systemic sclerosis were recruited into the study. The majority of the patients were female (91%), the mean age at first visit was 57 years, and the mean primary biliary cholangitis Mayo score of disease severity and survival was 4.14.

At time of study enrollment, medical histories were obtained. All patients also underwent nailfold capillary microscopy, and serum samples were collected and analyzed for the presence of primary biliary cholangitis antibodies and the following systemic sclerosis–specific antibodies: anti–CENP-B, anti–topo I, anti–RNAP III, anti-Th/To.

Clinical data, presence of antibodies, and capillaroscopic patterns were analyzed, and patients were retrospectively evaluated for fulfillment of each of the three systemic sclerosis criteria sets.

“A total of 23 patients satisfied at least one set of criteria, with 22 being positive for LeRoy and Medsger criteria, 17 for ACR/EULAR criteria, and only 1 for the ACR 1980 criteria,” Dr. Zheng and his associates reported.
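
Because the cohort contained exactly 100 patients, each of those counts maps directly onto a prevalence estimate; the short Python snippet below restates the figures from the preceding sentence (the roughly 23% overall estimate corresponds to the 23 patients meeting at least one criteria set).

cohort_size = 100
patients_meeting_criteria = {
    "any criteria set": 23,
    "LeRoy and Medsger (2001)": 22,
    "ACR/EULAR (2013)": 17,
    "ACR (1980)": 1,
}

for criteria, count in patients_meeting_criteria.items():
    print(f"{criteria}: {100 * count / cohort_size:.0f}% of the cohort")  # 23%, 22%, 17%, 1%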

The most frequent systemic sclerosis–associated features in the study population were Raynaud phenomenon (39%), systemic sclerosis antibodies (26%), abnormal nailfold capillary microscopy (20%), and capillary telangiectases (17%), while clinically evident skin changes were the least common, the investigators explained.

The 1980 ACR criteria likely led to an underestimation of systemic sclerosis in primary biliary cholangitis, and given the benefit of early diagnosis and treatment of systemic sclerosis, patients with primary biliary cholangitis should be screened for Raynaud phenomenon and systemic sclerosis antibodies and undergo nailfold capillary microscopy, the investigators recommended.

“Clinicians need to remain alert for this sometimes insidious comorbidity,” the researchers added.

Dr. Zheng had no relevant financial disclosures.


 

Key clinical point: The use of outdated criteria for the identification of systemic sclerosis in primary biliary cholangitis likely led to an underestimation of the comorbidity’s prevalence.

Major finding: The prevalence of systemic sclerosis in primary biliary cholangitis, according to new criteria, is around 23%.

Data source: Evaluation of systemic sclerosis in 100 patients previously diagnosed with primary biliary cholangitis.

Disclosures: Dr. Zheng had no relevant financial disclosures.

ASCO: Patients with advanced cancer should receive palliative care within 8 weeks of diagnosis

Patients with advanced cancer should receive dedicated palliative care services early in the disease course, concurrently with active treatment, according to the American Society of Clinical Oncology’s new guidelines on the integration of palliative care into standard oncology care.

Ideally, patients should be referred to interdisciplinary palliative care teams within 8 weeks of cancer diagnosis, and palliative care should be available in both the inpatient and outpatient setting, recommended ASCO.

The guidelines, which updated and expanded the 2012 ASCO provisional clinical opinion, were developed by a multidisciplinary expert panel that systematically reviewed phase III randomized controlled trials, secondary analyses of those trials, and meta-analyses that were published between March 2010 and January 2016.

According to the panel, essential components of palliative care include:
 

• Rapport and relationship building with patient and family caregivers.

• Symptom, distress, and functional status management.

• Exploration of understanding and education about illness and prognosis.

• Clarification of treatment goals.

• Assessment and support of coping needs.

• Assistance with medical decision making.

• Coordination with other care providers.

• Provision of referrals to other care providers as indicated.

The panel makes the case that not only does palliative care improve care for patients and families, it also likely reduces the total cost of care, often substantially. However, “race, poverty and low socioeconomic and/or immigration status are determinants of barriers to palliative care,” wrote the expert panel, which was cochaired by Betty Ferrell, PhD, of the City of Hope Medical Center, Duarte, Calif., and Thomas Smith, MD, of the Sidney Kimmel Comprehensive Cancer Center in Baltimore.

While it was not “within the scope of this guideline to examine specific factors contributing to disparities,” the panel urged health care providers to be aware of the paucity of health disparities research on palliative care and to “strive to deliver the highest level of cancer care to these vulnerable populations.”

