Evidence-based medicine depends on quality evidence

Efforts to improve the quality of health care often emphasize evidence-based medicine, but flaws in how research is designed, conducted, and reported make this "a great time for skeptics, in looking at clinical trials," according to Dr. J. Russell Hoverman.

Multiple studies in recent years suggest that increasing influence from industry and researchers’ desire to emphasize positive results, as well as other factors, may be distorting choices about which studies get done and how they get reported, said Dr. Hoverman, a medical oncologist and hematologist at Texas Oncology in Austin, Tex.


If researchers don’t improve the way they conduct and assess clinical trials, a lot of money could be wasted on misguided research, he said at a quality care symposium sponsored by the American Society of Clinical Oncology.

He’s not the only one making the case. Physicians at Yale recently argued for greater transparency in pharmaceutical industry–sponsored studies to improve the integrity of medical research (Am. J. Public Health 2012;102:72-80).

Over the last three decades, sponsorship of breast, colon, and lung cancer studies by for-profit companies increased from 4% to 57%, Dr. Hoverman noted. Industry sponsorship was associated with trial results that endorsed the experimental agent, according to one study (J. Clin. Oncol. 2008;26:5458-64).

A separate study showed that abstracts of study results presented at major oncology meetings before final publication were discordant with the published article 63% of the time. In 10% of cases, the abstract and article presented substantially different conclusions (J. Clin. Oncol. 2009;27:3938-44).

One example of this was a trial of a cancer treatment regimen using gemcitabine, cisplatin, and bevacizumab. The investigators initially released an early abstract reporting an improvement in progression-free survival using the regimen. "That actually changed some [oncologists’] practices," he noted. But that was before the study reached its main outcome measure – overall survival – which, in the end, did not improve significantly with the new regimen.

Only half of phase II clinical trials with positive findings led to positive phase III trials, another study found. Industry-sponsored trials were also far more likely than other trials to report positive findings – 90% vs. 45% (J. Clin. Oncol. 2008;26:1511-8).

When reading or interpreting abstract summaries from a medical conference, "one needs to be a little careful," Dr. Hoverman advised.

Yet another study found that only 45% of randomized clinical trials were registered, even though the International Committee of Medical Journal Editors has required registration since 2005 as a condition of publication in participating journals.

Among the registered studies, 31% showed discrepancies between the outcomes investigators said they would study and the outcomes they published. Half of the studies with discrepancies could be assessed to determine the reason; in 83% of those, the changes appeared to favor statistically significant findings in the published article (JAMA 2009;302:977-84).

One set of experts from within industry and from Johns Hopkins University called for "transformational change" in how randomized clinical trials are conducted (Ann. Intern. Med. 2009;151:206-9).

"Without major changes in how we conceive, design, conduct, and analyze randomized controlled trials, the nation risks spending large sums of money inefficiently to answer the wrong questions, or the right questions too late," Dr. Hoverman said.

"In fact, we probably can’t do randomized clinical trials on everything we want to know about. It’s simply impossible. There’s not enough money, and many things involve competing industries or competing members within an industry," making it unlikely that some head-to-head comparisons will ever be done, he added. "So, we are challenged to make decisions based on evidence."

The broader challenge for clinicians and researchers will be to improve the quality and integrity of medical studies while maintaining a healthy skepticism about the available evidence. Medicine has always been an art and a science. Where the science behind medicine is lacking, the art takes over.

Dr. Hoverman reported having no financial disclosures.

-- Sherry Boschert

s.boschert@elsevier.com

On Twitter @sherryboschert
