A review of the recent literature confirms that “comparative effectiveness” research—studies designed to help physicians use existing treatments and treatment strategies more effectively—is severely lacking.
Less than a third of the studies published in the six top general and internal medicine journals qualified as comparative effectiveness (CE) research. This finding “supports concerns that only limited clinical research is currently devoted to helping physicians” improve the use of existing therapies and determine which interventions and strategies are the most effective, safe, and cost-effective, said Dr. Michael Hochman and Dr. Danny McCormick of Cambridge (Mass.) Health Alliance and Harvard Medical School, Boston.
Congress recently passed legislation to provide more than $1 billion to support CE studies, and President Obama's budget for 2011 recommends further funding of CE research. Noting that few data are available on the current status of CE research, Dr. Hochman and Dr. McCormick reviewed all clinical studies assessing medications that were published between June 2008 and October 2009 in the six “highest impact” medical journals: New England Journal of Medicine, Lancet, JAMA, Annals of Internal Medicine, British Medical Journal, and Archives of Internal Medicine.
These publications “are the most widely read, quoted, and covered by the media, and thus are disproportionately likely to influence clinicians,” the researchers said (JAMA 2010;303:951-8).
Of the 328 randomized trials, observational studies, and meta-analyses of medications included in the analysis, only 104 (32%) were CE studies.
Only 11% of the CE studies compared medications with nonpharmacologic treatments, confirming a relative scarcity of such research. These comparisons are particularly important because they help clinicians “make fundamental therapeutic decisions,” Dr. Hochman and Dr. McCormick said.
Nearly 90% of the CE studies relied on noncommercial funding, primarily from government sources, a finding that highlights how essential such funding is. “Commercial entities presumably devote much of their research to the development of novel therapies and to funding inactive-comparator studies aimed at expanding indications for their products,” they noted.
Most of the randomized trials in the analysis used an “inactive comparator” such as placebo rather than comparing a medication against existing treatments. These trials were disproportionately likely to be commercially funded and to report positive results for the study medication.
In addition, 24% of the randomized trials that did use an active comparator sought only to show a medication's noninferiority to that comparator, making no attempt to identify the optimal therapy. Such trials were exclusively funded by commercial sources.
Only 19% of the CE studies focused on patient safety, suggesting that safety concerns are not adequately emphasized.
Only 2% of the CE studies and 1% of all studies in the analysis included formal cost-effectiveness analyses, which are critical to promoting efficient health care. This absence “may reflect policies or editorial priorities of journal editors favoring publication of clinical outcome reports rather than a true dearth of cost-effectiveness studies,” the authors said.
Overall, the findings “underscore the importance of the recent legislation passed in the United States to expand public funding for CE studies. In particular, our findings suggest government and noncommercial support should be increased for studies involving nonpharmacologic therapies, for studies comparing different therapeutic strategies, and for studies focusing on the comparative safety and cost of different therapies,” Dr. Hochman and Dr. McCormick said.
Disclosures: The investigators reported no financial conflicts of interest.