BOSTON—The interpretation of clinical trial results can stray from the data in many ways. Spin (ie, stressing an experimental treatment’s advantages) may or may not be intentional on the part of researchers or press-release writers, but clinicians evaluating the results should not let it distract them from the key characteristics of a meaningful trial. Several strategies can help keep the facts in focus, according to a researcher.
“Here are some words that should put you on alert: ‘revolutionary,’ ‘groundbreaking,’ and ‘first-line.’ It is time to be cautious when you are hearing the spin and the results at the same time,” said Elizabeth W. Loder, MD, MPH, Professor of Neurology at Harvard Medical School in Boston. At the 59th Annual Scientific Meeting of the American Headache Society, Dr. Loder spoke about migraine prevention trials, but she allowed that her remarks are relevant to any clinical trial.
Guidelines Aim to Increase Objectivity
The potential for overinterpretation, misinterpretation, or misleading interpretation of trial results was reduced greatly in 2005. At that time, the International Committee of Medical Journal Editors agreed that member journals would consider trials for publication only if they had been registered, with their methodology defined, before study initiation. Establishing the trial design and primary end points in advance makes selective reporting and data manipulation more difficult. The approach, however, does not eliminate the potential for spin, said Dr. Loder. “The trial registrations on sites like ClinicalTrials.gov are easy to find, and it is worth looking back to compare what was registered to what was reported. There can be some surprises,” Dr. Loder explained.
One such surprise is a discrepancy between the prespecified outcomes and the outcomes that the researchers stress at the study’s conclusion. The peer-review process of a high-quality journal limits claims based on secondary outcomes, but press releases are under no such constraint. In addition, favorable reporting on outcomes that did not appear in the trial registration should arouse suspicion. “It is fair to include data on outcomes that were not prespecified, but they should be flagged. These are hypothesis-generating and should not be given the same weight as those prespecified,” Dr. Loder explained.
Guidelines to improve the objectivity of data gathered and reported for trials are growing increasingly rigorous, according to Dr. Loder. For headache prevention trials, the International Headache Society has issued specific recommendations about trial conduct and the measurement of end points. Although Dr. Loder conceded that strict constraints may make reports of trial results formulaic or tedious, the consistency of the formula, which progresses from an introduction through methods, results, discussion, and conclusions, makes the findings easier to interpret and to place into context.
Data Should Guide Interpretation of Results
A paper’s discussion section may cloud the reader’s understanding of the trial’s findings, Dr. Loder cautioned. In a properly reported study, the results section confines itself to the facts; in the discussion section, interpretation of those facts varies with perspective. The authors’ perception of the relative benefit of a favorable outcome, or of the burden of an adverse event, is subjective, and the potential for intentional or unintentional spin is substantial.
“Examples of spin include focusing on an outcome [that] the trial was not designed to study, focusing on subgroups rather than [on] the overall population, and downplaying adverse safety data,” explained Dr. Loder. She cited several studies that compared reader reactions to abstracts with and without spin; the spin proved persuasive. Moreover, she noted that spin in abstracts is typically passed on in press releases, news stories, and other accounts of the studies.
One strategy for remaining circumspect about new data is to consult one of the many watchdog organizations that monitor how clinical data are collected, analyzed, and reported. One such organization is HealthNewsReview.org, whose editorial team routinely critiques claims made about drugs, devices, vitamins, and surgical procedures. According to Dr. Loder, the website has examined migraine therapies and provided a perspective fully independent of the trials’ sponsors, their authors, and sometimes of the prevailing view.
Pure objectivity may hold little appeal for those who want to draw attention to their research, and spin is hard to resist when crafting an engaging narrative. Whether or not those who stress a trial’s most favorable findings are conscious of their disservice to scientific inquiry, systematic reviews of study data have repeatedly documented spin. Dr. Loder cited one study that found spin in 47% of 498 press releases on scientific articles.
“There were various types of spin, but 19% of the press releases failed to acknowledge that the primary end point was not statistically significant,” Dr. Loder noted. When abstracts that provided the basis for the press releases were analyzed, 40% were found to contain spin.
The Value of Common Sense
Randomized controlled trials are considered the gold standard for objectively evaluating most treatment strategies, but Dr. Loder cautioned that this design by itself is not enough to ensure reproducible results. The results of a study should report not only how many patients were randomized, but also how many received treatment and how many were followed to the trial’s end. Low enrollment and high dropout rates are red flags that critical reading can detect.
“There really is no substitute for common sense,” Dr. Loder said. She suggested that studies addressing all of the standard points of discussion, such as the generalizability of the results, the limitations of the design, the statistical significance of the findings, and a fair accounting of benefits and hazards, establish a credibility that a discerning reader can recognize.
“For clinicians considering how to interpret results, one question to ask is whether the patients enrolled are representative of the ones that are in front of you,” Dr. Loder suggested.
A critical view of new data helps clinicians avoid the fads that some critics have observed in the treatment of headache and in clinical medicine overall. Typically, excessive enthusiasm about positive trial results is followed by a period of disillusionment before clinicians finally arrive at a realistic perspective on the strengths and weaknesses of a new therapeutic option. Warning of a coming wave of headache trial results, including studies of devices, apps, and new drugs, Dr. Loder urged clinicians to read the studies rather than the press releases, applying the criteria that define a well-designed and fairly reported trial.
—Theodore Bosworth
Suggested Reading
Tfelt-Hansen P, Pascual J, Ramadan N, et al. Guidelines for controlled trials of drugs in migraine: third edition. A guide for investigators. Cephalalgia. 2012;32(1):6-38.
Yavchitz A, Boutron I, Bafeta A, et al. Misrepresentation of randomized controlled trials in press releases and news coverage: a cohort study. PLoS Med. 2012;9(9):e1001308.