About half (51%) of clinical trials for high-risk cardiovascular devices remain unpublished more than 2 years after FDA approval, say researchers from Massachusetts General Hospital in Boston; University of California, Davis, in Sacramento; University of California, San Francisco; and University of Sydney in Australia. Moreover, high-risk devices are often approved on the basis of a single study.
The researchers conducted a study to examine the extent of selective reporting for medical devices. The FDA documents containing the evidence supporting device approval are available but not easy to access, they note, so the researchers instead compared clinical trial data in FDA summaries with the information in corresponding peer-reviewed publications.
The researchers found summaries for 106 cardiovascular devices approved between 2000 and 2010. Of the 177 studies described in those summaries, 86 were published, corresponding to 60 distinct devices; the published pivotal studies corresponded to those same 60 devices. The researchers contacted 23 manufacturers to request publication references; 8 (35%) responded, confirming that the trials of interest had not been published. A subgroup analysis restricted to pivotal studies found corresponding publications for 66 of 112 (59%).
The average time from FDA approval to publication in a peer-reviewed journal was 6.5 months, though some studies took as long as 7.5 years to appear. For the 66 published pivotal studies, the average time to publication was 7.9 months; 22 (33%) were published before FDA approval. All publications that specified a funding source were industry funded.
Reassuringly, the researchers say, the summaries and publications were nearly identical on clinical design features such as randomization, blinding, and number of centers.
However, in one-quarter of the published studies, the number of patients enrolled differed between the summary and the publication. Demographic information also differed: age in 11% of studies and gender in 16%.
Endpoints labeled as primary in the summaries were sometimes reported as secondary in the publications, and nearly 10% of primary endpoints were missing from the publications entirely. Those alterations, the researchers say, mean that primary endpoints were “rebranded” to modify the emphasis of the findings in the literature.
Fewer than half the results for primary endpoints were identical in both summaries and published studies, and 11% were “substantially different,” the researchers say.
Although systematic reviews increasingly serve as a basis for evidence-based clinical practice guidelines, documents from regulatory agencies are seldom included, the researchers say. Thus, the researchers warn, guidelines might not be based on complete and accurate information. But since clinicians can use devices immediately after FDA approval, the researchers point out, it’s also in the public interest that all the data be immediately available.
Source
Chang L, Dhruva SS, Chu J, Bero LA, Redberg RF. BMJ. 2015;350:h2613. doi: 10.1136/bmj.h2613.