If we were to try to identify a Zeitgeist (spirit of the time) in society, one possible answer would be data. In the field of clinical research this could mean data that is collected, not collected, public, hidden from view, published, not published—the list of issues connected to data is almost endless.
In this editorial, we would like to examine clinical research data from 3 different perspectives. What happens when there is no data available? Or when only incomplete data can be accessed? Or when all of the data is in the public realm but is uncritically taken at face value?
There is currently a groundswell of opinion that the transparency of clinical trial data needs to be tackled. This campaign is particularly strong in the United Kingdom, where the British Medical Journal and advocacy groups like www.alltrials.net have gained prominence. Ben Goldacre, author of the recent book Bad Pharma, goes so far as to say, “The problem of missing trials is one of the greatest ethical and practical problems facing medicine today.”1
Here in the United States we also have issues with data. One study from 2009 found that the results of only 44% of trials conducted in the United States and Canada were published in the medical literature.2 However, that study examined general medicine; how are we faring in orthopedics? A study from 2011 identified orthopedic trauma trials registered on www.clinicaltrials.gov and followed them up to see whether they were published within a reasonable timeframe.3 The result? Only 43.2% of the orthopedic trauma trials studied resulted in a publication, a figure that almost exactly mirrors the findings from the general medicine study.
Data that is not released obviously skews the evidence available to us as clinicians and researchers. More insidious still is incomplete data, as it gives a false picture to anyone reading the original study or to a researcher who wants to include the study in a meta-analysis. We are all aware of the difficulty of achieving complete patient follow-up because, ironically, we as surgeons have enabled our patients to walk away from the study. How should we best deal with these gaps in our knowledge? Statistical techniques have been developed to deal with just this problem.
One set of researchers looked at how missing data were handled in intention-to-treat analyses of orthopedic randomized clinical trials.4 They took 1 published trial of displaced midshaft clavicular fractures and recalculated its results using a different method of handling patients lost to follow-up: the last observation carried forward (LOCF) technique, in which a patient’s most recent recorded value stands in for the missing one, rather than the original method of excluding those patients from the analysis. This change in approach altered the statistical significance of the nonunion and overall complication results. However, the use of these various methods to deal with missing data in intention-to-treat analysis is itself the subject of some controversy in orthopedic clinical research.5
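To make the distinction concrete, here is a minimal sketch of the two approaches. The patient identifiers and outcome scores are invented for illustration only; they are not data from the cited trial.

```python
# Minimal sketch contrasting two ways of handling patients lost to follow-up.
# All identifiers and outcome scores are invented for illustration; they are
# not data from the cited clavicular fracture trial.

patients = [
    # (patient id, outcome score at each follow-up visit; None = missed visit)
    ("P1", [62, 75, 88]),
    ("P2", [60, 71, None]),    # lost to follow-up after the second visit
    ("P3", [58, None, None]),  # lost to follow-up after the first visit
]

def complete_case(patients):
    """Exclusion approach: drop any patient missing the final observation."""
    return [visits[-1] for _, visits in patients if visits[-1] is not None]

def locf(patients):
    """Last observation carried forward: a patient's most recent recorded
    value substitutes for the missing final observation."""
    finals = []
    for _, visits in patients:
        recorded = [v for v in visits if v is not None]
        finals.append(recorded[-1])  # carry the last known value forward
    return finals

print(complete_case(patients))  # [88] -- P2 and P3 are dropped entirely
print(locf(patients))           # [88, 71, 58] -- all 3 patients retained
```

With 3 patients analyzed under LOCF versus 1 under exclusion, it is easy to see how the choice of method can shift a significance test, which is exactly what the reanalysis of the clavicular fracture trial demonstrated.4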
There is more than merely anecdotal evidence that uncritical acceptance of research findings can harm patients. We are all familiar with the recent metal-on-metal hip implant controversy, in which promising early results were not borne out by later experience. One study, which found a combined clinical and radiographic failure rate of 28% among large-diameter metal-on-metal articulations in total hip arthroplasty, notes that “adequate preclinical trials may have identified some of the shortcomings of this class of implants before the marketing and widespread use of these implants ensued.”6
Is such a volte-face in the published evidence a rare occurrence? Perhaps not. A well-known review of 49 highly cited studies from 2005 found that 45 claimed the intervention under investigation was effective.7 Subsequent investigations contradicted the findings of 7 of those positive studies (16%), and a further 7 (16%) reported effects stronger than those of any of the follow-up studies, which were larger or better controlled. The evidence for almost one-third of the positive studies (14 of 45, or 31%) was therefore wholly or partly overturned. Keep in mind that this figure does not include the 11 positive studies whose findings were never tested in a replication at all.
In all of this, we have to accept that things are rarely black and white. When is the best time to release information? For example, the conclusion for the closed fracture treatment subgroup in the Study to Prospectively Evaluate Reamed Intramedullary Nails in Tibial Fractures (SPRINT) changed only after 800 patients had been enrolled; a smaller trial would have led to an incorrect conclusion for this subgroup.8 Deciding when to release data is thus a delicate matter influenced by many factors, not least time and cost. Many contemporary clinical researchers also operate under pressure to publish.9 And all of us are aware of the kudos that accrue to the first author listed on a manuscript!
Unfortunately, knowing how to identify good, bad, and premature information, and how to filter the relevant from today’s flood of medical publications, is likely to remain an intractable problem for all of us involved in conducting or assessing clinical research for the foreseeable future. This is why the critical appraisal techniques of evidence-based medicine are invaluable.
Starr,10 writing about the advances in fracture repair achieved by the AO (Arbeitsgemeinschaft für Osteosynthesefragen/Association for the Study of Internal Fixation), says, “Fortunately, the surgical pioneers who described early use of these techniques were harsh critics of their own work. The need for better methods and implants was evident.” From its founding, the AO inculcated a culture in which data, positive or negative, was shared.
Perhaps the ‘Golden Age of Orthopedic Surgery’ has already passed. But even with all of the advances in today’s operating room, we should continue to strive to improve what we do, even if only incrementally. As this editorial has illustrated, complacency about clinical research data stands in the way of better patient care. We need to remain inquisitive and questioning in our quest to be better!
Dr. Helfet is Associate Editor of Trauma of this journal; Professor, Department of Orthopedic Surgery, Cornell University Medical College; and Director of the Orthopaedic Trauma Service, at the Hospital for Special Surgery and New York–Presbyterian Hospital, New York, New York. Dr. Hanson is Director and Mr. De Faoite is Education Manager, AO (Arbeitsgemeinschaft für Osteosynthesefragen/Association for the Study of Internal Fixation) Clinical Investigation and Documentation (AOCID), Dübendorf, Switzerland.
Authors’ Disclosure Statement: The authors report no actual or potential conflict of interest in relation to this article.
Am J Orthop. 2013;42(9):399-400. Copyright Frontline Medical Communications Inc. 2013. All rights reserved.
1. Davies E. The shifting debate on trial data transparency. BMJ. 2013;347:f4485.
2. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.Gov: a cross-sectional analysis. PLoS Med. 2009;6(9):e1000144.
3. Gandhi R, Jan M, Smith HN, Mahomed NN, Bhandari M. Comparison of published orthopaedic trauma trials following registration in Clinicaltrials.gov. BMC Musculoskelet Disord. 2011;12:278.
4. Herman A, Botser IB, Tenenbaum S, Chechick A. Intention-to-treat analysis and accounting for missing data in orthopaedic randomized clinical trials. J Bone Joint Surg Am. 2009;91(9):2137-2143.
5. Scharfstein DO, Hogan J, Herman A. On the prevention and analysis of missing data in randomized clinical trials: the state of the art. J Bone Joint Surg Am. 2012;94(suppl 1):80-84.
6. Steele GD, Fehring TK, Odum SM, Dennos AC, Nadaud MC. Early failure of articular surface replacement XL total hip arthroplasty. J Arthroplasty. 2011;26(6 suppl):14-18.
7. Ioannidis JP. Contradicted and initially stronger effects in highly cited clinical research. JAMA. 2005;294(2):218-228.
8. Slobogean GP, Sprague S, Bhandari M. The tactics of large randomized trials. J Bone Joint Surg Am. 2012;94(suppl 1):19-23.
9. Duvivier R, Crocker-Buqué T, Stull MJ. Young doctors and the pressure of publication. Lancet. 2013;381(9876):e10.
10. Starr AJ. Fracture repair: successful advances, persistent problems, and the psychological burden of trauma. J Bone Joint Surg Am. 2008;90(suppl 1):132-137.