Like other adult learners, physicians will seek and retain new knowledge only when motivated to do so (ie, when they have the need to know). As a result, efforts to increase clinicians’ use of the best information at the point of care must focus on providing them with well-validated evidence showing a direct and relevant benefit to their patients (eg, Patient-Oriented Evidence that Matters [POEMs] reviews1).
Previously,2 we described our efforts to identify the relatively few research findings in the medical literature that provide both relevant and valid new information for practicing clinicians. Of 8085 articles published in 85 medical journals over a 6-month period, only 2.6% (211) met these criteria.
These 211 research articles were summarized in issues of the newsletter Evidence-Based Practice and are incorporated into InfoRetriever, an electronic database that uses POEMs to improve information access at the point of care.3 Other services, such as Journal Watch and Best Evidence,4 provide similar reviews of the recent medical literature. However, Journal Watch has no published criteria explaining how articles are chosen for inclusion5 or how the validity of the information is determined. Best Evidence focuses primarily on the validity of research, and its criteria for relevance are not clearly defined.6 This valuing of rigor over relevance may lead to giving clinicians information they do not really need or to omitting important information that they do need. In this exploratory study we measured the overlap in content between Evidence-Based Practice and Best Evidence.
Methods
To evaluate the differences between Best Evidence and Evidence-Based Practice, we compared the articles in ACP Journal Club and the discontinued Evidence-Based Medicine, which are now combined into Best Evidence, with those summarized in Evidence-Based Practice. We chose for comparison the 5 issues of Evidence-Based Practice published between January and May of 1998. Because the time to publication for ACP Journal Club and Evidence-Based Medicine is longer than that for Evidence-Based Practice, we used for comparison 6 bimonthly issues of both ACP Journal Club and Evidence-Based Medicine, starting with the November-December 1997 issues and ending with the November-December 1998 issues.
Results
Over this 5-month period, 85 POEMs were published in Evidence-Based Practice. There was little overlap among the 3 publications: only 11 (12.9%) of these POEMs were also published in either ACP Journal Club or Evidence-Based Medicine. For the comparison in the other direction, 3 bimonthly issues of Evidence-Based Medicine and ACP Journal Club were compared with all issues of Evidence-Based Practice; the results are summarized in the Table. A total of 109 synopses were published in the 2 Best Evidence publications during this time. Most of these synopses (n=82, 75.2%) were not considered POEMs and were not published in Evidence-Based Practice. Of the 49 distinct articles (33 articles were reviewed in both publications) found in these publications but not selected for Evidence-Based Practice, 22 (45%) studied interventions or diseases not relevant to family practice, 15 (31%) would not induce a change in practice, 8 (16%) appeared in journals not covered by Evidence-Based Practice (only 1 of these articles was a POEM), 3 (6%) evaluated disease-oriented outcomes, and 1 (2%) was a POEM that had been earmarked for inclusion in Evidence-Based Practice but was lost in transmission.
Discussion
This small informal study shows the marked difference between the content of Evidence-Based Practice and that of Best Evidence. Readers of Best Evidence alone would miss a substantial amount of high-quality information directly applicable to primary care practice. Although Evidence-Based Practice and Best Evidence use essentially the same validity criteria6,7 to screen preliminary research results, the key difference between them lies in the relevance of the information each source chooses to present. Evidence-Based Practice focuses on patient-oriented evidence that matters, that is, information that must pass 3 relevance criteria: (1) the affected outcome must be one that patients care about (ie, not a disease-oriented outcome); (2) the proposed intervention must be feasible and must address a common problem; and (3) the results presented should require a change in practice on the part of most clinicians.
The concepts embodied in evidence-based medicine have been described as the long-awaited bridge between research and clinical practice. Although the techniques of evidence-based medicine have greatly enhanced and simplified the evaluation of the validity of clinical research, they are not practical for meeting the day-to-day needs of busy clinicians in real-life practice. These techniques are problem driven: the search for information begins with the generation of a specific patient-based question. However, primary care clinicians are usually in a more general keeping-up mode, in which foraging for new information is just as important as hunting for answers to specific patient questions.8