Causes of Diagnostic Errors in Hospitalized Patients
All aspects of the diagnostic process are susceptible to errors. These errors stem from a variety of faulty processes, including failure of the patient to engage with the health care system (eg, due to lack of insurance or transportation, or delay in seeking care); failure in information gathering (eg, missed history or examination findings, ordering of the wrong tests, laboratory errors); failure in information interpretation (eg, misinterpretation of examination findings or test results); inaccurate hypothesis generation (eg, due to suboptimal prioritization or weighing of supporting evidence); and failure in communication (eg, with other team members or with the patient).2,34 Reasons for diagnostic process failures vary widely across different health care settings. While clinician assessment errors (eg, failure to consider, or conversely overweighting of, competing diagnoses) and errors in the testing and monitoring phase (eg, failure to order or follow up diagnostic tests) account for the majority of diagnostic errors in some patient populations, in other settings social factors (eg, poor health literacy, punitive cultural practices) and economic factors (eg, lack of access to appropriate diagnostic tests or to specialty expertise) play a more prominent role.34,35
The Figure describes the relationship between components of the diagnostic process and subsequent outcomes, including diagnostic process failures, diagnostic errors, and the presence or absence of patient harm.2,36,37 It reemphasizes the centrality of the patient in decision-making and the continuous nature of the process. The Figure also illustrates that only a minority of process failures result in diagnostic errors and that an even smaller proportion of diagnostic errors actually lead to patient harm. Conversely, it shows that diagnostic errors can occur without any obvious process-failure points and, similarly, that patient harm can take place in the absence of any evident diagnostic error.36-38 Finally, it highlights the need to incorporate feedback from process failures, diagnostic errors, and favorable and unfavorable patient outcomes to inform future quality improvement efforts and research.
A significant proportion of diagnostic errors are due to system-related vulnerabilities, such as limitations in the availability, adoption, or quality of workforce training, health informatics resources, and diagnostic capabilities. Lack of an institutional culture that promotes safety and transparency also predisposes to diagnostic errors.39,40 The other major domain of process failures relates to cognitive errors in clinician decision-making. Anchoring, confirmation bias, availability bias, and base-rate neglect are some of the common cognitive biases that, along with personality traits (aversion to risk or ambiguity, overconfidence) and affective biases (the influence of emotion on decision-making), often determine the degree of resource utilization and the likelihood of suboptimal diagnostic performance.41,42 Further, implicit biases related to age, race, gender, and sexual orientation contribute to disparities in access to health care and in outcomes.43 In a large number of cases with preventable adverse outcomes, however, multiple interdependent individual and system-related failure points lead to diagnostic error and patient harm.6,32
Challenges in Defining and Measuring Diagnostic Errors
To develop effective, evidence-based interventions that reduce diagnostic errors in hospitalized patients, it is essential to first operationally define, and then accurately measure, diagnostic errors and the process failures that contribute to them, in a standardized way that is reproducible across different settings.6,44 There are several obstacles to this endeavor.
A fundamental problem is that establishing a diagnosis is not a single act but a process. Patterns of symptoms and clinical presentations often differ for the same disease. Information required to make a diagnosis is usually gathered in stages, with the clinician obtaining additional data while considering many possibilities, only 1 of which may ultimately prove correct. Diagnoses evolve over time and across care settings. “The most likely diagnosis” is not always the same as “the final correct diagnosis.” Moreover, the diagnostic process is influenced by patients’ individual clinical courses and preferences over time. All of this makes determination of missed, delayed, or incorrect diagnoses challenging.45,46
For hospitalized patients, the goal is generally to first rule out more serious and acute conditions (eg, pulmonary embolism or stroke), even if their probability is low. Conversely, a diagnosis that appears less consequential if delayed (eg, chronic anemia of unclear etiology) may not be pursued urgently and is often left for outpatient clinicians to evaluate, yet it may still manifest in downstream harm (eg, delayed diagnosis of gastrointestinal malignancy or recurrent admissions for heart failure due to missed iron-deficiency anemia). Therefore, assigning disease likelihoods in hindsight can be highly subjective and not always accurate. This is particularly difficult when clinician and other team deliberations are not recorded in their entirety.47
Another hurdle in the practice of diagnostic medicine is preserving the balance between underdiagnosis and overly aggressive diagnostic evaluation. Conducting laboratory, imaging, or other diagnostic studies without a clear, shared understanding of how they would affect clinical decision-making (eg, use of prostate-specific antigen testing to detect prostate cancer) not only increases costs but can also delay appropriate care. Worse, subsequent unnecessary diagnostic tests and treatments can sometimes lead to serious harm.48,49
Finally, retrospective reviews by clinicians are subject to multiple potential limitations, including poorly defined research questions, poorly developed inclusion and exclusion criteria, and issues related to interrater and intrarater reliability.50 These methodological deficiencies can occur despite adherence to "best practice" guidelines during study planning, execution, and analysis. They further add to the challenge of defining and measuring diagnostic errors.47